Lost in Translations (from Drawing to Building)
“[Digital technologies] are no longer the tools for making: they are primarily tools for thinking.”
Mario Carpo, The Alternative Science of Computation1
“The theme of this article is translation...There are all those other identically prefixed nouns too: transfiguration, transformation, transition, transmigration, transfer, transmission, transmogrification, transmutation, transposition, transubstantiation, transcendence, any of which would sit happily over the blind spot between the drawing and its object, because we can never be quite certain, before the event, how things will travel and will happen to them on the way.”
Robin Evans, Translations from Drawing to Building2
If Robin Evans’ ceaselessly regurgitated Translations from Drawing to Building were to signify anything other than a debunking of the common architectural myths associated with notions of drawing buildings and building drawings, it would be the emergence of a renewed interest in the intellectual value of scrutinizing the very media in which we work. Writing in 1986, Evans is admittedly responding to various cultural dialogues concerning, on the one hand, the fabled autonomy of the drawing, and, on the other, architecture’s abstract disciplinary knowledge.3 We’ve heard this countless times before, and we all know how the story goes. Yet it is thirty-one years later, and here we are, still discussing how to translate between a drawing and a building. It seems, however, that our current preoccupations address a different kind of translation: one not necessarily concerned with whether the drawing itself exists as “the real repository of architectural art.”
Before diving in, let’s make two assumptions: (1) that we live in a world of ubiquitous software; and (2) that the architectural drawing (at least the kind that dreams of becoming a building) is primarily a digital artifact. Putting aside any nostalgic opprobrium this might incite, “drawing” in the case of this essay will refer neither to the intellectual act of disegno, nor to the drafting of lines, whether digital or analog, projective or perspectival. Instead, “drawing” should be understood as a placeholder for the variety of digital file formats readily used in contemporary architectural practice (e.g., DWG, PDF, JPG). Given these assumptions, we can situate the architect as a figure whose principal task is not only to translate between drawing and building, but also to translate across a vast, ever-updating landscape of standardized file-types and graphical user interfaces. While this may appear obvious, perhaps even elementary, a critical discourse surrounding these processes has only recently begun to surface. An understanding of design software has been stifled by a desire to marginalize it as a mere utility, in keeping with a humanist paradigm that prioritizes the hegemony of design intellect over mechanical tools. But the analogy of the digital as a tool for the realization of some architectural a priori is precisely the myth Evans debunks.4 In other words, software is no longer a simple vehicle for communicating that which we blindly create in our heads; rather, it contributes to the formulation of that very thought from the first moment we come into contact with digital media.
It is this reformulation of our relationship to the digital—what some are calling the second digital turn—that I wish to discuss.5 From Autodesk Revit coordination to the management of plug-in applications to the optimization of PDF files for printing, an ever-growing repertoire of software knowledge is crucial to the dissemination of a design project at every scale. Put differently, “we know that as soon as communications leave our lips or fingertips they are immediately diced and rearranged into information packets better suited for streaming digital compression.”6 Despite the ubiquity of the digital, however, I am not suggesting that architects should become programmers, nor that computation be conflated with design. My point, rather, is that because software is so pervasive, there needs to be a deeper understanding of the critical and cultural role it plays in design processes, from education to practice. As Lev Manovich argues in Software Takes Command, “[i]t is, therefore, the right moment to start thinking theoretically about how software is shaping our culture, and how it is shaped by culture in turn.”7
Manovich has widely been regarded as the progenitor of these ideas within the humanities, having been one of the earliest thinkers to synthesize—and historicize—the development of “cultural software”: a loose umbrella term for the computer applications involved in the creation of cultural artifacts, interactive services, aesthetic content, online social interactions, and interactive cultural experiences.8 Each discipline, of course, has its own catalog of go-to software, determined largely by a combination of specified workflows, licensing costs, and industry standards. We architects know the usual suspects: Adobe Photoshop/Illustrator/InDesign, Autodesk AutoCAD/Revit/3d Studio Max/Maya, Rhino 3D, to name a few. Yet while students and professionals use these programs on a daily basis, Manovich’s thesis is that most discussions rarely touch on their impact on cultural conventions or on their historical development.
But for now, let us return to the issue of translation (from drawing to building). If a major task of the contemporary architect is to manage the collection and transmission of various file-types through networks both local and international, then one would expect architectural curricula to address the fundamentals of navigating this digital landscape.9 To a certain extent, design education does teach the technical skills for successfully outputting a desired object from a piece of software, in some cases even covering best practices for compression, optimization, or conversion. These techniques, codified during the early 2000s (a.k.a. the first digital turn), were the result of students tinkering with early modeling software.10 At the time, important themes began to surface, such as the basic discrepancies between OBJ meshes and IGES B-splines, or those between raster and vector images. It should be noted, of course, that this knowledge was shared at the discretion of those early adopters of digital tools who saw the medium as a limitless playground for novel architectural forms. This emphasis on software as a means to an end remains the dominant pedagogical approach. Students now enter the professional field with a vast technical understanding and an ability to translate two-dimensional drawings into three-dimensional models and to output the results in a variety of media. If this has been working well for the better part of the 2010s, what is the value of a deeper immersion into the annals of software’s history and socio-cultural impact?
For one, I would suggest that the presumption of software as a simple tool subservient to our architectural whims is an outdated pedagogical model. A few years back, the assertion that a designed product is only as good as its designer would have yielded nods of agreement from most in the field, a sentiment derived perhaps from the skeuomorphic qualities of CAD’s “paperspace,” which digitized the drafting table. As Building Information Modeling software spreads its reach, there are fewer analogies left. If drafting in CAD is akin to drafting by hand, then BIM is like building the building before you build the building: a simulated act. Not only is it a digital simulation, it is also dependent on data management. Thus, the process of translating from drawing to building today depends less on one’s ability to form an apt analogy of what a “drawing” is or what it represents, and more on one’s expertise in navigating information management systems and coordinating between file-types from Navisworks, Revit, AutoCAD, Revit MEP, RISA 3D, and so on. But what if, instead of learning the actions to perform in each program, there were such a thing as design software theory: a course in which software interfaces, workflows, and file-types are dissected and related to our shared experiences with other cultural media as a whole? This new addition to the architectural curriculum, dedicated to a broader discussion of design software, might tease out new analogies for translating from drawing to building.
Apropos of the above, let’s look at a hypothetical scenario. Take a simple topographical survey: a set of information common to both architects and landscape architects. A traditional approach to modeling such a survey is to extract contour lines at specific intervals, which results in an abstract, stepped representation of the topography. To represent the terrain as a smoother, continuous surface, one would have to interpolate between these lines using common B-spline modifiers, “lofting” or “draping” complex doubly-curved surfaces over the contour splines as if they were formwork. The result would be an approximation of a more realistic terrain. However, let’s say that the designer had taken a course in the history of computer graphics. She would most likely have covered early attempts at representing complex textures using displacement mapping algorithms, such as those used by Pixar’s RenderMan engine. In this process, surfaces are subdivided into triangular meshes whose vertices’ height coordinates correspond to distances from an origin, as dictated by a grayscale “heightmap.” In other words, instead of interpolating offset contour lines, a displacement map applied to a mesh surface recreates a topography from a series of points whose heights are given by pixel grayscale values. On a typical 8-bit image, this allows for up to 256 distinct height values, resulting in a high-fidelity, realistic terrain.
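To make the displacement procedure concrete, the sketch below reads a grayscale image, builds a flat grid mesh with one vertex per pixel, and pushes each vertex upward in proportion to its pixel value. It is a minimal illustration rather than a production workflow: the file name heightmap.png, the vertical scale, and the choice of NumPy and Pillow are assumptions made for the example.

```python
# A minimal sketch of heightmap displacement. The file name and the
# vertical scale below are illustrative assumptions, not a standard.
import numpy as np
from PIL import Image

MAX_HEIGHT = 50.0  # hypothetical vertical exaggeration, in model units

# Read the heightmap and normalize its 8-bit values (0-255) to 0.0-1.0.
pixels = np.asarray(Image.open("heightmap.png").convert("L"), dtype=float) / 255.0
rows, cols = pixels.shape

# Build a flat grid of vertices, one per pixel, then displace each
# vertex's z-coordinate by its corresponding grayscale value.
xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
vertices = np.column_stack([xs.ravel(), ys.ravel(), (pixels * MAX_HEIGHT).ravel()])

# Triangulate the grid: each quad of neighboring pixels becomes two triangles.
faces = []
for r in range(rows - 1):
    for c in range(cols - 1):
        i = r * cols + c
        faces.append((i, i + 1, i + cols))             # upper-left triangle
        faces.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle

print(f"{len(vertices)} vertices, {len(faces)} faces")
```

The resulting vertex and face lists could then be written out to any polygon-mesh format (OBJ, for instance) and imported into a modeling environment for further work.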
Now, the curious part about this hypothetical scenario is that the concepts remain software-independent. Most modeling software today will facilitate both methods (contouring and displacement) to some extent. However, disciplinary bias separates these ways of working: architectural design favors the former, contour-based model, while video game and VFX design favor the latter. Tracing the historical lineage of these biases, we find that displacement methods were far more computationally demanding and were thus reserved for industries with large budgets. Architects, using relatively low-cost software in the early days of CAD, such as Form*Z and Rhino 3D, naturally gravitated towards a simpler, faster, and more abstract method for representing topographies. Indeed, many of our disciplinary proclivities for certain methods trickle down from an era when computation was a precious resource to be conserved. Nevertheless, as Mario Carpo has recently noted, the second digital turn is a shift from the formal vocabulary of calculus to that of infinite datasets: if calculus is a compression tool used to express a complex geometrical order in a simple way, that compression is no longer necessary when processors can simply add up a large dataset to represent the same figure.11 The landscape scenario offers another analogy for this shift: why represent a landscape through approximate curves when you can recreate its topography at a pixel-to-polygon level of resolution?
There are, of course, more complex translations at play in displacing bitmaps into three-dimensional landscapes. For instance, heightmaps are typically produced in one of two ways: (1) by compositing satellite imagery taken at different points in time to calculate elevation data, or (2) by generating random grayscale fields with procedural noise algorithms, such as Perlin noise.12 Both call for interactions with software outside the canon of Adobe and Autodesk, a daunting task for most students I’ve taught. Such an endeavor requires one to be fluent in the kinds of file-formats available and able to translate from one to another with minimal loss of information. More importantly, our hypothetical student would have to choose which software would yield the desired results fastest, instead of forcing a method into a program better suited to another technique.
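For the second, generative route, the sketch below sums several octaves of value noise (a simpler relative of the Perlin noise cited above) into a grayscale field and writes it out as an 8-bit heightmap. The resolution, octave count, and output file name are illustrative assumptions; in practice a dedicated noise library or terrain tool would usually stand in for the hand-rolled interpolation here.

```python
# A minimal sketch of a procedurally generated heightmap using fractal
# value noise: several octaves of smoothed random grids are summed to
# approximate natural-looking terrain. All parameters are illustrative.
import numpy as np
from PIL import Image

def value_noise_octave(size, cells, rng):
    """Bilinearly upsample a coarse (cells+1 x cells+1) random grid to size x size."""
    coarse = rng.random((cells + 1, cells + 1))
    coords = np.linspace(0, cells, size)
    lo = np.floor(coords).astype(int).clip(0, cells - 1)
    t = coords - lo
    return (
        coarse[lo][:, lo] * np.outer(1 - t, 1 - t)
        + coarse[lo][:, lo + 1] * np.outer(1 - t, t)
        + coarse[lo + 1][:, lo] * np.outer(t, 1 - t)
        + coarse[lo + 1][:, lo + 1] * np.outer(t, t)
    )

rng = np.random.default_rng(0)
size, octaves = 512, 5
terrain = np.zeros((size, size))
for o in range(octaves):
    # Each octave doubles the grid frequency and halves its amplitude.
    terrain += value_noise_octave(size, 2 ** (o + 2), rng) / (2 ** o)

# Normalize to 0-255 and save as an 8-bit grayscale heightmap.
terrain = (terrain - terrain.min()) / (terrain.max() - terrain.min())
Image.fromarray((terrain * 255).astype(np.uint8), mode="L").save("terrain_heightmap.png")
```

The resulting image could then feed directly into the displacement sketch above, closing the loop from procedural bitmap to polygonal terrain.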
When I introduce students to Autodesk 3d Studio Max, I usually begin by retelling a short history of modeling software. Typically, this involves an explanation of the OBJ, FBX, and DXF/DWG file standards, a short anecdote about early visual-effects technologies, a brief mention of Form*Z’s influence, and, if we’re willing to get political, Autodesk’s relentless monopolization of the field. After understanding the program’s background, we start to familiarize ourselves with the user interface and tools. Most of the observations outlined above stem from my experience attempting to teach software not only as a tool, but as an extension of our everyday interactions with interfaces. While some of these thoughts may still be in their infancy, I have a growing suspicion that themes from software and internet studies will find their way into design curricula. The need for tutorial-based, step-by-step sequencing of interactions will therefore dwindle in favor of a more expansive approach in which students experiment across a variety of differing media, testing the limits of file-formats, discovering new workflows, and translating their concepts across platforms seamlessly.