The Thoughtful Making of Cyberspaces
The mainstream adoption of augmented reality–enabled contact lenses and related devices could liberate architecture from the practical necessities of physical construction by encoding designs as three-dimensional electronic visions projected onto any observer's eyes. Form would literally become independent of function, likely provoking a wave of unprecedented design risk-taking exacerbated by the chaos of networked distribution. Marriage-destroying remodeling ordeals might be supplanted by 99-cent home façade downloads, while holographic pop-up ads could stop traffic in Times Square.
Such a revolution seems decades away, but the possibilities have become technically plausible enough to merit consideration by future architectural visionaries. Mobile location-based applications such as Foursquare have already begun bridging the gap between the web and physical reality, ticket sales for 3D-enhanced movies are climbing, and University of Washington researcher Babak Amir Parviz has developed augmented reality contact lenses that solve many of the biological interface problems involved in placing a display directly against a wearer's eye. The military, having already spent decades using head-up displays to enhance air combat operations, is a likely early developer of such emerging systems. From there, the technology could trickle down to civilian use through numerous compelling possibilities: automotive applications could reduce traffic accident rates, while educational software might transform a stroll down a city street into an interactive museum of local history. As these augmented reality technologies proliferated, they might eventually (with the exact time frame of adoption dependent on our society's collective level of squeamishness) be integrated seamlessly into everyday experience through direct brain implants. Ultimately, sight, one of the most basic of human senses, would be mediated by the mutations of a massive international network.
The result would be a new type of space unlike anything architects have ever drawn before. In a future where any building's appearance could be designed or changed in true independence from its function, both the historic Modernist union of form and function and the ancient Vitruvian equation compelling architects to balance firmness, commodity, and delight would be shattered. The shifting nature of these electronic visions would undermine the field's even older values of stability and permanence, transforming building designs into fleeting, ephemeral apparitions that could fluctuate in response to changes in society, market demand, or the will of any three-year-old viewer imagining his or her dream house. The results might transcend architecture's known language of space, time, physicality, and even meaningful purpose.
Architects, however, would not face these challenges entirely unprepared. As far back as Le Corbusier's free plan and free facade concepts, modern designers have championed the separation of form and function; newfound reinforced concrete technology allowed Corbusier to carry his buildings on pilotis, or columns, artistically liberating the facade and plan from the weight of gravity. Louis Kahn similarly segregated his buildings' mechanical and electrical systems inside defined corridors, or "servant spaces," to prevent their functional needs from intruding on aesthetic decisions elsewhere. Separatist rebellion continued after the demise of modernism with Robert Venturi and Denise Scott Brown's 1972 publication, Learning from Las Vegas, which advocated a "decorated shed" typology in which ornament was applied atop a basic generic framework, addressing a building's specific informational and emotional needs independent of its universal structural engineering. Memorable symbolism was as crucial to the success of such a postmodern building as its ability to function as basic shelter.
Venturi and Scott Brown's architectural adoption of ornament and pop-culture references was inspired by the exaggerated features of roadside architecture, which was viewed primarily in fleeting glimpses from the highway rather than with the more immersive engagement of appreciating a building at walking pace. The partners rendered their buildings' elevations in bold, bright, attention-grabbing lines designed to be noticed clearly at 55 miles an hour, but an even faster method of viewing architecture arrived near the turn of the millennium. The last decade's spectacular attention-seeking museums erected by Frank Gehry, Zaha Hadid, and other "starchitects" have likely been seen by more people on the Internet than will ever visit the actual buildings. Designers such as Diller Scofidio + Renfro and Jean Nouvel have further blurred the line between the physical and the virtual by integrating displays and projections into their buildings. Although Kazys Varnelis and Anne Friedberg argue in their Networked Publics article, "Place: The Networking of Public Space," that the rise of electronic architecture is unlikely to eliminate physical space entirely in favor of virtual reality, it still seems likely that increasing portions of our buildings will become virtual in the years to come.
Further augmented reality developments, however ambitious, could also be motivated by pragmatic financial incentives. As smaller and more mobile electronics insinuate their way into expanding portions of our everyday lives while the economic recession continues, architects and developers might begin to approach virtual reality as a more affordable means of creating radical facade shapes than physical construction. Liberated from such concerns as finances, engineering, and lawsuits, virtual elements could enable exponentially higher levels of architectural risk-taking than were witnessed even in the competitive icon-generating atmosphere of the early 2000s. Architectural culture would likely respond by placing greater emphasis on superficial exterior style, with buildings potentially adopting the rapid pace at which trends change in faster-moving fields such as fashion.
Eliminating most of the risks and requirements involved in creating architecture would also lower the barriers to entry into the profession. Architecture would shift from a field that demands as many as eight to ten years of schooling and internships before one obtains a practitioner's license to a medium in which anybody with appropriately easy-to-use software could participate. With opportunities to alter the appearance of our built environment no longer restricted to those financially secure enough to hire, or become, architects, such a development could transform architecture into a relevant mainstream cultural medium, and perhaps even achieve the old Modernist holy grail of applying architecture to public social issues.
Alternatively, the democratic proliferation of architectural design could just as easily spell the death of the profession. Just as computer networks upended how music, films, and print media are created and distributed, architecture's transition into a partially digital medium would undoubtedly disrupt the field's existing power structures, as a flood of untrained newcomers projected their most impulsive and willful architectural fantasies while rapidly sharing (not to mention pirating and hacking) other users' designs from around the world. As with graphic designers, who now often struggle to compete with cheaper unschooled competitors, professional architectural practice might recede from a legal requirement on serious building projects to a niche luxury tailored for a few dedicated remaining connoisseurs.
If architects lose their relevance, computer programmers are inadvertently preparing to fill the void. Their first foothold is language: countless architectural job seekers have labored to unearth opportunities to practice their ancient field amid a morass of technology-company listings seeking the many programmers who now style themselves "information architects." With most augmented reality development currently originating in the technology sector, programmers and web designers occupy a natural position from which to pioneer the Internet's expansion beyond the edges of the computer screen and into the three-dimensional outside world.
How will those digital designers react upon leaving their familiar domain? According to Varnelis and Friedberg, current precedents suggest they may be unprepared for the new form of perception: "Although the Web has become graphically more sophisticated, when we visit it we navigate a two-dimensional interface. Corporate presences on the Internet appear to us as brochureware, not as virtual structures that we can enter into and inhabit. Indeed, the Web is curiously nonspatial …" Such an analysis suggests that, when spatial augmented reality finally arrives, the predominantly two-dimensional experience of the web's traditional designers could create a vacuum of unmet demand for the three-dimensional spatial skills that thousands of hungry architects have spent years perfecting. This could become architecture's next opportunity to reassert its relevance to society, and from a Modernist point of view it would indeed be our imperative to join this emerging technological revolution.
Finally, from a less altruistic angle, architecture’s expanding audience and new digital format could potentially revitalize the profession with greater market demand and higher salaries. What if tomorrow’s architects found themselves recruited as high-flying employees of Microsoft or Google, or even dreamed up their own augmented reality start-up companies? After all, the creator of the term “information architect,” Richard Saul Wurman, originally emerged from an architectural background, and our field could still have an opportunity to reclaim that title if the next generation of designers possesses sufficient vision to imagine our future possibilities.
(Note: Several people have compared this article to the premise of the movie Inception. I saw the movie a few days after I had finished my penultimate draft of this article. As I watched the movie, my growing realization that comparisons could be made to my ideas did much to undermine my film-going experience.)