It’s time for the profession to prepare. New software and hardware platforms are emerging that allow immersive environment representation—aka virtual reality, or VR—along with gestural modeling, or the translation of hand movements captured via computer vision into design information. Taken together, these two tools allow designers to visualize and virtually inhabit three-dimensional spatial conditions at “full scale,” where we can do design work with intuitive hand and body motions. The implications for architectural practice are dramatic.
First, it means we need to create new interfaces and custom workflows. The keyboard and mouse take a backseat in the design process. Second—and best of all, in my opinion—these platforms for augmented reality (AR) or VR stand to reengage the designer’s hands in the act of making, digitally.
While the benefits of VR for advancing architectural practice are relatively easy to imagine, combining these 3D immersive environments with the capacity to design with our hands is a little harder to envision. Yet new tools, as complex as wearable sensors or as simple as tablet screens, can capture design data from freeform hand movements. With these new gestural modeling tools, we can craft shapes and spaces using our hands, akin to a sculptor at work.
At CarrierJohnson + CULTURE, we saw this advance as the perfect complement to VR spatial simulations. Recognizing that this rapid surfacing of new AEC hardware and software technologies would dramatically impact practice, we established our Design Technology Group in 2014. These changes inspired us to begin developing advanced computational strategies for “virtualized drawing techniques,” linking simulation and visualization across various software and hardware platforms. By using gestural modeling with AR and VR, we think it’s possible to trigger a paradigm shift in the understanding of drawings “at scale.” We hold that the 3D model is a dynamic drawing, one better utilized within an immersive spatial condition. Designers then see their 3D models not merely as virtualized, three-dimensional objects, but as fully realizable, virtualized constructions.
This has a few big benefits. It removes the burden of scale in the representation of a project. In this way, it allows the act of drawing (modeling) to become entangled with the act of making itself. No longer is a drawn line merely representative of a surface; rather, it is the surface itself. In its essence, this approach moves drawing beyond the controlled scalar representation of space that has been utilized by architects from the Renaissance through the 20th century. Instead, it establishes drawing as a one-to-one exercise.
New Workflows for the Design Process
So how do these ideas work in practice? Generally we think they improve the design process and help both in-house and client teams better visualize our projects. Here’s a rundown of the software and hardware approaches we’ve taken at CarrierJohnson + CULTURE, and the resulting workflows:
1. Virtual Environments
To execute an immersive simulation of space within the practice, practitioners can leverage several existing hardware and software platforms. Using head-mounted displays (HMDs) such as the Oculus Rift DK2, designers can experience designs at “full scale.” At the desktop application level, it is critical to the successful adoption of VR that designers be able to place themselves into the virtual context without learning new software workflows. Gaming engines such as Unreal and Unity, the preferred tools within the VR community, tend to require that additional work, and can leave designers with an unfavorable view of the HMD/VR environment as a design tool. To alleviate this concern, a variety of off-the-shelf software platforms can handle the transition between our most-used design platforms and the VR environments of Fuzor and IrisVR.
In our experience, both of these VR platforms give the designer the ability to translate 3D geometry with surface texture maps into a “walkable” environment experienced through the Oculus Rift DK2 headset. Architects and designers can experience and validate design decisions in this way, immersing themselves in the spatial qualities of the design in a one-to-one environment. The platforms also allow for the creation of multiple design solutions that may be adjusted in situ: natural and artificial lighting, furniture layouts, and materiality are among the features that can be changed within the native VR environment.
These simulated models are typically used within the design process, so they prioritize giving the designer freedom of movement: points of view can be inspected and verified by steering the camera with a linked Xbox 360 gaming controller. For this reason, the models are often rendered in simple tonal values, or with proxies for the materials that will appear in more formal renderings of the project.
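Neither platform exposes its navigation code, but the underlying scheme is simple to sketch. The following Python illustration uses the pygame joystick API to show the general idea of mapping controller stick axes to a walking camera; the axis indices and speed values are assumptions, not Fuzor’s or IrisVR’s actual bindings.

```python
# Minimal sketch of gamepad-driven camera navigation, the idea behind a
# "walkable" model. Uses pygame's joystick API; axis indices and speeds
# are illustrative assumptions, not any VR platform's real bindings.
import math
import pygame

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)  # first connected controller
pad.init()

cam_x, cam_z, cam_yaw = 0.0, 0.0, 0.0  # position (m), heading (rad)
WALK_SPEED, TURN_SPEED = 1.4, 1.5      # walking pace (m/s), turn rate (rad/s)

clock = pygame.time.Clock()
while True:
    dt = clock.tick(60) / 1000.0  # frame time in seconds
    pygame.event.pump()           # keep joystick state fresh

    strafe = pad.get_axis(0)      # left stick: sidestep and advance
    forward = -pad.get_axis(1)    # stick "up" reads negative, so flip it
    cam_yaw += pad.get_axis(2) * TURN_SPEED * dt  # right stick: turn

    # Advance relative to the current heading.
    cam_x += (forward * math.sin(cam_yaw) + strafe * math.cos(cam_yaw)) * WALK_SPEED * dt
    cam_z += (forward * math.cos(cam_yaw) - strafe * math.sin(cam_yaw)) * WALK_SPEED * dt
```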
2. Photorealistic VR
Using similar processes, it is also possible to create immersive environments for client presentations and comment sessions. In these instances, the preference is to represent the project photorealistically. To execute this, we have used current rendering methodologies to create equirectangular panoramic renderings, which are then mapped to a spherical environment within the RoundMe iOS/Android app. The app allows project teams to use the Google Cardboard mobile VR viewer with a smartphone to share the immersive environment of the design.
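Behind that spherical mapping is straightforward trigonometry: an equirectangular image stores one pixel per longitude/latitude pair, so any view direction resolves to an image coordinate. A minimal sketch of the mapping:

```python
# Sketch of the equirectangular mapping: a 360-degree panorama stores
# one pixel per (longitude, latitude) pair, so any unit view direction
# converts to image coordinates with basic trigonometry.
import math

def direction_to_pixel(dx, dy, dz, width, height):
    """Map a unit view direction to (u, v) pixel coordinates in an
    equirectangular panorama (width is twice the height)."""
    lon = math.atan2(dx, dz)   # -pi..pi around the horizon
    lat = math.asin(dy)        # -pi/2..pi/2, up is positive
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# Looking straight ahead (+Z) lands at the image center:
print(direction_to_pixel(0.0, 0.0, 1.0, 4096, 2048))  # (2048.0, 1024.0)
```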
In addition to using RoundMe with Google Cardboard, architects in the office use the Google Photosphere app, or similar 360-degree panorama apps, to create photo spheres of construction sites and share them with the office as well as with project consultants, contractors, and clients. These panoramic photos document the construction process at a deeper level and let project participants who are unable to visit the job site experience a project’s process in a more immersive way.
3. Augmented Conditions
Google Cardboard is not the only instance where technology bridges the gap between our design and the existing conditions of the project’s site at full scale. Using various augmented reality (AR) applications in combination with smartphone sensors, we can geolocate projects in the design software and “publish” the geolocated project to the correct place on the globe, to be viewed with AR software. Designers can then visualize the project in situ, through the screen of a mobile phone or tablet, and perform an augmented “job walk” of the project proposal within its real-world context.
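At its core, that geolocation step reduces to computing the offset between two GPS coordinates: the phone’s fix and the model’s published anchor. The sketch below shows one common approach, a flat-earth approximation that is adequate at building-site scale; the coordinates are illustrative.

```python
# Sketch of the geolocation step: given the phone's GPS fix and the
# model's published anchor coordinate, compute a local east/north
# offset in meters so the model can be drawn in place on screen.
# Flat-earth approximation, fine over the extents of a building site.
import math

EARTH_RADIUS = 6_371_000.0  # meters

def gps_offset_meters(phone_lat, phone_lon, anchor_lat, anchor_lon):
    """East/north distance (m) from the phone to the model anchor."""
    d_lat = math.radians(anchor_lat - phone_lat)
    d_lon = math.radians(anchor_lon - phone_lon)
    north = d_lat * EARTH_RADIUS
    east = d_lon * EARTH_RADIUS * math.cos(math.radians(phone_lat))
    return east, north

# Example: an anchor roughly 111 m north of the phone.
print(gps_offset_meters(32.5440, -117.0300, 32.5450, -117.0300))
```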
While we take advantage of AR software for design visualization in the early siting and concept stages of various projects, we also use mobile building information modeling (BIM) software such as BIM360 Glue/Field to coordinate and visualize 3D BIM models via tablets in the field during construction. These tools have helped facilitate the coordination and resolution of field conflicts, but we have found that embedding 3D information in the drawings themselves during design review and bidding is also helpful. By embedding AR triggers, such as QR codes, into our drawing sets, the AR software allows architects to “tag” drawings and details so that contractors can visualize an expanded three-dimensional model of each. Selectively cropping and publishing specific 3D details has given contractors and subcontractors a much deeper understanding of design intent, facilitating the coordination and resolution of on-site issues.
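Generating the triggers themselves is easy to automate. A sketch using the open-source Python qrcode library follows; the URL scheme and detail numbers are placeholders, not our actual publishing service.

```python
# Sketch of embedding AR triggers in a drawing set: each detail gets a
# QR code pointing at its published 3D model. Uses the open-source
# "qrcode" library (pip install qrcode[pil]); the URLs and sheet/detail
# numbers below are illustrative placeholders.
import qrcode

details = {
    "A5.01-3": "https://models.example.com/project/A5.01-3",
    "A5.02-1": "https://models.example.com/project/A5.02-1",
}

for sheet_detail, url in details.items():
    img = qrcode.make(url)                   # returns a PIL image
    img.save(f"trigger_{sheet_detail}.png")  # drop onto the sheet
```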
4. Gestural Control
Working from the same premise, that architects must be able to draw at a “one-to-one” scale, CarrierJohnson + CULTURE’s Design Technology Group has been exploring various input methodologies. These tools allow designers to engage more fully with the projects being designed. They also exploit the fact that, in recent years, digital models have come to be produced at a much faster pace than physical models.
In an effort to engage designers in the “making” of projects without keyboard or mouse input, we have evaluated motion trackers such as Microsoft Kinect and gestural input devices like Leap Motion. These products create virtualized interactions between the user’s hands and the various software platforms used in the design process. By leveraging the input sensitivity of the Leap Motion sensor and software applications such as Gamewave, we can map custom gestural motions to commonly used tools.
These new gestural workflows allow a novel interaction and creation capability between the designer and the object within the design software. Plug-ins such as Firefly for Grasshopper/Rhino3D and Gamewave for SketchUp have given us the flexibility to create custom definitions and Ruby scripts that take the user’s hands and fingers as input, controlling the cameras within the design software while manipulating geometry more intuitively and interactively than traditional keyboard/mouse input allows.
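As an illustration of what such a mapping looks like in code, the sketch below polls the Leap Motion SDK’s Python bindings, drives a camera from palm position, and treats a pinch as a grab. The two host-side functions are hypothetical stand-ins for whatever the modeling platform exposes; this is a sketch in the spirit of our setups, not a verbatim extract of them.

```python
# Sketch of one gestural mapping: poll the Leap Motion v2 Python
# bindings, drive the camera from palm position, and treat a pinch
# as a "grab." The host-side calls are hypothetical stand-ins.
import time
import Leap  # Leap Motion SDK v2 Python bindings

PINCH_THRESHOLD = 0.8  # 0..1; how closed thumb and finger must be

def set_camera_target(x, y, z):
    """Hypothetical stand-in for the host modeler's camera call
    (e.g., a Firefly definition in Rhino or a Ruby method in SketchUp)."""
    print("camera ->", round(x), round(y), round(z))

def grab_geometry(x, y, z):
    """Hypothetical stand-in for selecting the nearest surface."""
    print("grab at", round(x), round(y), round(z))

controller = Leap.Controller()
while True:
    frame = controller.frame()
    if not frame.hands.is_empty:
        hand = frame.hands[0]
        pos = hand.palm_position  # millimeters above the sensor
        set_camera_target(pos.x, pos.y, pos.z)
        if hand.pinch_strength > PINCH_THRESHOLD:
            grab_geometry(pos.x, pos.y, pos.z)
    time.sleep(1.0 / 60)  # poll near the tracker's frame rate
```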
What Does the Future Hold?
Although VR as a technology has existed for almost 40 years, widespread adoption, and especially application within the AEC industry, is a relatively new phenomenon. At CarrierJohnson + CULTURE, we have had varying levels of success in applying these technologies, integrating them into existing workflows, and establishing new methodologies. We have leveraged several VR and AR technologies in the design and construction processes on select projects in the office. For example, during the design of the West Pedestrian Building at the San Ysidro Land Port of Entry, a comprehensive BIM model was created both to meet client requirements and to execute a fully virtualized construction of the project prior to actual construction. Using the building information model, we were able not only to coordinate building design and systems but also to give the client a deeper understanding of the project through the available VR and AR technologies. The use of these technologies has continued through the construction stage, namely through virtual coordination with subcontractors and augmented real-world/virtual model overlays of necessary design changes in the field, allowing the project to move ahead of schedule and reducing field conflicts, requests for information (RFIs), and project changes.
Yet a number of hurdles remain before every user dons an HMD and works immersed at their desk. The technologies used both in the office and in the field rely not only on the 3D models and various software platforms, but also on the hardware’s ability to accurately gather sensory data, such as user position, orientation, and point of view, to create a believable immersive experience.
While this data successfully immerses users in the VR environment, it leaves them unable to interact with the physical environment itself, including traditional input devices. Modifying spatial conditions within the VR environment therefore requires some means of controlling modeling input. This is a problem for users who want to go deeper than viewing and moving around, and who wish to manipulate objects and the space itself. Until HMD developers put the input devices currently in development into users’ hands, our Design Technology Group has been exploring methodologies that overlap the individual platforms previously discussed to create an immersive modeling environment.
Other challenges in adopting these technologies into a more consistent workflow stem from a lack of native hardware support across the design platforms currently used in practice. Using the software development kits for the Oculus Rift and Leap Motion, we can create custom protocols and methodologies that integrate gesture-based input within the VR environment. This allows the user’s hands to be perceived virtually within the space and to control specific modeling functions. The user can not only create spatial forms via previously learned modeling gestures, but also experience the ramifications of those spatial modifications while inside the immersive environment. This workflow closely approximates what we believe immersive modeling environments will become.
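At the heart of that integration is a coordinate transform: with the hand sensor mounted on the front of the HMD, each tracked hand point must be carried by the current head pose before it can appear in the virtual scene. A sketch of that step follows; the mount offset and pose values are illustrative, and the quaternion convention matches how headset SDKs typically report orientation.

```python
# Sketch of the coordinate step behind HMD-mounted hand tracking: a
# point reported in the head-mounted sensor's frame is carried into
# world space by the headset's pose. Values here are illustrative.
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return (2.0 * np.dot(u, v) * u
            + (w * w - np.dot(u, u)) * v
            + 2.0 * w * np.cross(u, v))

def sensor_to_world(p_sensor, head_pos, head_quat, mount_offset):
    """Hand point (sensor frame, meters) -> world frame."""
    p_head = p_sensor + mount_offset  # sensor sits ahead of the eyes
    return head_pos + quat_rotate(head_quat, p_head)

# A hand 30 cm in front of a sensor mounted 8 cm ahead of the head,
# with the head 1.7 m up and an identity (forward-facing) orientation:
p = sensor_to_world(np.array([0.0, 0.0, -0.30]),
                    np.array([0.0, 1.70, 0.0]),
                    np.array([1.0, 0.0, 0.0, 0.0]),  # w, x, y, z
                    np.array([0.0, 0.0, -0.08]))
print(p)  # -> [ 0.    1.7  -0.38]
```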
These challenges are significant but hardly insurmountable. And let’s face it: The acts of drawing and modeling have evolved rather impressively since the Renaissance, when perspective studies were the primary method of exploring, representing, and communicating architectural ideas. When considering VR and gestural tools, it helps to position the act of drawing in the 21st century as a continuum of methodologies that began in the Renaissance and moved through the 19th century (descriptive geometry) to the early 20th century (axonometric projection) and the late 20th century (diagrammatic notation). Today, it stands at the crossroads of the database as drawing.
Through this reading of the evolution of architectural representation from drawing to immersive modeling, we posit that the role of the representational tool in architectural design must be understood anew in an increasingly virtual world.
Casey Mahon is a designer and the director of the Design Technology Group at CarrierJohnson + CULTURE, a leading global architecture, design and strategic branding practice known for innovative building, living, and communications solutions, reflecting the unique three-dimensional brand opportunities for each situation. Mahon is also an adjunct faculty member at Woodbury University.