Nowadays, architectural work is closely linked to technology and the advances that emerge in this field. In that sense, various aspects of artificial intelligence have been widely discussed. The reality is that, rather than plunging into a competition of capabilities between architects and AI (with nuances that could evoke the ideology of the 19th-century English Luddites), advances in this field can be seen as tools that optimize processes and open new perspectives within the profession.
In this context, architecture spans various stages, from early phases in which data decisively shapes built environments to later ones in which generative design tools play a fundamental role in spatial configuration. Throughout this process, visualization plays the crucial role of expressing graphically what is being designed. Iterating on visualizations and evaluating each result is therefore vital, not only to communicate ideas but also to interpret their aesthetic qualities.
These visualization processes typically rely on "traditional" tools, both individual and collective: drafting or modeling software, rendering and environment software, and finally post-production. This usually consumes considerable time, with constant cycles of iteration before reaching a final result. In that sense, AI-powered visualization tools have emerged as exciting options.
Within the realm of apps for architects, artificial intelligence systems such as DALL-E, Midjourney, and Adobe Firefly have gained prominence for their widespread use, characterized by a reciprocal interaction between the user and the AI. In this process, users describe what they intend to visualize through prompts. When asked, ChatGPT 3.5 defines a prompt as follows:
AD / Enrique Tovar: In the context of AI and visualization, what is a prompt?
ChatGPT 3.5: In the context of AI and visualization, a "prompt" typically refers to a specific instruction or input provided to an AI model to generate a response or output. It serves as a way for users to interact with and guide the AI system. The prompt can take various forms depending on the application, and it's crucial in influencing the output of the AI model.
Regardless of the AI system used for visualization, the results obtained are remarkably interesting and have enabled a new class of processes. These now depend heavily on the author's ability to connect architectural concepts through words, on parameter settings, on the length and level of detail or specificity of the prompt, and on an emerging skill for interacting with the AI to achieve the expected results. It is therefore crucial to consider not only the quality of the image or outcome but also the prompt and parameters used to produce it.
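To make this more concrete, below is a minimal sketch of what a prompt-driven visualization request can look like when scripted, assuming the OpenAI Python SDK and the DALL-E 3 image model; the prompt text and parameter values are illustrative, not a recommended recipe, and other systems expose their own interfaces with comparable settings.

# Minimal sketch of a prompt-driven visualization request, assuming the
# OpenAI Python SDK (pip install openai) and an API key in the environment.
# The prompt text, model name, and parameters are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt combines architectural concepts, materials, atmosphere, and style,
# the kind of descriptive specificity the paragraph above refers to.
prompt = (
    "Exterior view of a small concrete pavilion in a pine forest, "
    "late afternoon light, exposed board-formed concrete, large glass openings, "
    "photorealistic architectural visualization"
)

result = client.images.generate(
    model="dall-e-3",      # image model; other tools expose similar parameters
    prompt=prompt,
    size="1024x1024",      # output resolution
    n=1,                   # number of variations to generate
)

print(result.data[0].url)  # URL of the generated image

Changing a single phrase in the prompt, for example the lighting or the material palette, and regenerating is the iteration loop the article describes: the skill lies as much in the wording as in evaluating the results.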
Illustrating the diverse approaches to the subject, a number of emerging tools exemplify the capabilities of AI in visualization and design. Looking ahead, AI tools may streamline workflows, predict design decisions, foster creativity, and provide fresh perspectives within the expansive realm of architecture.
From a rendering perspective, tools such as ArkoAI may reduce the time spent on the design and rendering process in the Architecture, Engineering, and Construction (AEC) industry. The seamless integration between these artificial intelligence systems and AEC software points toward a transformation of the conventional design process: with a few words, a building design can be rendered with different environments, lighting, and styles. In this context, generative AI produces new design ideas or renders rapidly and directly from models developed in software such as Revit, SketchUp, and Rhino.
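As a rough illustration of that model-to-render workflow, the sketch below sends a viewport image exported from the modeling tool, together with a text prompt, to an AI rendering service. The endpoint URL, field names, and parameters are hypothetical assumptions made for illustration; ArkoAI and similar tools provide their own plugins and interfaces rather than this generic call.

# Hypothetical sketch of an image-to-image AI rendering request.
# The endpoint and field names are assumptions for illustration only;
# actual services (ArkoAI, etc.) ship their own plugins and APIs.
import requests

with open("sketchup_viewport.png", "rb") as viewport:    # image exported from the model
    response = requests.post(
        "https://api.example-render.ai/v1/render",        # hypothetical endpoint
        files={"image": viewport},
        data={
            "prompt": "overcast daylight, Scandinavian interior, oak flooring, photorealistic",
            "strength": 0.6,  # how far the AI may depart from the source geometry
        },
        timeout=120,
    )

response.raise_for_status()
with open("rendered_view.png", "wb") as output:           # save the AI-rendered view
    output.write(response.content)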
Beyond architectural design, architects are, and will continue to be, involved in other design processes, such as creating furniture that complements the interiors they design. For instance, Philippe Starck led the design of a production chair created with AI in collaboration with humans. In partnership with Autodesk Research and the contemporary furniture maker Kartell, Starck used generative design in Autodesk Fusion to create the chair, aptly named “A.I.”. To bring the chair to life, Starck contributed the overall vision, while Fusion's generative design algorithms suggested various design options that met the manufacturing requirements.
Overall, within these applications and proposals for the use of AI, we are witnessing a multidimensional technological revolution that, as in other industries adjacent to architecture, is still taking shape, while discussions about its risks and benefits continue.
It is crucial that, in this process, we experiment with these tools to understand their capabilities and potential, as well as to identify aspects that may prove less positive. What is certain is that the generational diversity within our society, coupled with diverse perspectives on the subject, is likely to open up new horizons, ones for which we have not yet even defined a prompt.