As AI has become more accessible, examples of its diverse applications have multiplied. Prominent among these are generative AI systems, which excel at “creating” images from prompts, many notable for their composition and vividness. These systems are neural networks with billions of parameters, trained on datasets of text–image pairs to produce images from natural-language descriptions. Although the question Turing posed in 1950, “Can machines think?”, still recurs today, the images and text these systems generate are grounded in existing information, which limits their capabilities.
What has surprised many is how close these systems appear to passing the Turing test, and how closely their visualizations resemble what a skilled architect can produce. In this context, while debate persists in the architectural community about whether AI can process architectural concepts, this article explores how it interprets materials when developing such visual representations. To that end, a single prompt was developed for this experiment, with materiality as its only variable, and the resulting outputs were examined.
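As a minimal sketch of this design (not the study's actual pipeline), the setup can be expressed as a fixed prompt template in which only the material term changes, so that differences among the generated images can be attributed to materiality alone. The template wording and the material list below are illustrative assumptions, not the prompt or corpus used in the experiment.

```python
# Sketch of the experimental setup: one fixed prompt template whose
# only variable is the material term. Template wording and material
# list are hypothetical placeholders.

PROMPT_TEMPLATE = (
    "Exterior view of a single-family house built primarily of {material}, "
    "photorealistic architectural visualization"
)

# Hypothetical set of materials substituted into the template.
MATERIALS = ["exposed concrete", "brick", "timber", "glass", "rammed earth"]


def build_prompts(template: str, materials: list[str]) -> list[str]:
    """Return one prompt per material, holding every other word fixed."""
    return [template.format(material=m) for m in materials]


if __name__ == "__main__":
    for prompt in build_prompts(PROMPT_TEMPLATE, MATERIALS):
        # Each prompt would be submitted to the same text-to-image model,
        # isolating materiality as the independent variable.
        print(prompt)
```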