We are still in the foothills of generative design’s capabilities. Some of the most intriguing developments are being made in conjunction with another digital technology — 3D printing — which offers the compelling prospect of fully integrating digital design and construction. Foster + Partners, for example, has developed analysis and control tools that link design with a robotic form of Fused Deposition Modelling that extrudes material not in flat layers but along paths in 3D space. Although the method compromises slightly on accuracy, it opens up the possibility of giving shape to far more complex, digitally designed forms. According to Foster’s Jan Dierckx, another member of the Specialist Modelling Group, “This changes the design method considerably. Whereas for traditional printing any model can be sliced into 2D layers and printed automatically, now there is an opportunity to explicitly design and optimize the structure.”
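Dierckx’s point can be made concrete with a small sketch. The contrast below between conventional planar slicing and an explicitly designed spatial toolpath is illustrative only, written under assumed parameters rather than Foster + Partners’ actual tooling; all function and class names are invented.

```python
# Minimal sketch (not Foster + Partners' tooling): contrast conventional planar
# slicing, where a model is cut into flat layers, with an explicitly designed
# spatial toolpath for robotic extrusion. All names and values are illustrative.
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

def planar_layer_heights(model_height_mm: float, layer_height_mm: float = 0.2) -> List[float]:
    """Conventional slicing: the printer only needs a stack of flat z-levels."""
    n_layers = int(model_height_mm / layer_height_mm)
    return [round((i + 1) * layer_height_mm, 3) for i in range(n_layers)]

@dataclass
class SpatialToolpath:
    """Robotic extrusion: the designer specifies the deposition path in 3D space."""
    points: List[Point3D]          # ordered way-points the robot arm follows
    bead_width_mm: float = 3.0     # extrusion width, a structural design variable

    def length_mm(self) -> float:
        return sum(
            ((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2) ** 0.5
            for (x1, y1, z1), (x2, y2, z2) in zip(self.points, self.points[1:])
        )

# A diagonal strut printed "in the air", which cannot be expressed as flat layers alone.
strut = SpatialToolpath(points=[(0, 0, 0), (50, 0, 40), (100, 0, 80)])
print(len(planar_layer_heights(80)), "planar layers vs one designed path of",
      round(strut.length_mm(), 1), "mm")
```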
Many construction components are a certain shape and size because of manufacturing and logistics constraints, points out Bengtson. If they could be 3D-printed on site using the minimum amount of material needed to meet strength and other requirements, the potential savings on materials and transport would be huge. “That’s why this technique is so interesting and why it will be so disruptive: that’s when we can really start to save the world.”
Mining for data
If one thing unites all of these digital modelling tools, it is their insatiable thirst for data. Matterport, for example, hosts a vast amount of geospatial data on Amazon web servers — over 650,000 3D models, primarily of real estate. The owners of Matterport are exploring the use of AI to automatically categorize and interpret the spaces and objects in this database. “In real estate it might recognize when a lounge is a lounge or a kitchen is a kitchen by recognizing the standard objects in a room type,” explains Karl Pallas, co-director at Immerse UK, which sells Matterport in the UK. “They are mining the data, and the more they have, the more they can do with it.”
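To illustrate the kind of interpretation Pallas describes, the sketch below assumes an upstream object detector has already labelled the contents of a scanned space and simply maps those labels to a likely room type. The room signatures and scoring rule are invented for illustration; this is not Matterport’s actual pipeline.

```python
# Illustrative only, not Matterport's pipeline. Assumes an upstream object
# detector has already produced labels for a scanned space; here we map the
# detected objects to the most plausible room type by counting matches.
from collections import Counter
from typing import Dict, List

# Hypothetical "signature" objects per room type.
ROOM_SIGNATURES: Dict[str, set] = {
    "kitchen": {"oven", "sink", "fridge", "hob", "kettle"},
    "lounge": {"sofa", "tv", "coffee table", "armchair"},
    "bedroom": {"bed", "wardrobe", "bedside table"},
}

def classify_room(detected_objects: List[str]) -> str:
    """Return the room type whose signature overlaps most with the detections."""
    counts = Counter(obj.lower() for obj in detected_objects)
    scores = {
        room: sum(counts[obj] for obj in signature)
        for room, signature in ROOM_SIGNATURES.items()
    }
    best_room, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_room if best_score > 0 else "unknown"

print(classify_room(["Sofa", "TV", "coffee table", "plant"]))   # -> lounge
print(classify_room(["oven", "sink", "fridge"]))                # -> kitchen
```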
Likewise, BIM models will become more intelligent as wider trends such as big data and the internet of things filter through to the design process. Data extracted from internet-connected sensors will give designers unprecedented access to metrics on building use, performance and user behaviour. At the same time, archived project data, including 3D models, 2D drawings, images and text, can be interrogated for insights that can then be transferred to new projects.
Gaming company Ubisoft has trained an AI program to spot when its coders are about to make a mistake and alert them. Its R&D division fed the program ten years of code from its software library so that it could learn from historical mistakes and predict when a coder is about to repeat them. Why couldn’t this work for building design too? Post-occupancy evaluation data could be mined along with archived project data to create a repository of solutions and flag up potential issues early in the process.
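A hedged sketch of how the same idea might translate to buildings: archived project records and post-occupancy issues are mined, and issues that recur on projects sharing features with a new design are surfaced early. The data, feature tags and overlap threshold below are entirely invented.

```python
# A sketch of the concept, not any firm's actual system: mine archived project
# records and post-occupancy issues, then flag issues that tend to recur on
# projects sharing features with a new design. All data here is invented.
from collections import Counter
from typing import List, Set

HISTORICAL_PROJECTS = [
    {"features": {"atrium", "naturally ventilated", "school"},
     "issues": ["summer overheating in atrium"]},
    {"features": {"atrium", "curtain walling", "office"},
     "issues": ["summer overheating in atrium", "glare at reception"]},
    {"features": {"deep plan", "office", "curtain walling"},
     "issues": ["poor daylight in core zones"]},
]

def flag_likely_issues(new_design: Set[str], min_overlap: int = 2) -> List[str]:
    """Return recurring issues from past projects sharing >= min_overlap features."""
    flagged: Counter = Counter()
    for project in HISTORICAL_PROJECTS:
        if len(project["features"] & new_design) >= min_overlap:
            flagged.update(project["issues"])
    # Most frequently recurring issues first.
    return [issue for issue, _ in flagged.most_common()]

print(flag_likely_issues({"atrium", "curtain walling", "school"}))
# -> ['summer overheating in atrium', 'glare at reception']
```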
In time, building models could be linked together to provide ever more complex simulations of the built environment. “On large schemes, we build up BIM models over multiple buildings until we have coverage over a quite considerable area,” says Nick Edwards, principal at architect BDP. “In other areas we might work with landowners who own an estate — as each individual building is mapped and data coordinated, a picture is built up.”
As these models become broader in scope, they become increasingly powerful tools, removing the need to duplicate survey information and providing a more comprehensive understanding of how buildings and their users interrelate. “We could overlay pedestrian and cycle routes with air-pollution mapping and sunlight and noise to see how well public spaces work,” says Edwards. “The more you can feed in, the more you can extract.”
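A toy version of the overlay Edwards describes might look like the following, with all grid values and weights invented: several co-registered data layers are combined into a single indicative score for how well a public space performs.

```python
# Toy overlay of the kind Edwards describes, with entirely invented numbers:
# combine normalised layers (footfall, air quality, sunlight, quietness) on the
# same grid into a single indicative score for how well a public space works.
from typing import Dict, List

Grid = List[List[float]]  # values assumed normalised to the 0..1 range

def overlay(layers: Dict[str, Grid], weights: Dict[str, float]) -> Grid:
    """Weighted sum of co-registered layers; higher means a better-performing space."""
    first = next(iter(layers.values()))
    rows, cols = len(first), len(first[0])
    return [
        [sum(weights[name] * layers[name][r][c] for name in layers) for c in range(cols)]
        for r in range(rows)
    ]

layers = {
    "footfall":    [[0.2, 0.8], [0.5, 0.9]],
    "air_quality": [[0.9, 0.4], [0.7, 0.3]],  # 1.0 = cleanest air
    "sunlight":    [[0.6, 0.7], [0.4, 0.8]],
    "quietness":   [[0.8, 0.3], [0.6, 0.2]],  # 1.0 = least noise
}
weights = {"footfall": 0.4, "air_quality": 0.25, "sunlight": 0.2, "quietness": 0.15}
print(overlay(layers, weights))
```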
Cities including London, Hamburg, Singapore and Helsinki are all developing intelligent 3D models of the urban realm to help streamline planning and design. In Chicago, WSP is developing a model that interlinks with existing engineering design software, to display not just buildings but transportation and other infrastructure projects.
One day soon these models could be expanded to include live data feeds on anything from vehicle traffic to building performance to air pollution, offering an unprecedented opportunity for city authorities to monitor and tweak systems, and for developers and architects to test out the impact of schemes on their surroundings. “It is not an unrealistic expectation,” says Edwards, “but it needs more lead from the public sector because, by default, the private sector is in competition, so many of the projects we work on involve signing non-disclosure agreements. There’s a lot of protection about data, yet if you can unlock some of that in a collective way, there is benefit for everyone — not just teams trying to progress projects, but society as a whole.”
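As a rough sketch of how such live feeds might plug into a city model, the snippet below simulates a stream of readings updating attributes on model elements; the feed, metrics and element IDs are hypothetical rather than any real API.

```python
# Hypothetical sketch of wiring a live feed into a city model: a stream of
# sensor readings updates attributes on model elements so planners can see
# current conditions. The feed and element IDs are simulated, not a real API.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CityElement:
    element_id: str
    kind: str                                              # e.g. "road_segment", "building"
    live: Dict[str, float] = field(default_factory=dict)   # latest readings by metric name

model = {
    "road_42": CityElement("road_42", "road_segment"),
    "block_7": CityElement("block_7", "building"),
}

# Simulated live feed; in practice this might come from sensors or an open-data service.
feed = [
    {"element_id": "road_42", "metric": "no2_ugm3", "value": 38.0},
    {"element_id": "road_42", "metric": "vehicles_per_hour", "value": 640.0},
    {"element_id": "block_7", "metric": "energy_kwh", "value": 112.5},
]

for reading in feed:
    element = model.get(reading["element_id"])
    if element is not None:
        element.live[reading["metric"]] = reading["value"]

print(model["road_42"].live)  # {'no2_ugm3': 38.0, 'vehicles_per_hour': 640.0}
```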
In an increasingly digital world, computers will complement rather than replace human intelligence. Designers will be marshalling ever more powerful tools and interpreting and refining the results they produce. “There still needs to be judgement because ‘computer says’ doesn’t necessarily mean it’s right,” points out Edwards. “Sometimes things can run counter to each other: in a city people might want more public space, but that can push them further apart and reduce the efficiency of local services. Sometimes a smaller amount of space that’s better maintained can produce a better outcome. The more tools we have to make those judgements, the better.”
When computers can take on the heavy lifting, design time will be freed up to focus on areas where human insight can genuinely bring value. “We need to think about what’s unique to us,” says Bengtson. “Computers will struggle for many years to understand feelings, empathy, fantasy — that’s what we should add.”
Article originally published on www.the-possible.com