Artificial Intelligence (AI) is not a structural engineering trend; it’s a transformative technology reshaping all industries. Some might categorize AI as the latest buzzword, while others believe it will drive innovation and create new ways to analyze, design + optimize engineering solutions. For structural engineers and the A/E/C community at large, the question is not whether AI will impact the industry, but rather how we will adapt + integrate these advancements in ways that are appropriate, responsible, and do not abdicate our obligation to be in responsible charge.

ai – an evolution

ai, particularly in the form of machine learning, is not a new concept. Machine learning has been used for decades, enabling computers to identify patterns + make predictions based on data. Twenty-four years ago, a friend of mine in grad school was building neural networks for optimizing culvert design. 

Artificial General Intelligence (we’ll say big AI, as opposed to little ai) – the ability for machines to perform any intellectual task that a human can – has not arrived yet. We’re at an inflection point between the two where current advancements in machine learning, coupled with the enormous amounts of digitized data + processing power, have given rise to tools that can generate content + respond in ways that appear intelligent and knowledgeable, or excel at specific tasks in ways that mimic or exceed human capabilities. 

ai in engineering

For generating blog posts (like this one) and non-technical or non-critical content, the distinction between appearing intelligent and being intelligent may not be relevant.
But for the practice of structural engineering, this distinction is critical.
How does ai fit into our industry? Is ai a technology we can employ today for: 

  • Generating meeting minutes? 
  • Proposal generation? 
  • Quality control? 
  • Technical reports? 
  • Research? 
  • Analysis + design? 
  • Construction documents?

If not today, what about tomorrow? 

In the short term, the industry faces significant hurdles in adoption + adaptation. In my opinion, the biggest hurdle isn’t technical; it’s getting our arms around the capabilities, limitations, and applications of these tools, and how they align with the needs + obligations of the industry. This is particularly challenging because the rapid evolution of ai changes those capabilities, limitations + potential applications on a timescale much different from the one at which the A/E/C industry typically operates.

For example, two years ago I believed that the behemoth AE firms had insurmountable advantages in this space due to their ability to train or fine-tune models on their extensive proprietary data and their ability to dedicate resources. Around that time, Microsoft Copilot was made available to the “public,” provided you committed to a minimum of 300 seats. Would this finally be the prophesied “death of the midsize firm?”

Then, a year later, Copilot was available for purchase by everyone. Alternative technologies matured that could improve the accuracy + performance of models for specific use cases without the time + processing power previously required for training + fine-tuning. Now, every data-centric application we “own” has built-in ai (perhaps as a separate SaaS tier, or under murky data privacy policies). Larger firms still have more data at their disposal, but the barrier to entry has lowered.

ai in the A/E/C industry – now + in the future 

The current landscape is diverse, fragmented, dynamically changing, and filled with promises. There are now literally thousands of ai tools available to the A/E/C industry. With the barrier to entry significantly lowered, how each firm approaches implementation of these technologies will be inconsistent. Some firms have mature policies around ai and build + train their own models on their own private networks. Some firms are comfortable utilizing free applications where data may not be protected. In addition to different security and privacy approaches, there are also differing levels of trust + content verification applied to ai output. This may impact the consistency + accuracy of solutions, and may have ethical implications as well.

As we embrace ai, we must not only remain aware of its limitations, but also our own. Anthropomorphism—attributing human-like intelligence to ai—can lead to overreliance or misinterpretation of results. People are creative and already good at creating content, but perhaps not as good at auditing + reviewing content created by others. Confidence built upon using large language models, trained on vast amounts of public data, to perform general tasks may be misplaced when applying ai to specific technical tasks using smaller unstructured data sets.  

I expect that over the next few years, the use and adoption of ai in engineering will be a bit turbulent, without consistent application or consistent industry-wide expectations for its responsible use. The tools are amazing + powerful, so I believe few will sit on the sidelines and wait for things to settle down before jumping in. But at the same time, the low barrier to entry of some of these tools means that just because someone claims to be an early adopter doesn’t mean they are a knowledgeable or advanced user of this technology.

It will be critical to ask partners how they use ai, whether they have an ai policy, whether that policy is current + adhered to, and, most importantly, whether they are using ai in a manner consistent with your expectations and your clients’ expectations.
