Aon’s Adam Peckman delves into the risks of underestimating AI and how organisations can get on the front foot

Everyone is talking about AI, but perhaps not enough about AI risks.

Adam Peckman, global practice leader of cyber risk consulting at Aon, told StrategicRISK that while there is no doubt that companies across all industries are focusing on the opportunities associated with AI, especially generative AI, this focus is not being matched by an equal effort to assess AI risks.

“There appears to be an imbalance between the strategic importance being placed on the role of AI in driving competitiveness and creating new growth opportunities, and the potential risks that the adoption of, and dependency on, AI poses to companies,” said Peckman.

“In our most recent survey of over 2,800 business leaders on the top risk issues facing their organisations, the risks associated with artificial intelligence were ranked at number 49 out of 60.

“At the same time, the overwhelming majority of chief risk officers and insurance managers (98%) are telling us that the velocity of new AI risks is outpacing legacy approaches to managing enterprise risks,” he said.

What are the potential implications of underestimating the threat of AI?

Peckman said that Aon is observing threat actors using AI, particularly generative AI, to improve the efficiency and efficacy of their hacking campaigns.

“Threat actors are using GenAI to create more realistic deepfakes during social engineering campaigns, and new LLM tools, such as WormGPT or FraudGPT, are being used to identify security vulnerabilities and create malware programmes in a more timely and scalable manner,” he said.

“Unfortunately, this trend has been coupled with the rise of ‘Shadow AI’: employees using AI ‘outside’ the normal security, legal, and risk approval processes.

“The imperative to capture the economic opportunities associated with AI has, on occasion, led companies to rush AI technologies to market, or employees to experiment with AI without approval or the correct security checks, potentially exposing systems and data to privacy, regulatory, and security risks. The result is that a large, unsecured AI attack surface has emerged for threat actors to target.”

However, Peckman said, AI risks are not only a security issue. “There are potentially far-reaching consequences for the velocity and impact of other enterprise risk topics.”

Peckman said these include known incidents and claims in areas such as directors’ and officers’ risks, where there can be allegations of ‘AI washing’: misleading or untruthful statements about uses of AI and the associated risks.

They also include employment practices risks, where AI models inadvertently introduce bias or discrimination into people management, and media liability and libel risks, where liabilities arise when AI tools produce false and reputation-harming information about a person.

Beyond this, there are intellectual property risks, covering unintended breaches of intellectual property rights or the accidental disclosure of trade secrets through the use of unsecured AI tools, and professional liability risks, where false or misleading information is introduced into work products or advice for clients through the use of AI models.

And finally, there are people and casualty risks, with injuries arising from uses of AI in the workplace, such as automated machinery, robots, and unmanned vehicles.

“Compounding the above challenges is the issue of ‘Silent AI’ across legacy insurance programmes: AI exposures that have not yet been identified or tested,” he said.

How can awareness be increased?

“Risk leaders cannot afford to wait until these artificial intelligence initiatives ‘go live’ before investigating the risk and insurance implications,” said Peckman.

“Risk managers play a crucial role in providing analysis and advice to the various teams working on AI projects at the ‘digital frontier’ of their companies — the place of experimentation and adoption of new technologies, occurring three to five years ahead of business-as-usual.”

However, Peckman continued, risk managers are not currently involved in these initiatives at an early enough stage.

As a result, it is often only when these AI products or services are close to being launched, or have been launched, that issues around security, privacy, compliance, or other business risks surface and risk leaders are asked to get involved. This compounds the issues of ‘Shadow AI’ and ‘Silent AI’, exposing the company to uninsured and unbudgeted losses.

“How do we solve this? Risk managers need to be part of their company’s AI committee, which can advise on the risks and insurance implications of AI projects at a more formative stage. This involvement can reduce the potential for rework, project abandonment or, worse, liability for the company and its executives,” said Peckman.

“Risk managers can play a constructive role in directing the company’s AI agenda. They can contextualise and rationalise AI-related risks and provide strategic guidance to inform better operational and capital decision-making,” he added.