More companies are implementing AI in their operations, but the risks, and the business of insuring against them, are about to get a lot more complicated
Innovations such as self-driving vehicles and artificial intelligence-backed systems that can do everything from accounting to writing news stories are designed to make our lives easier. But they could cause a headache for insurance managers.
One reason is that these developments are going to cause a shift from individual accountability to corporate-level liability.
Think about it from the perspective of an accountant. Whereas once an individual’s professional indemnity policy would have responded to a negligence claim, it is now possible to imagine the liability for an error falling at the door of the software provider whose AI has made an assumption too far.
But what does that mean for risk managers? Legal expert Jonathan Moss says it’s helpful to think about it through the lens of self-driving cars.
Driving change
Suppose your car is in autonomous mode and suffers a malfunction, sending you ploughing into a cow standing in the road. “The rules of negligence, whether the driver was responsible for an act or omission, do not apply to that,” says Moss, who is the global head of transport for law firm DWF. “It would be a systems or software failing.”
But that could create a conflict between policies as to which one responds. And that’s assuming there is sufficient guidance from case law for a judge even to understand the complexity of the situation.
“One of the exposures to liability is that the courts will apply the usual negligence rules to the situation, but they may not look at it as a product liability case,” Moss explains.
“Similarly, the individual driver will have a motor policy, but will not have a product liability policy.”
“That will also cause a headache for insurers and reinsurers about which policy responds and what happens when there’s no policy covering that particular angle,” Moss says.
The risk manager’s role
At a corporate level, that headache will be handled by risk managers.
Not only that, but Moss questions whether, with very little case law to rely upon, courts are even equipped to answer the questions over liability created by the new technology.
“Certain judges may have little interest in technology or will need specialist guidance,” Moss says. “So they will have to rely on IT experts to fully understand what’s happened and avoid making the wrong judgment.”
Without a body of case law to fall back on, Moss says parliamentarians may need to step in.
“I think there needs to be a statute which enshrines various aspects of strict liability or determines when a manufacturer is responsible and when the software provider is responsible for incidents,” he says.
That would give judges a statutory foundation to work from, which the courts could then interpret. It could even see the introduction of new driving tests that include training in the use of autonomous vehicles.
What to do?
For risk managers, Moss says the key thing is to stay on top of the technology being employed by their firm and to communicate those advancements to their insurance brokers.
“They should ask their broker about how insurers view the relationship between an individual’s negligence and product failure,” he says.
“In the event that brokers advise that product failure has a part to play in an assured’s or a corporate’s business, then they should sniff out programmes which provide them protection in the event of product failure.”
Finally, Moss says, the thing a lot of people are overlooking is the firm’s responsibility if that new technology is hacked.
“What happens if you’re operating a vessel at sea and somebody with criminal intent decides to hack your software and that causes the ship to have an accident?”
“They should be asking their brokers what protections they’ve got for cyber hacking.”