Panel warns that AI could lead to social unrest if businesses are not prepared to upskill existing workers
Risk managers have been warned that the decade to come will see a fundamental shift in the way businesses operate, with human interaction needed in an ever-decreasing number of roles.
Delegates gathered in London for the Airmic Risk Forum were told that, as artificial intelligence (AI) continues to evolve, there could be a major displacement of human employment.
Daniel Hulme, CEO and chief AI officer at Satalia, said that unless governments and businesses recognised the rising influence of AI and began to look at how to mitigate the risks, the threat of major societal disruption is real.
He said: “In 18 months AI will move to the point where you will have a PhD in your pocket, and in a further 18 months it will be a university professor.
“What we need to do is ensure that people are not using this technology simply to deliver more profit and power, but we are seeing this already.
“What we will be able to do with AI is free people up from repetitive tasks, which will allow them to do something more interesting.”
He continued: “What it will be like in 10 years’ time we simply do not know. However, as AI becomes more sophisticated, it is likely it will be able to undertake far more of the tasks and jobs that are currently done by human hands.
“There is the concern that AI will start to take people’s jobs at a pace at which we cannot retrain people for new roles quickly enough, and that new roles will not be created, which leads to a real risk of social unrest.”
“The trajectory of AI will not slow down. We are set to see it progress by orders of magnitude. You will need to look at how it will integrate into your business.”
“What is clear is that with AI you need to give it a purpose. If it does not have a purpose then it will not work.”
AI became the dominant theme at the event, with Airmic CEO Julia Graham saying that the year ahead will be defined by AI: its use, its benefits and its risks.
“Who would have thought three weeks ago we would be able to download a Trump Risk Index, which calculates the political risks for countries across the world under a Trump administration,” she said. “We also meet at the start of the Chinese New Year – the year of the snake.
“Many in Asia say it is quite apt that this is the year of the snake, as the snake is adaptable, agile and changes its skin.
“For me, the year of the snake will be the year of AI, and we will need to be adaptable, agile and have the ability to change our skin if we are to meet the challenges. We are likely to need to re-evaluate what we are doing and how we do it in the face of AI.”
Speaking on a panel, David Pryce, senior partner at Fenchurch Law, said that for a company full of knowledge workers, the impact of AI will be significant.
“The risks I am thinking about are those which will impact everyone in this room,” he added. “As the chair of the board when it comes to risk, I am thinking about strategy and how this risk will affect the business.
“I believe we are at a moment of fundamental change, and I agree with Julia that 2025 will be the year of AI… We will see AI going from tools to the arrival of synthetic sentient beings this year and that will affect us all as knowledge workers.
“As knowledge workers we have three different areas: intelligence, knowledge and experience. In terms of intelligence and knowledge, AI has beaten us as humans already. Experience is the last piece we still have.
“What we need to do with AI is split it between intelligence and knowledge and then to use our experience to supplement and capture those streams. AI cannot access what is in our heads and what is stored on our private company systems.”
Hulme said that large language models (LLMs) were now at the forefront of the AI revolution.
“The key word at present is adaptive. It is the capability for AI to learn from its mistakes, firm up its calculations and make the outcome better.
“Reasoning is the next battleground for linguistic models. We have a long way to go before AI can do what humans can do. LLMs are not good at making decisions, and at present we often make the mistake of applying new technology to solve the wrong problems.”