Currently, 47% of organisations that use AI have no specific AI cyber security practices or processes in place. As the landscape grows more complicated, organisations are looking to regulators to pave the way forward, writes Jon Guy
Risk managers are increasingly concerned about the rising use of artificial intelligence and believe governments worldwide, including the UK's, need to come together to create a regulatory system that will mitigate many of the risks they face.
The concerns come as the consultation period for the UK Department for Science, Innovation and Technology's (DSIT) draft Code of Practice on AI cyber security draws to a close, and just a fortnight after the European Union's AI Act became law across the single market.
Given how rapidly AI has evolved and how widely it has been embedded, DSIT said it had developed the code based on the National Cyber Security Centre's (NCSC) guidelines for secure AI system development, to ensure cyber security underpins AI safety.
Governments worldwide are concerned with the threat posed by AI-driven disinformation during elections, and as such, many countries are looking to bring in new laws in an effort to control its use.
When launching the code, Viscount Camrose, former UK Minister for AI and Intellectual Property, explained: “Artificial Intelligence (AI) is a vital technology for the UK economy and for supporting people’s everyday lives. [It] is enabling organisations to provide better services to customers and offer people quicker access to information.
However, he continued: “As adoption continues to grow across society, we must ensure that end-users are protected from cyber security risks. This is essential when so many other AI risks stem from an insecure system.
“Organisations in the UK face a complex cyber security landscape and we want to ensure that they have the confidence to adopt AI into their infrastructure. Currently, 47% of organisations who use AI do not have any specific AI cyber security practices or processes in place. It is therefore imperative that we ensure that AI is designed, developed, deployed and maintained securely.”
Looking to Europe
In Europe, the new laws have been praised. The new AI Act essentially regulates what artificial intelligence can and cannot do in the EU.
The Commission said the European AI Act aims to protect future users of a system from the possibility that an AI could treat them in a discriminatory, harmful or unjust manner. If an AI does not intrude in sensitive areas, it is not subject to the extensive regulations that apply to high-risk systems.
“If AI software is created with the aim of screening job applications and potentially filtering out applicants before a human HR professional is involved, then the developers of that software will be subject to the provisions of the AI Act as soon as the program is marketed or becomes operational,” explained Holger Hermanns, professor of computer science at Saarland University.
“However, an AI that simulates the reactions of opponents in a computer game can still be developed and marketed without the app developers having to worry about the AI Act.”
He added: “The AI Act shows that politicians have understood that AI can potentially pose a danger, especially when it impacts sensitive or health-related areas.”
“I see little risk of Europe being left behind by international developments as a result of the AI Act… The Act is an attempt to regulate AI in a reasonable and fair way, and we believe it has been successful.”
Growing concerns in the risk management community
Risk managers agree that the complex cyber security landscape is cause for concern.
Airmic has been consulting its members on the issue of AI and says the results highlight the concerns risk managers have about the emergence of AI across business and the wider community.
A poll of its membership found that 74% of respondents say the UK government’s draft Code of Practice on AI cyber security should be made compulsory because AI is already being used in misinformation and disinformation campaigns, presenting clear threats to democratic societies.
Hoe-Yeong Loke, head of research at Airmic, told Strategic Risk: “The results were surprising given our members have generally been of the view that too much regulation comes at the expense of innovation and business strategy. The results reflect the deep concerns they have of AI wreaking havoc for our democratic societies.”
Julia Graham, CEO of Airmic, added: “In principle, Airmic is supportive of the UK government’s efforts to align AI regulations and standards with international standards, given concerns that Airmic members have that a patchwork of competing and even conflicting regulations and standards on AI around the world could be developing.”
There is a clear need for governments to align regulation internationally, because multinational organisations require a degree of clarity and consistency. A patchwork of differing regulations will defeat the purpose of ensuring standards are maintained and will create new challenges and risks for business.
For risk managers, the message is that they need to approach AI risks as they would any other risk confronting their business and its operations. While it is clear that risk professionals want robust and compulsory regulations, these will take time to put in place; in the meantime, risk professionals need to identify the risks posed by AI and look at how they can manage and mitigate them now.