The rapid ascent of DeepSeek underscores the need for risk managers to reassess their strategies for managing AI-related risks.

In January, the Chinese-developed AI model DeepSeek released its latest version and created waves across the globe.

It quickly rose to the top of the Apple App Store’s download charts, catching the attention of corporate employees and tech investors alike.


The latest version of DeepSeek has impressed AI specialists and sparked a wider conversation about its potential to disrupt the AI market.

US President Donald Trump even weighed in, calling it a “wake-up call” for American companies to step up their game in the race for AI dominance.


What sets DeepSeek apart is its ability to offer a competitive AI solution at a fraction of the cost of industry leaders like OpenAI, thanks to its reliance on fewer advanced chips.

This innovation has sent shockwaves through the tech world, with chipmaker Nvidia losing nearly $600 billion in market value, the largest single-day loss of market value by any company in US stock market history.

For risk managers, DeepSeek’s rapid rise highlights the need to assess emerging technology risks, particularly around data security and market volatility.

What is DeepSeek?

DeepSeek, an AI chatbot developed by a Hangzhou-based company, is disrupting the global AI market with a cost-effective alternative to Western giants like OpenAI and Google.

While DeepSeek’s rise is a significant moment in the tech industry, it also highlights potential security challenges associated with the increasing adoption of AI-driven tools. A recent cyberattack on DeepSeek has brought these vulnerabilities to the forefront, signalling the growing need for stronger cybersecurity protocols in the AI sector.

Why is its launch significant?

DeepSeek’s launch marks a shift in the AI landscape, particularly as it challenges the dominance of established Western tech companies. The platform offers a more affordable solution for businesses and consumers, making advanced AI accessible to a wider audience.

As AI adoption accelerates, DeepSeek’s success demonstrates how non-Western companies are increasingly making their mark in global markets.

However, this shift also brings attention to the security risks associated with AI technology.

The recent distributed denial-of-service (DDoS) attack on DeepSeek revealed a significant weakness in its cybersecurity infrastructure, raising concerns about the safety of personal data and the integrity of the platform.

What risks does it bring?

The DeepSeek incident highlights several key risks tied to AI platforms.

One of the most critical is data privacy. Many AI services, including DeepSeek, require users to input personal information, making them prime targets for cyberattacks.


Data breaches could expose sensitive details, putting both individuals and businesses at risk.

Another major concern is the potential for AI model manipulation. Cybercriminals could exploit vulnerabilities in the platform to generate harmful content, including malicious software or instructions for creating ransomware and toxins.

The use of AI in phishing and social engineering schemes is on the rise, as AI-generated content can be used to craft convincing fake communications to deceive users into sharing confidential information.

What should risk managers do?

The recent cyberattack on DeepSeek serves as a critical reminder for organisations to reassess their AI security strategies.

As AI platforms gain widespread adoption, the risks associated with them grow in complexity and scope.

Around one in 12 UK adults, roughly 8 per cent, have used generative AI for work.

For risk managers, the immediate priority should be conducting thorough risk assessments that evaluate both the technical security measures of AI vendors and their data handling practices.

Data governance is paramount in mitigating these risks. Establishing robust policies for the secure handling and sharing of sensitive information with AI platforms is crucial, ensuring that data is encrypted and protected across all stages of interaction.
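
As a purely illustrative sketch of what “encrypted and protected across all stages of interaction” can look like in practice, the snippet below uses the widely available Python cryptography library to encrypt sensitive fields in a record before it is logged or shared alongside an AI interaction. The field names and the example audit entry are hypothetical placeholders, not part of any real DeepSeek or vendor interface.

```python
# Illustrative sketch only: encrypt sensitive fields before they are stored or
# shared alongside AI interactions. Assumes the third-party "cryptography"
# package (pip install cryptography); field names are hypothetical.
from cryptography.fernet import Fernet

# In practice the key would come from a managed secrets store,
# not be generated inline like this.
key = Fernet.generate_key()
cipher = Fernet(key)

def protect_record(record: dict, sensitive_fields: set[str]) -> dict:
    """Return a copy of the record with sensitive fields encrypted."""
    protected = {}
    for field, value in record.items():
        if field in sensitive_fields:
            protected[field] = cipher.encrypt(str(value).encode()).decode()
        else:
            protected[field] = value
    return protected

# Hypothetical example: an audit log entry for a prompt sent to an AI chatbot.
entry = {
    "user": "j.smith",
    "prompt": "Summarise the attached supplier contract",
    "customer_reference": "ACME-2024-118",
}
safe_entry = protect_record(entry, sensitive_fields={"prompt", "customer_reference"})
print(safe_entry)
```

The design point is simply that sensitive material is transformed before it leaves a controlled environment; the specific tooling, key management and field classification would be dictated by an organisation’s own data governance policy.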

Additionally, given the rise in AI-driven phishing and social engineering threats, risk managers must prioritise employee training programs.


Staff should be equipped with the knowledge to identify risks, particularly those tied to AI applications, and understand best practices for securing personal and organisational data. Put simply, employees will be using tools like DeepSeek for work, and risk managers need to be across that.

Furthermore, organisations must have tailored incident response plans in place to swiftly address breaches involving AI platforms. These plans should include protocols specifically designed for AI-related incidents, allowing for a rapid, coordinated response to minimise impact.

Michael Wooldridge, a professor of the foundations of AI at the University of Oxford, told The Guardian that it was not unreasonable to assume data inputted into the chatbot could be shared with China.

“I think it’s fine to download it and ask it about the performance of Liverpool football club or chat about the history of the Roman empire, but would I recommend putting anything sensitive or personal or private on them? Absolutely not … Because you don’t know where the data goes,” Wooldridge said.

Given the sensitive nature of data exchanged with AI platforms, encouraging employees to exercise restraint when interacting with AI systems — particularly when inputting confidential or proprietary information — is an essential component of a comprehensive security posture.
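
To make that restraint concrete, one lightweight control is to screen prompts for obvious markers of sensitive content before they ever leave the organisation. The sketch below is a minimal, hypothetical example using only Python’s standard library; the patterns and the blocking behaviour are assumptions to be tailored to an organisation’s own data classification rules, not a description of any vendor feature.

```python
# Minimal, hypothetical prompt-screening sketch: flag prompts that appear to
# contain sensitive data before they are sent to an external AI tool.
# Patterns are illustrative only and would need tuning to real policies.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential marking": re.compile(r"\b(confidential|internal only|proprietary)\b", re.I),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Draft a reply to jane.doe@example.com about our confidential pricing model"
findings = screen_prompt(prompt)
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
else:
    print("Prompt cleared for external use")
```

A filter like this is no substitute for training or vendor due diligence, but it gives risk managers a simple, auditable checkpoint between employees and external AI services.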