DeepSeek and the Sovereignty Risks of AI
Exploring the implications of AI sovereignty, the risks of foreign AI models, and strategic considerations for CIOs and CTOs

DeepSeek AI has stunned markets by developing a competitive model with far fewer resources than its U.S. counterparts, raising eyebrows among investors and governments alike.
Its R1 model is reasoning-centric, cost-efficient, and open source, making it an attractive option for enterprises looking to integrate AI into their operations.
However, it also introduces a new and often-overlooked risk: the sovereignty of AI models.
For CIOs and CTOs, this should be a familiar concern, and it is an urgent one.
Sovereignty concerns already shape decisions in cloud computing and data governance, yet AI remains a grey area.
While businesses rigorously assess the security and compliance of their cloud vendors, few are looking under the hood of their AI-powered SaaS and software tools to examine which models are being used, where they come from, and what risks they carry.
The question is no longer just about AI adoption, but whether organisations are choosing models that align with their risk and compliance frameworks.
In today’s interconnected yet geopolitically tense world, the provenance of technology holds strategic importance.
Governments are increasingly scrutinising the origins of AI models, questioning whether foreign-developed AI could pose national security risks.
At the same time, the concept of AI nationalism is emerging, where AI is viewed as a critical strategic asset, much like oil or semiconductors.
This shift reflects a broader trend in which nations seek to assert control over their technological ecosystems, raising new concerns for enterprises integrating AI into their operations.
A Growing Concern for CIOs and CISOs
According to a 2024 ADAPT survey of 125 Cloud and Infrastructure leaders, 60% of organisations looking to use or pilot LLMs plan to primarily consume off-the-shelf models through SaaS providers.
Some of this usage includes integrating APIs into existing workflows and fine-tuning models on internal knowledge bases.
Yet, while these solutions may be convenient, they also introduce risks related to the unknown assumptions embedded within these AI models and their potential future behaviours.
DeepSeek AI, as a Chinese-developed model, exemplifies both the promise and the challenge.
While open-source models can be deployed in private clouds without direct vendor dependencies, the real concern lies in understanding how these models were developed, what biases or assumptions they contain, and how they might behave when granted 'agency': the ability to make autonomous decisions or interact dynamically as AI agents.
The days of treating AI as a neutral, borderless technology are over.
Companies are now expected to scrutinise their tech stack not just for performance and cost-effectiveness, but for the underlying logic and intent driving these systems.
ADAPT’s Security Edge Surveys from April and October 2024 reveal that among 140 leading Australian CISOs, data security risks and third-party risks rank as the second and third highest security threats this year—well ahead of state-sponsored threat actors, which rank ninth.
This highlights the growing importance of understanding where AI models originate, how they handle data, and whether they align with organisational security requirements.
The consequences of ignoring this debate are already playing out.
Governments are taking a harder line on foreign technology providers, particularly in critical infrastructure and financial services.
Some organisations are adopting internal policies to screen for AI model origins, ensuring their stack aligns with compliance and risk management frameworks.
But this raises a difficult question: if the most cost-effective and innovative AI comes from jurisdictions with regulatory or political concerns, does rejecting these models put businesses at a disadvantage?
DeepSeek AI’s promise of accessible, reasoning-driven technology is undeniable.
Models like the R1 hold transformative potential, from optimising supply chains to enhancing customer engagement.
But they also challenge CIOs and CTOs to adopt a more strategic lens, weighing risks alongside benefits.
In the same ADAPT survey, only 3% of CISOs rated their organisations as mature in protecting intellectual property, and just 2% reported maturity in controlling data leakage.
This underscores the reality that AI-related risks are already under-addressed—even before factoring in the challenges of AI sovereignty.
Cloud Sovereignty as a Precedent for AI Sovereignty
The debate over AI sovereignty is not unfolding in a vacuum.
A year earlier, in 2023, 60% of Cloud and Infrastructure leaders stated that sovereignty was important when choosing cloud vendors.
Across both 2023 and 2024, security features remained the most important criterion for evaluating infrastructure vendors.
These insights suggest that CIOs and CTOs already recognise the importance of sovereignty in cloud computing.
The next logical step is to extend this thinking to AI models.
If sovereignty and security concerns influence decisions around cloud providers, why shouldn’t the same level of scrutiny be applied to AI solutions?
So, how can CIOs and CTOs ensure they are looking under the hood of their AI-powered SaaS and software products?
The first step is transparency—demanding that vendors disclose which models power their solutions and where they were developed.
Organisations should also conduct AI audits, assessing sovereignty risks alongside traditional cybersecurity factors.
Finally, firms must build internal policies that align AI procurement with broader governance, security, and compliance standards.
While DeepSeek offers a robust, low-cost tool, the need for organisations to be secure and trusted may well outweigh any cost-saving advantages.
The AI landscape is shifting, and as organisations move deeper into an era of AI nationalism, they must weigh the long-term implications of their choices.
AI is no longer just a tool—it’s a decision about the future of your business.
The question is no longer whether your organisation should adopt AI, but which AI you are willing to trust.