Sovereign AI
Is open source safe enough for regulated industries? A direct answer.
We heard this question once and it stuck with us. Not because it was a bad question, but because the assumptions underneath it were so widespread, and so wrong, that they deserved a complete and honest answer.
This is that answer.
If you work in financial services, healthcare, legal, or any other regulated sector, and someone in your organization has asked whether open source is safe enough to run mission-critical infrastructure, this post is written for you. We are going to make a direct case: open source software is not merely "safe enough" for regulated industries. In 2026, it is the only architecture that can meet what modern regulation actually requires.
The myth at the centre of the question
The objection to open source in regulated settings almost always rests on a single, intuitive belief: closed source must be more secure, because attackers can't see the code.
This is called security through obscurity. It is one of the oldest ideas in cryptography. And it was retired, with finality, more than 140 years ago.
In 1883, the Dutch cryptographer Auguste Kerckhoffs published a set of principles for military cryptography. The most enduring of them states that a system should remain secure even if everything about it, except the key, is public knowledge. The American mathematician Claude Shannon later restated the same idea more bluntly: "the enemy knows the system."
The reasoning is simple. If your security depends on attackers not knowing how your system works, you are betting that no one will ever leak it, reverse engineer it, or find a weakness that you missed. That bet has lost, in public, with consequences, every single time it has been made at scale. From DVD encryption to automotive ECUs to proprietary network protocols, the history of computing is a history of obscurity failing the moment a competent attacker decided to look.
This matters for regulated industries because the real choice is not between obscurity and "openness with unique risks." The choice is between obscurity and demonstrable assurance. And demonstrable assurance is something only open source can provide.
What "closed source" actually means in practice
When a regulated organization buys closed source software, here is what it is actually buying:
It is buying the vendor's claim that the software is secure. It is buying the vendor's claim that there are no backdoors. It is buying the vendor's claim that vulnerabilities are disclosed promptly. It is buying the vendor's claim that the software does what it says, and nothing else.
The buyer's auditor cannot verify any of these claims independently. The buyer's CISO cannot read the code to confirm that the security model holds. The buyer's regulator can request a SOC 2 report, but a SOC 2 report is itself a verification of the vendor's processes, not of the software's behaviour. At the end of the chain, the regulated entity is signing its name to a stack of attestations that it cannot, by design, verify.
This is not a minor procedural problem. Under the Digital Operational Resilience Act (DORA), which entered enforcement across the European Union in January 2025, financial entities remain fully responsible for compliance with all obligations under the regulation, even when ICT services are provided by third parties. The regulation explicitly requires firms to maintain a holistic understanding of their ICT vendor dependencies, including documented exit strategies for critical services and the ability to switch providers if necessary.
If your critical AI infrastructure is closed source, your exit strategy is not under your control. Your understanding of the system is not architectural; it is contractual. And when a regulator asks you to demonstrate that the system behaves as expected, your answer is: the vendor told us so.
That is not assurance. That is delegation.
What open source actually changes
Open source software inverts every one of those points.
Your security team can read the codebase before deployment, instead of signing an NDA and accepting the vendor's claims. Your auditor can verify that the binary running in production was built from the source code that was audited, using reproducible builds. Your CISO can pin a specific commit hash and prove, on demand, exactly what is running in your infrastructure at any point in time.
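To make that concrete, here is a minimal sketch, in Python, of the kind of check an auditor or CISO could run against a production host. The file path and the pinned digest are illustrative placeholders, not real artifacts; in practice the digest would come from your own reproducible build of the audited, pinned commit.

```python
# Minimal sketch: confirm that the artifact running in production matches the
# digest recorded when the audited source was built reproducibly.
# The path and digest below are placeholders, not real product values.
import hashlib
from pathlib import Path

# Digest recorded at audit time (placeholder value for illustration).
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

deployed = Path("/opt/runtime/bin/inference-server")  # illustrative path
actual = sha256_of(deployed)
if actual != PINNED_SHA256:
    raise SystemExit(f"Deployed binary does not match the audited build: {actual}")
print("Deployed binary matches the audited, pinned build.")
```

The point is not the dozen lines of code. The point is that the comparison is possible at all, because the reference build is yours to reproduce.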
When a vulnerability is disclosed, you do not wait for the vendor to schedule a patch in their next release cycle. You can see the fix as it is being developed, evaluate it for your own context, and apply it on your own timeline.
If the vendor goes bankrupt, gets acquired, changes its pricing model, or decides to deprecate the product, none of that affects your right to keep running the software. The license under which you obtained it, whether Apache 2.0, MIT, or any other recognized open source license, cannot be revoked. The code is yours to keep, fork, modify, and run for as long as you need it.
These are not aspirational benefits. These are properties of the software itself, enforceable in court if necessary, and verifiable on demand. They are the kind of properties regulators have been asking for, in increasingly explicit language, for the last five years.
The patch speed question
One of the most common counter-arguments to open source is that "more eyes" does not actually mean better security. The argument goes: if the code is public, attackers can also read it, and they will find vulnerabilities first.
This sounds reasonable. The data does not support it.
Empirical research into vendor patching behaviour, including studies conducted at Carnegie Mellon, has consistently found two things. First, open source vendors patch vulnerabilities faster than closed source vendors on average. Second, vulnerabilities are far more likely to be patched after public disclosure than before, with one widely cited study putting the increase at 137 percent.
The reason is structural rather than philosophical. When a vulnerability is disclosed publicly in an open source project, the fix is developed in public, reviewed in public, and shipped in public. The disclosure itself creates pressure to patch quickly. In closed source projects, the same disclosure may exist on a private bug tracker for weeks or months before a patch ships, with no external visibility into the timeline.
This is not a hypothetical concern for regulated industries. DORA requires financial entities to implement an ICT-related incident management process that identifies, tracks, logs, and categorizes incidents. It requires testing of all critical tools and applications at least annually. It requires that weaknesses, deficiencies, and gaps are promptly identified, mitigated, and eliminated. Open source software, by giving you visibility into the actual state of patches and vulnerabilities, makes those obligations significantly easier to discharge.
It is also worth naming what regulated industries already run on. The infrastructure of European banking is built on Linux. The transactional databases of European insurers are built on PostgreSQL. The orchestration layer of European hospital networks is built on Kubernetes. Every one of these is open source. Every one of them has been deemed safe enough, by every relevant regulator, to handle data far more sensitive than the average AI workload. The argument that open source is not safe for regulated industries is, in 2026, an argument against the existing reality of those industries.
What modern regulation actually demands
Reading DORA, the EU AI Act, FINMA's circulars on outsourcing, and the various sector-specific guidance from BaFin, the FCA, and other European regulators in sequence, a pattern emerges. Modern regulation is no longer asking whether your systems are secure. It is asking whether you can demonstrate that they are secure, and whether you can continue to operate when something goes wrong.
The specific requirements vary by regulation, but the architectural principles converge:
Auditability. You must be able to show, to a regulator, exactly what your systems are doing. You must be able to reconstruct decisions after the fact. The EU AI Act, in particular, requires this for high-risk AI systems, with specific provisions on logging, traceability, and post-hoc explainability.
Vendor lock-in mitigation. You must have a documented exit strategy for critical ICT services. You must be able to switch providers if necessary. DORA Article 28 makes this explicit. FINMA's outsourcing requirements have made it explicit since 2018.
Continuity under stress. You must be able to continue operating under conditions of supplier failure, geopolitical disruption, or regulatory change in a third-country jurisdiction. Recent years have made it clear that extraterritorial sanctions and executive orders can cut off access to cloud-hosted services with little or no notice, affecting even high-profile institutional users. This is no longer a theoretical concern; it is a documented continuity risk.
Architectural transparency. You must understand, not just contractually but technically, how your systems handle data. Where it flows. Where it is processed. Who has access. The GDPR has required this since 2018. The EU AI Act extends it.
Every one of these requirements is harder to satisfy with closed source software, and easier to satisfy with open source. Auditability is improved when the code itself is auditable. Vendor lock-in is mitigated when the software can be self-hosted indefinitely under an irrevocable license. Continuity is preserved when no single party can switch off your access. Architectural transparency is automatic when the architecture is, by definition, transparent.
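To illustrate the auditability point, here is a minimal sketch, in Python, of the kind of append-only inference audit record that makes post-hoc reconstruction possible. The field names, file name, and version string are illustrative, not taken from the EU AI Act or from any particular product.

```python
# Minimal sketch of a traceable, append-only inference audit record:
# what ran, when, on which input, with which output.
# All names and values here are illustrative.
import hashlib
import json
import time

def audit_record(model_version: str, prompt: str, completion: str) -> dict:
    """Build one reconstructable log entry for a single inference."""
    return {
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,  # e.g. a pinned release tag or commit
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),    # hashes, so the
        "output_sha256": hashlib.sha256(completion.encode()).hexdigest(),  # log holds no raw data
    }

# Append-only JSON lines: each inference leaves one trace that can be replayed against
# the pinned model version if a regulator asks how a decision was reached.
with open("inference_audit.log", "a") as log:
    entry = audit_record("runtime-1.4.2", "example prompt", "example completion")
    log.write(json.dumps(entry) + "\n")
```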
The case for open source in regulated industries, stated plainly
Open source is safe for regulated industries because it is the only software model that allows regulated industries to prove the things their regulators are asking them to prove.
Closed source asks you to take the vendor's word. Open source lets you read the code. Closed source ties your continuity to a vendor's commercial decisions. Open source makes continuity a property of the license. Closed source makes incident response a vendor's timeline. Open source makes it your timeline. Closed source makes architectural review a contractual exercise. Open source makes it a code review.
These are not subjective preferences. They are the difference between contractual assurance and architectural assurance, and modern regulation increasingly requires the second.
Common objections, answered
Three objections come up often enough that they deserve direct responses.
"Open source means no support." This conflates licensing with commercial relationships. Many open source projects have professional commercial vendors offering enterprise support, SLAs, and dedicated security response. The license affects ownership of the code; it does not affect the existence of support contracts. If anything, open source typically gives you a wider choice of support vendors, because no single vendor controls the underlying technology.
"Open source means anyone can contribute malicious code." Mature open source projects use signed commits, multi-reviewer pull request approvals, reproducible builds, and software bills of materials (SBOMs) precisely to address this. The supply chain security of major open source projects in 2026 is, in many cases, more rigorous than that of comparable closed source products, because the entire pipeline is publicly inspectable.
"Our regulator won't approve it." In our experience, this is almost always a misreading of the regulator's actual position. Regulators care about risk management, demonstrable controls, and continuity. Open source software, properly governed, satisfies those requirements as well or better than closed source. What regulators object to is unmaintained software, unverified dependencies, and undocumented deployment. Those are problems of governance, not of licensing model.
What "properly governed" open source looks like
To pre-empt the natural follow-up question: deploying open source in a regulated environment is not free of obligations. It requires:
A documented inventory of open source components in use, with versions and licenses recorded
A process for monitoring upstream security advisories and applying patches
A maintained build pipeline with reproducible builds and signed artifacts
An SBOM (software bill of materials) for each deployed system
A documented exit and continuity strategy, including which dependencies are critical
A clear separation between the open source layer and any commercial support contracts in place
These are not exceptional requirements. They are, increasingly, the baseline requirements for any software in regulated industries, open source or not. The difference is that open source makes them achievable.
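As one illustration of the advisory-monitoring and inventory points above, here is a minimal sketch, in Python, that checks a pinned component inventory against the public OSV vulnerability database (osv.dev). The inventory entries are illustrative; a real deployment would derive them from its SBOM.

```python
# Minimal sketch: query the public OSV database for known vulnerabilities
# affecting each pinned component in an inventory.
# The inventory entries below are illustrative examples only.
import json
import urllib.request

inventory = [
    {"ecosystem": "PyPI", "name": "requests", "version": "2.31.0"},
    {"ecosystem": "PyPI", "name": "cryptography", "version": "42.0.0"},
]

for component in inventory:
    query = json.dumps({
        "version": component["version"],
        "package": {"name": component["name"], "ecosystem": component["ecosystem"]},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get("vulns", [])
    ids = ", ".join(v["id"] for v in vulns) or "none known"
    print(f'{component["name"]} {component["version"]}: {ids}')
```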
Where Xinity sits in this picture
This is not an abstract argument for us. We built Xinity Runtime, our AI inference engine, to be exactly the kind of open source infrastructure that regulated industries can deploy without having to take anyone's word for anything.
The runtime is licensed under Apache 2.0, which means it is yours to keep, fork, modify, and run on your own infrastructure indefinitely. The code is publicly available at github.com/xinity-ai/xinity-ai. Builds are reproducible. The architecture is documented. The data flow is auditable, by design, with no hidden network calls and no third-party telemetry.
We made these choices because the alternative was to ask regulated industries to trust us. And we did not think that was a fair thing to ask.
If you are a CISO, compliance officer, or architect evaluating AI infrastructure for a regulated environment, you can read every line of the code that would run in your data centre before you commit to anything. That is the standard we held ourselves to. We think it is the standard the rest of the industry should be held to as well.
In summary
The question "is open source safe enough for regulated industries?" has a direct answer.
Open source is not just safe enough. It is, in 2026, the only software architecture that can satisfy what modern regulation actually demands: auditability, continuity, transparency, and demonstrable control. The myth of closed source security through obscurity has been retired in cryptography for over a century, and it should be retired in enterprise procurement now.
The infrastructure of every regulated industry on earth, banks, hospitals, governments, already runs on open source operating systems, databases, and orchestration tools. The question is not whether open source belongs in regulated industries. It is why anyone still treats it as the exception rather than the foundation.
If you are working through this question for your own organization, we would be happy to talk. Not to sell you anything, but to share what we have learned working with regulated customers across the DACH region.
You can reach us at contact@xinity.ai, or read the source for yourself at github.com/xinity-ai/xinity-ai.
Xinity Runtime is the open source AI infrastructure layer for European enterprises. Apache 2.0, self-hostable, GDPR-compliant by architecture.
