Let’s be honest: most companies using AI right now are doing so in a way that would make a compliance officer sweat.
Not because they’re doing anything wrong. But because the EU AI Act — which has been rolling out in phases since early 2025 — introduces a level of accountability for AI systems that most organisations simply haven’t had to think about before. And the biggest deadline is now five months away.
If you’re running AI in a regulated industry, or using it in ways that touch hiring, credit, healthcare, or customer decisions, this is worth your full attention.
So what actually is the EU AI Act?
Think of it as GDPR, but for artificial intelligence. It’s the first law of its kind anywhere in the world — a single framework governing how AI can be built, sold, and used across all 27 EU member states. It doesn’t matter if your company is based in Vienna, London, or San Francisco. If you’re serving EU customers or running AI within the EU, it applies to you.
The law works on a risk pyramid. At the top are AI applications that are simply banned — things like social scoring systems, real-time facial recognition in public spaces (with narrow law-enforcement exceptions), and AI that manipulates people without their knowledge. Those have been illegal since February 2025.
Below that sits the category most businesses need to worry about: high-risk AI. This covers systems used in hiring, credit scoring, medical decisions, education, law enforcement, and critical infrastructure. If your AI touches any of these areas, a significant compliance deadline is coming on August 2, 2026.
Miss it, and the fines go up to €35 million — or 7% of global annual turnover, whichever is higher.
What does “high-risk compliance” actually require?
This is where things get practical — and where a lot of companies are going to run into trouble.
The Act doesn’t just ask you to tick a box and carry on. It requires you to demonstrate, in documented and verifiable ways, that your AI systems are operating under proper oversight. Specifically, you’ll need:
- A complete inventory of every AI system you use, with risk classifications
- Technical documentation showing how those systems work
- Audit logs of AI decisions and outputs
- Evidence of meaningful human oversight mechanisms
- Data protection impact assessments
- Conformity assessments and EU database registration for high-risk systems
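To make the first three items concrete, here is a minimal sketch of what a system inventory entry and an append-only decision log might look like in practice. This is illustrative only — the field names, risk labels, and log format are our assumptions, not a schema prescribed by the Act — but it shows the shape of the evidence a regulator could ask for.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical inventory entry: one record per AI system, classified
# under the Act's risk pyramid. Field names are illustrative.
inventory_entry = {
    "system_id": "cv-screening-v2",
    "purpose": "ranking job applications",
    "risk_class": "high",          # hiring is a high-risk use case
    "provider": "internal",
    "deployment": "on-premise",
}

def log_decision(system_id: str, prompt: str, output: str,
                 reviewer: str) -> dict:
    """Append one audit record for a single AI decision.

    Hashing the prompt and output keeps the log verifiable without
    copying personal data into the log file itself.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_reviewer": reviewer,  # evidence of human oversight
    }
    with open("ai_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

record = log_decision("cv-screening-v2", "Rank these candidates...",
                      "Candidate A, then Candidate B.", reviewer="j.doe")
```

The point is not the specific format: it's that every decision is traceable to a system, a time, and a named human — and that the records live on infrastructure you can actually produce in an audit.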
Here’s the catch: if your AI runs on someone else’s cloud infrastructure — if your prompts and data are being sent to an external LLM provider — producing all of this documentation is genuinely hard. You’re dependent on your vendor’s transparency, their audit processes, their compliance timeline. You’re hoping their paperwork holds up under scrutiny.
That’s a significant amount of trust to place in someone else’s infrastructure.
Why on-premise AI changes this equation
This is something we’ve thought a lot about at Xinity, because it’s central to why we built what we built.
When AI runs on infrastructure you own — in your data centre, on your hardware, under your control — compliance stops being a vendor negotiation and becomes an operational reality. Your audit logs are yours. Your data never leaves your environment, which means there’s no ambiguity about GDPR, no cross-border transfer headaches, no sub-processor agreements to untangle. When a regulator asks you to demonstrate oversight, you can actually show it.
It’s not just about compliance, though. It’s about being an organisation that can genuinely answer the question: do we know what our AI is doing, and can we prove it?
Right now, a surprising number of companies cannot answer that honestly.
A word on where this is all heading
The EU AI Act is the floor, not the ceiling. National governments are already building on top of it — Italy passed its own AI law in October 2025. More will follow. And as AI capabilities grow, the regulatory scope will almost certainly expand.
The organisations that are going to handle this well aren’t the ones scrambling to retrofit compliance onto systems they don’t fully control. They’re the ones that built on a foundation of transparency and data sovereignty from the beginning.
That’s what a compute-independent Europe looks like in practice.
Where Xinity fits in
We help regulated industries run AI on their own infrastructure — not because cloud AI is bad, but because for organisations that take compliance and data sovereignty seriously, owning your stack is increasingly the only way to be genuinely confident in your AI posture.
If you’re thinking through how the EU AI Act affects your organisation, we’re happy to talk through it — no pitch, just a conversation.
This article is for informational purposes only and does not constitute legal advice. For specific EU AI Act compliance guidance, speak with a qualified legal professional.
YOUR AI. YOUR SERVERS.
Ready to Run any AI on Your Own Terms?
No commitment. 30 minutes. We'll show you exactly what deployment looks like for your company.
Am Gestade 5/2
1010 Vienna, Austria
© 2026 Xinity
