Let’s start with an uncomfortable truth: your employees are already using AI. ChatGPT, Copilot, Gemini — someone on your team has already pasted company data into one of these tools.
Maybe it was a quick email draft. Maybe it was a financial summary. Maybe it was a client’s personal information. The question isn’t whether AI is part of your organization — it’s whether you’re managing the AI risks to information security that come with it.
And if you’re a company in Bosnia and Herzegovina working with EU partners — in Croatia, Slovenia, or Germany — this isn’t just an internal concern anymore. It’s quickly becoming a business requirement.
The AI Risks That Should Be on Your Radar
When we talk about AI risks in the context of information security, we’re not talking about sci-fi scenarios. We’re talking about practical, everyday problems that are already happening in organizations across the region.
Data leakage through AI tools. When an employee pastes sensitive data into a public AI chatbot, that information may be used to train the model — meaning it could surface in someone else’s query. Confidential business strategies, personal data, proprietary code — all potentially exposed with a single copy-paste.
Shadow AI. This is the AI equivalent of shadow IT. Employees adopt AI tools on their own, without approval or oversight. No one tracks which tools are being used, what data flows through them, or whether they meet any security standards. You can’t manage a risk you don’t know about.
“Hallucinated” outputs in decision-making. AI models confidently generate incorrect information. If your team relies on AI-generated reports, analyses, or recommendations without verification, flawed data could make it into business decisions, contracts, or client deliverables.
Supply chain and partner pressure. Your EU clients and partners are subject to the EU AI Act, which is becoming enforceable in stages through 2026 and 2027. Even though Bosnia and Herzegovina isn’t in the EU, if you’re part of their supply chain, their compliance obligations become your problem. They will ask how you manage AI risks — and they’ll want documented answers.
The Regulatory Clock Is Ticking
Three regulatory forces are converging right now, and they all point in the same direction for companies in Bosnia and Herzegovina:
The EU AI Act entered into force in August 2024. Prohibited AI practices are already banned. Rules for high-risk AI systems become enforceable from August 2026, with some categories following in 2027. If you supply products or services to the EU market, this affects you directly.
The new B&H Personal Data Protection Law (Zakon o zaštiti ličnih podataka) came into force in October 2025. AI tools that process personal data — and most of them do — fall squarely within its scope. Do you know where your employees’ AI-processed data ends up?
The FBiH Draft Law on Information Security (Nacrt zakona o informacionoj sigurnosti FBiH) has been sent to parliamentary procedure, partially aligned with the EU’s NIS2 Directive. It signals a clear direction: Bosnia and Herzegovina is building its cybersecurity regulatory framework, and organizations need to be ready.
Add to this the fact that B&H is an EU candidate country, and the picture is clear: alignment with EU standards isn’t optional — it’s the trajectory. The companies that prepare now will have a competitive advantage. Those that wait will scramble.
If You Already Have ISO 27001: You’re Closer Than You Think
Here’s the good news. If your organization already operates an Information Security Management System (ISMS) based on ISO 27001, you have a solid foundation for managing AI risks. Many of the controls you’ve already implemented are directly relevant.
The table below shows how your existing ISO 27001 controls map to common AI risks — and where the gaps are:
| AI Risk | What ISO 27001 Already Covers | What You Still Need |
|---|---|---|
| Data leakage via AI tools | A.8.10 Information deletion, A.8.12 Data leakage prevention | Specific acceptable-use policy for AI tools; classification of data that may/may not be entered into AI systems |
| Shadow AI (unapproved tools) | A.5.9 Inventory of assets, A.8.20 Network security | AI-specific asset inventory; monitoring for unsanctioned AI tool usage |
| AI output errors in decisions | A.5.1 Information security policies (governance framework) | Human oversight requirements for AI-assisted decisions; validation procedures for AI-generated content |
| Third-party AI services | A.5.19–A.5.22 Supplier relationships and security | AI-specific clauses in vendor contracts; due diligence on AI model providers’ data handling |
| Bias and ethical concerns | Limited direct coverage | AI impact assessments; bias detection and monitoring; ethical AI governance — this is where integrating management systems and ISO 42001 come in |
The takeaway? Your ISMS gives you the structure. You just need to extend it to cover AI-specific risks. That means updating your risk assessment to include AI scenarios, adding AI tools to your information asset inventory, creating an acceptable-use policy for AI, and reviewing your supplier agreements for AI-related clauses.
For organizations looking to go further, ISO/IEC 42001 — the world’s first AI management system standard, published in December 2023 — provides a dedicated framework for AI governance. It’s built on the same management system structure as ISO 27001, which means integration is straightforward. Think of it as the natural next chapter for your ISMS in the age of AI.
If You Don’t Have an ISMS Yet: AI Just Made the Case for You
Maybe you’ve been considering ISO 27001 for a while. Maybe a client mentioned it. Maybe it’s been on the “we’ll get to it eventually” list. Here’s the thing: AI just moved it to the top of that list.
Without a structured management framework, AI risks are almost impossible to manage effectively. You end up with ad hoc rules that nobody follows, no visibility into which tools people are using, no documented process for assessing new risks, and no way to demonstrate to clients or partners that you take information security seriously.
An ISMS based on ISO 27001 gives you exactly what you need: a systematic, repeatable approach to identifying and managing risks — including the new ones that AI introduces. It’s not about paperwork for its own sake. It’s about having a clear picture of your risks and a plan for dealing with them.
And here’s the practical reality: if you’re working with EU-based companies — especially in regulated industries like finance, healthcare, or critical infrastructure — the question of “do you have ISO 27001?” is increasingly becoming a prerequisite, not a nice-to-have. The NIS2 Directive and the EU AI Act are raising the bar across entire supply chains.
The good news is that building an ISMS doesn’t have to be overwhelming. It starts with understanding where you are now, identifying your most critical risks, and building a strong foundation step by step. And when you design it from the start with AI risks in mind, you’re future-proofing your investment.
What to Do This Week
Regardless of where you stand today, here are three things you can do right now:
1. Find out what AI tools your people are actually using. Send a simple survey or talk to department heads. You’ll almost certainly be surprised. This is your shadow AI discovery exercise — and it’s the first step in any gap analysis.
2. Classify your data for AI exposure. Decide which categories of information should never be entered into AI tools (personal data, client confidential, financial data, intellectual property) and communicate that clearly to your team.
3. Put it on the management agenda. AI governance isn’t an IT problem — it’s a business risk. It needs management support and a cross-functional approach. The sooner leadership is involved, the more effective your response will be.
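Step 2 — deciding which data categories may be entered into AI tools — often works best as an explicit allow/deny list with a fail-closed default: anything not expressly permitted is blocked. A minimal sketch in Python, purely illustrative (the category names are placeholders, not a prescribed taxonomy):

```python
# Illustrative data-classification check for AI tool usage.
# Category names are hypothetical; adapt them to your own classification scheme.

# Categories that must never be entered into public AI tools
BLOCKED_CATEGORIES = {
    "personal_data",
    "client_confidential",
    "financial_data",
    "intellectual_property",
}

# Categories generally acceptable for approved AI tools
ALLOWED_CATEGORIES = {
    "public",
    "internal_general",
}

def ai_use_permitted(category: str) -> bool:
    """Return True if data in this category may be pasted into an AI tool.

    Unknown categories are denied by default (fail closed) — the safest
    stance when your inventory of data types is still incomplete.
    """
    if category in BLOCKED_CATEGORIES:
        return False
    return category in ALLOWED_CATEGORIES

print(ai_use_permitted("public"))         # True
print(ai_use_permitted("personal_data"))  # False
print(ai_use_permitted("unclassified"))   # False — denied by default
```

The point isn’t the code itself — it’s the policy shape: an explicit allow list, an explicit block list, and a default of “no” for everything you haven’t classified yet. That same logic belongs in your written acceptable-use policy, whether or not it is ever automated.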
The Bottom Line
AI is already transforming how organizations in Bosnia and Herzegovina operate — and the risks it introduces are real. But they’re also manageable. Whether you already have an ISMS or are starting from scratch, the path forward is the same: understand your risks, build (or extend) a structured framework to manage them, and stay ahead of the regulatory curve.
The companies in the region that act now — rather than waiting for regulations to force their hand — will be the ones that keep their EU partnerships strong, win new business, and avoid costly surprises down the road.
Not sure where to start? Get in touch with us for a free 30-minute consultation. We’ll help you figure out where you stand with AI risks and what your next practical step should be — no jargon, no pressure, just clarity.