Since 2023, the use of artificial intelligence in German companies has changed fundamentally. ChatGPT, Microsoft Copilot, Google Gemini, and numerous specialized AI tools have moved into business units—often faster than IT departments or data protection officers could respond.
What started as experimental use is now everyday practice: employees use generative AI for email drafts, presentations, research, and even the analysis of customer data. This is exactly where internal AI governance comes in. It creates the framework that enables companies to leverage AI’s potential without falling into legal, security-related, or reputational traps.
Why AI Governance Is Now Essential
AI adoption in German companies has accelerated dramatically since the launch of ChatGPT in late 2022. Microsoft 365 Copilot, Google Gemini, and specialized AI applications are now established in marketing, HR, sales, and customer service. But many organizations face the same core issue: technology is moving faster than internal rules.
Without clear governance, dangerous gaps appear. Employees independently use external AI tools (a phenomenon known as shadow IT). A sales employee uploads customer lists into a free chatbot to generate outreach emails. An HR manager uses a browser plugin to analyze application documents. In both cases, personal data may be transferred to third-party providers outside the EU—an obvious GDPR violation.
At the same time, the regulatory framework is tightening. The EU AI Regulation (EU AI Act) was formally adopted in 2024 and has been applying in stages since 2025. Together with the GDPR, it forms a binding rule set that forces companies to implement structured oversight. Violations of the AI Act can lead to fines of up to €35 million or 7% of global annual turnover.
The Biggest Risks of Uncontrolled AI Workflows
Many companies have already started experimenting with AI—now professionalization is required. Uncontrolled AI workflows create substantial risks that go far beyond technical issues. Risk categories range from data leakage and flawed decisions to legal exposure.
Data Leakage & Loss of Intellectual Property
The most immediate risk factor is the uncontrolled leakage of sensitive data. When employees enter source code, product roadmaps, customer data, or internal strategy documents into external AI tools, that information leaves the company’s protected environment.
The legal requirements are clear: Article 5 GDPR demands integrity and confidentiality of personal data. Articles 28 and 32 require appropriate technical and organizational measures as well as data processing agreements with service providers. Without such agreements, using external AI services with personal data is not permissible.
Germany’s Trade Secrets Act (GeschGehG) is also relevant: trade secrets can lose their protected status if adequate confidentiality measures are not in place. Uncontrolled input into public AI tools can undermine that protection.
Uncontrolled Model Outputs & Bad Decisions
Generative AI models produce plausible-sounding content regardless of whether it is correct. This becomes problematic when organizations adopt AI output into decision-making processes without validation.
Management carries duties of care and organizational responsibility (§ 93 AktG, § 43 GmbHG). Implementing automated decisions without appropriate controls can result in personal liability. Human-in-the-loop controls and the four-eyes principle are therefore not optional extras, but governance requirements.
Hallucinations & Lack of Traceability
“Hallucinations” describe the phenomenon where large language models (LLMs) present invented facts, sources, or quotes as real. The model “creates” convincing information that has no basis in reality. The “black-box nature” of many AI systems makes it difficult to explain decisions to regulators or affected individuals.
For high-risk systems, the EU AI Act explicitly requires traceability and logging. Internal governance must therefore establish processes for source verification, logging of inputs and outputs, and clear accountability for validating AI results.
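A minimal sketch of such input/output logging, assuming interactions are written as JSON lines to an append-only audit file (the function and field names are illustrative, not prescribed by the AI Act):

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_interaction(user_id: str, use_case: str, prompt: str, output: str,
                       model: str, log_path: str = "ai_audit.log") -> str:
    """Append one AI interaction as a JSON line for later audits."""
    record = {
        "id": str(uuid.uuid4()),              # unique reference for follow-up reviews
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,                   # who triggered the call
        "use_case": use_case,                 # entry from the AI register
        "model": model,                       # model name and version
        "prompt": prompt,
        "output": output,
        "validated_by": None,                 # filled in once a human has checked the result
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["id"]
```

The returned ID can be referenced in downstream documents, so that a validated decision can always be traced back to the exact prompt and output it was based on.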
Opaque Use in Business Units
Business units often act faster than central IT or compliance functions. Marketing purchases an image generator subscription independently. HR tests a CV screening tool. Sales uses a browser plugin for conversation analysis. In none of these cases are IT, data protection, or information security involved.
This practice causes multiple issues:
- The AI register remains incomplete
- License terms are not reviewed
- Training data and model behavior are unknown
Weak-Point Processes
“Weak-point processes” refer to individual process steps or interfaces where unsafe AI use is particularly likely or risky. They emerge where data leaves the company’s protected environment or where manual interventions are required.
Additional weak points include:
- Free-text input fields in internal applications where employees accidentally paste confidential information
- Chat interfaces without content filters that enable prompt-injection attacks
- API connections to external AI services without logging or access control
Internal AI governance must identify, classify, and control these weak points systematically. Data Loss Prevention (DLP) rules, technical blocks for certain data types, and targeted training are key measures.
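For illustration, a very simplified DLP-style check could scan text for obvious personal or financial data patterns before it is passed to an external AI service; the patterns below are illustrative placeholders, and real DLP products use far more sophisticated detection:

```python
import re

# Illustrative patterns only; production DLP tools use much broader detection logic.
PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "german_phone": re.compile(r"\+49[\s\d/-]{6,}"),
}

def find_sensitive_data(text: str) -> list[str]:
    """Return the names of all patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def check_before_external_ai(text: str) -> bool:
    """Block (return False) if the text appears to contain sensitive data."""
    findings = find_sensitive_data(text)
    if findings:
        print(f"Blocked: possible sensitive data detected ({', '.join(findings)})")
        return False
    return True
```

Such a check is no substitute for a full DLP solution, but it shows the principle: sensitive content is intercepted before it ever reaches an external provider, and blocked attempts can be logged for awareness training.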
Building Blocks of Internal AI Governance
AI governance is not a single document, but a combination of policies, roles, processes, technology, and culture. The following building blocks align with common frameworks: the GDPR, the EU AI Act, and standards such as ISO/IEC 42001:2023 for AI management systems.
AI Policies & Usage Rules
Since 2024/2025, every company using AI needs a binding internal AI policy. This “AI Use Policy” defines the rules of engagement for working with AI systems and should be aligned with data protection, IT security, and (where applicable) the works council.
Typical elements include:
- Allowed and prohibited tools: Which AI applications are approved? Which are explicitly forbidden?
- Handling personal data: No entry of customer data, employee data, or other personal information into external AI tools without approval from the data protection officer.
- Handling source code and IP: Developers must not enter proprietary code into public AI services.
- Handling confidential documents: M&A documents, strategy papers, and unpublished product information are excluded from external AI tools.
- Labeling obligations: AI-generated content must be labeled as such—especially in external communication.
Roles & Responsibilities
Clear responsibilities are the foundation of effective governance. Executive management carries overall responsibility, but operational tasks must be distributed.
Typical roles within an AI governance structure:
- AI Governance Board: Cross-functional body defining AI strategy, approving use cases, addressing ethical issues
- Chief Data Officer (CDO) or Chief AI Officer: Responsible for strategic steering, coordinating standards, and consolidating use cases
- Data Protection Officer (DPO): Reviews GDPR compliance, supports DPIAs
- Chief Information Security Officer (CISO): Assesses security risks, defines protective measures
- Business owner per use case: Responsible for domain quality, risk assessment, documentation for each AI application
In smaller companies, these roles can be adapted pragmatically. The DPO may also cover AI-related compliance tasks. What matters is not a perfect org chart, but clear accountability.
Classification of AI Use Cases
A systematic inventory and classification of all AI use cases is a prerequisite for effective governance. Without visibility into what is in use, neither risk management nor compliance is possible.
For each use case, at least the following criteria should be recorded:
- Business unit and accountable person
- Type of data processed (personal, highly sensitive, business-critical)
- Impact on people (informing, supporting, deciding)
- Degree of automation (human review planned vs fully automated)
- External dependencies (cloud services, API connections)
A service chatbot providing general information typically falls under limited AI risk. An HR screening tool that automatically pre-selects applicants may qualify as a high-risk system. A social scoring system for employees would be prohibited in the EU.
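A minimal sketch of how these criteria could be captured as a machine-readable register entry, assuming an illustrative schema (field names and risk levels are not mandated by any standard):

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AIUseCase:
    """One entry in the company-wide AI register."""
    name: str
    business_unit: str
    accountable_person: str
    data_categories: list[str]          # e.g. ["personal", "business-critical"]
    impact_on_people: str               # informing, supporting, or deciding
    fully_automated: bool               # False if a human review step is planned
    external_dependencies: list[str] = field(default_factory=list)
    risk_level: RiskLevel = RiskLevel.MINIMAL

# Example: an HR screening tool that pre-selects applicants
hr_screening = AIUseCase(
    name="CV pre-screening",
    business_unit="HR",
    accountable_person="Head of Recruiting",
    data_categories=["personal"],
    impact_on_people="deciding",
    fully_automated=False,
    external_dependencies=["cloud API"],
    risk_level=RiskLevel.HIGH,
)
```

Keeping the register in a structured form like this makes it straightforward to filter all high-risk use cases for a DPIA review or an audit.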
Data & Security Standards
Internal AI governance must be tightly connected with existing IT and data governance. Standards such as ISO 27001 or Germany’s BSI baseline protection provide strong guardrails that should be extended for AI-specific requirements.
Core elements include:
- Data Loss Prevention (DLP): Controls that prevent or log uploads of sensitive data to external AI services
- Access control (RBAC): Role-based permissions for AI platforms and model access
- Encryption: End-to-end encryption for data transfers; encryption at rest
- Logging: Recording inputs and outputs for traceability and audits
- Separation of test and production data: Development/training environments use anonymized or synthetic data
Many companies increasingly rely on internal AI platforms with self-hosted models or European cloud services to keep sensitive data within European legal jurisdiction. These architectural decisions should be part of the governance strategy.
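As a sketch of the role-based access control element listed above, assuming an invented internal permission model (role and permission names are illustrative):

```python
# Illustrative role-to-permission mapping for an internal AI platform.
ROLE_PERMISSIONS = {
    "developer": {"use_internal_model", "read_logs"},
    "data_scientist": {"use_internal_model", "fine_tune_model", "read_logs"},
    "business_user": {"use_internal_model"},
    "auditor": {"read_logs"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Check whether a role grants a specific permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A business user may query the internal model but not fine-tune it.
assert is_allowed("business_user", "use_internal_model")
assert not is_allowed("business_user", "fine_tune_model")
```

In practice, these permissions would be managed via the existing identity provider rather than in code; the point is that model access, fine-tuning, and log access are separated by role.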
Documentation & Traceability
For internal and external audits, traceable documentation is critical. Data protection authorities, internal audit functions, and, in regulated industries, supervisors such as BaFin expect robust evidence of AI usage.
Documentation should cover the following points; a simple structured example follows the list:
- Data sources and legal bases
- Training and testing concepts
- Model versions and change history
- Approval decisions with dates and accountable persons
- Risk analyses and mitigation measures
- Incidents and how they were handled
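For illustration, an approval decision and a model change could be documented as structured entries; the fields mirror the list above and are an assumed format, not a regulatory template:

```python
# Illustrative documentation entries for one use case from the AI register.
approval_record = {
    "use_case": "CV pre-screening",
    "decision": "approved for pilot",
    "decision_date": "2025-03-01",
    "approved_by": "AI Governance Board",
    "conditions": ["DPIA completed", "human review of every rejection"],
}

model_change_record = {
    "use_case": "CV pre-screening",
    "model_version": "2.1",
    "change": "retrained on anonymized 2024 applicant data",
    "risk_reassessment_required": True,
}
```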
Training & Awareness Across the Company
Without targeted training and awareness campaigns, AI policies remain “paper rules.” Employees must understand why rules exist and how to apply them in daily work.
Recommended formats:
- Company-wide baseline training: E-learning modules on safe prompting, AI and privacy, recognizing hallucinations, and correct labeling of AI content
- Deep-dive trainings for specialized roles: Developers learn secure integration and prompt-injection risks. Data scientists cover bias detection and fairness metrics. Leaders understand oversight duties. Compliance teams are trained on EU AI Act/GDPR intersections.
- Quick reference cards: One-pagers with key do’s and don’ts for desks or intranet
- Regular updates: Refresh trainings when EU AI Act milestones change or new AI tools are introduced
The EU AI Act: What Specifically Affects Companies Now
The EU AI Act was politically finalized in late 2023 and formally adopted in 2024. Its application is phased: prohibitions for unacceptable-risk systems apply from February 2025, and most obligations, including those for high-risk systems, apply from August 2026.
Its central principle is risk-based regulation:
| Risk Level | Regulatory Treatment | Examples |
|---|---|---|
| Unacceptable | Prohibited | Social scoring, manipulative AI |
| High | Strict obligations | Biometric identification, HR selection, credit scoring |
| Limited | Transparency duties | Chatbots, deepfakes |
| Minimal | No specific duties | Spam filters, game recommendations |
Obligations for Non-High-Risk AI
Many typical corporate applications fall into limited or minimal risk: internal assistants for drafting text, marketing content generation, informational chatbots, or meeting summarization tools.
For these systems, transparency requirements apply:
- Labeling: Users must be informed they are interacting with AI or that content is AI-generated (e.g., “This text was created with AI support”).
- Deepfake labeling: AI-generated content depicting real people must be marked as synthetic.
- Copyright and privacy: The use of copyrighted material as input and the handling of personal data must comply with applicable law.
Obligations for High-Risk AI
High-risk AI under the EU AI Act includes systems that significantly affect people—such as in employment, finance, healthcare, education, or critical infrastructure.
These systems face extensive obligations:
- Risk management system: Continuous identification and mitigation of risks across the lifecycle
- High-quality training data: Requirements on relevance, representativeness, and accuracy
- Technical documentation: Detailed system design, training methods, performance metrics
- Logging: Automated recording of system events for traceability
- Human oversight: Human-in-the-loop or human-on-the-loop concepts
- Robustness and security: Cybersecurity requirements and resilience against manipulation
- Conformity assessment: Self-assessment or review by notified bodies, depending on use case
GDPR & AI: How Companies Build Legally Robust Workflows
AI systems often touch personal data—or at least data that can potentially be linked to individuals. The GDPR therefore remains a core framework for AI deployment.
Article 22 GDPR is particularly relevant: decisions based solely on automated processing that produce legal or similarly significant effects on individuals are permitted only under narrow exceptions and require safeguards such as the right to obtain human intervention and to contest the decision.
Key success factors for legally robust workflows:
- A Data Protection Impact Assessment (DPIA) is required for certain AI use cases, especially automated decision-making, profiling, or processing special categories of data.
- Data processing agreements with AI providers must cover the specific processing activities. EDPB guidance clarifies that inputs into LLMs constitute “processing.”
- Retention and deletion concepts must define how long inputs and model outputs are stored and when they are deleted.
Common pitfalls include using US-based AI services without adequate safeguards for data transfers (Schrems II), or training on historical customer data without validating the legal basis.
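As a minimal sketch of a deletion routine for such a retention concept, assuming prompts and outputs are logged as JSON lines with a UTC timestamp (as in the logging example earlier); the retention period itself is a placeholder that each company must define:

```python
import json
from datetime import datetime, timedelta, timezone

def purge_old_entries(log_path: str, retention_days: int = 90) -> int:
    """Remove logged AI interactions older than the defined retention period."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    kept, removed = [], 0
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if datetime.fromisoformat(record["timestamp"]) >= cutoff:
                kept.append(line)
            else:
                removed += 1
    with open(log_path, "w", encoding="utf-8") as f:
        f.writelines(kept)
    return removed
```

The deletion routine should itself be documented and scheduled, so that retention promises made in the privacy notice are demonstrably enforced.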
How to Create a Secure, Controlled Process
From the initial idea for an AI use case to a released and monitored system, companies should follow a structured process:
Phase 1: Idea intake & preliminary review
Business units submit use case ideas via a standardized form. Initial screening clarifies: What problem is solved? Which data is involved? Are there comparable approved solutions?
Phase 2: Risk and legal assessment
The governance team assesses risk class and privacy requirements. If necessary, a DPIA is conducted. Outcome: approval for a proof of concept, required adjustments, or rejection.
Phase 3: Proof of concept
A limited test using controlled data and defined success criteria. Technical safeguards are implemented.
Phase 4: Pilot operation
Expansion to one department or user group. Feedback is collected systematically. Documentation is completed.
Phase 5: Rollout
Following approval by the AI Governance Board, the company-wide rollout starts—supported by communication and training.
Phase 6: Monitoring & review
Continuous monitoring using dashboards capturing usage metrics. Regular reviews (at least annually) reassess risks and compliance.
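For illustration, the six phases could be modeled as a simple lifecycle with allowed transitions, for example to support workflow tooling; this is one possible representation, not a prescribed process model:

```python
from enum import Enum

class Phase(Enum):
    IDEA = "idea intake"
    ASSESSMENT = "risk and legal assessment"
    POC = "proof of concept"
    PILOT = "pilot operation"
    ROLLOUT = "rollout"
    MONITORING = "monitoring and review"
    REJECTED = "rejected"

# Allowed transitions between phases; assessment, PoC, and pilot may also end in rejection.
TRANSITIONS = {
    Phase.IDEA: {Phase.ASSESSMENT},
    Phase.ASSESSMENT: {Phase.POC, Phase.REJECTED},
    Phase.POC: {Phase.PILOT, Phase.REJECTED},
    Phase.PILOT: {Phase.ROLLOUT, Phase.REJECTED},
    Phase.ROLLOUT: {Phase.MONITORING},
    Phase.MONITORING: {Phase.MONITORING, Phase.ASSESSMENT},  # a review can trigger reassessment
}

def advance(current: Phase, target: Phase) -> Phase:
    """Move a use case to the next phase only if the transition is allowed."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Transition {current.name} -> {target.name} is not allowed")
    return target
```

Modeling the process explicitly makes it harder for a use case to skip the risk assessment and also provides the status data that governance dashboards need.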
Typical AI Governance Mistakes—and How to Avoid Them
When establishing AI governance structures, certain recurring mistakes appear. The following patterns and countermeasures help avoid them:
Mistake 1: Focusing only on technology
Governance is treated as a purely technical topic.
Countermeasure: Treat governance as a combination of technology, organization, and culture. Create committees, define roles, build awareness.
Mistake 2: No AI register
No one knows which AI tools are in use. Shadow AI remains undetected.
Countermeasure: Introduce a central register, require reporting of new tools, conduct regular inventories.
Mistake 3: Over-regulation through blanket bans
All external AI tools are prohibited without providing alternatives.
Countermeasure: Provide enterprise instances of Copilot, Azure OpenAI, or similar services.
Mistake 4: No training
Policies exist, but nobody knows them.
Countermeasure: Mandatory baseline training for all users, deep-dive training for specialized roles.
Mistake 5: One-time setup without iteration
Governance is implemented once and then forgotten.
Countermeasure: Plan annual reviews, actively gather feedback from business units, respond to regulatory changes.
Conclusion on Using AI
Internal AI governance is the key to using artificial intelligence safely and effectively. It connects legal requirements from the EU AI Act and GDPR with technical safeguards and organizational processes.
Companies that establish structured AI governance now combine innovation advantage with regulatory certainty. They avoid fines, protect intellectual property, and build trust with customers, employees, and partners.
