18 March 2026

Vision on AI Impact in Software Development — Part 2: Rachèl Heimbach

OpenValue’s Rachèl Heimbach on why successful AI adoption is as much an organizational challenge as a technical one — and what enterprises can do to accelerate the transition.

Series context: In Part 1 of this series, OpenValue CTO Bert Jan Schrijver gave us the strategic view: AI as a power tool, the junior-senior paradox, and the importance of digital sovereignty. He noted that many organizations get stuck precisely because the possibilities feel endless. In this second installment, we go from strategy to the trenches. Rachèl Heimbach, Principal Engineer at OpenValue and Solution Architect at a major Dutch financial institution, shares what it actually takes to implement AI in a complex, regulated enterprise — and what he’s learned about bridging the gap between technical possibility and organizational reality.

Rachèl Heimbach — Vision on AI Impact in Software Development

From demo video to boardroom budget

At the end of 2024, Rachèl started building proofs of concept for AI implementation at a large Dutch financial institution. He integrated design system APIs with GitHub Copilot so that developers could generate compliant code and have existing code reviewed against platform standards. By mid-2025, he had turned this into a demo video showing the integration in action using MCP (Model Context Protocol). That video travelled further than expected: first to his department leadership, then to a Quarterly Business Review attended by various IT departments of the financial institution. From there it reached C-level leadership, and suddenly serious development budget became available.

“The organization recognized the strategic importance. We received a significant budget for AI acceleration, with the mandate to move quickly. So we launched a lot of initiatives at once.”

Looking back, Rachèl is honest about where those early initiatives landed. Most were what you might call first-generation AI applications — making Copilot context-aware, building MCP servers, generating documentation from code. Valuable, but still the more accessible applications of AI in software development.

The real learning, he says, came from what happened next: discovering that the biggest challenges weren’t technical at all.

The AI technology is ready for production

“I thought I was stepping into a technology project. It turned out to be an organizational transformation.”

If there’s one thread that runs through Rachèl’s entire experience, it’s this: the AI technology works. Building an MCP server isn’t technically complex. Connecting an LLM to a codebase isn’t hard. Even generating complete front-end and back-end applications with local models is technically possible in the first months of 2026. The real challenges are elsewhere.

In heavily regulated environments, every AI use case requires validation through newly created roles and processes. Explaining an AI system well enough to secure approval can take weeks or even months before any code is written outside a restricted sandbox. Managed devices are locked down. And teams that need to collaborate across organizational boundaries — different domains, different departments — each have their own priorities and timelines.

Rachèl recognizes a pattern that many enterprises will find familiar: the desire to move forward exists at every level, but the mechanisms to do so safely at enterprise scale are still being built.

“Everyone wants to move forward with AI. The challenge is that legal and risk departments are navigating genuinely new territory — it’s not that they don’t want to, it’s that the frameworks for responsible AI at this scale are still being established. That’s a solvable problem, but it takes time and deliberate effort.”

Navigating the enterprise: top-down meets bottom-up

In Part 1, Bert Jan described AI adoption as varying predictably by organization size and risk profile. Rachèl’s experience adds a practitioner’s perspective to that observation.

From the top: Upper management declares that (agentic) AI is the future. Strategic ambition is high, and leadership is willing to invest.

From the bottom: Developers are eager. Some are already using Claude Code and other advanced AI tools in personal projects. They see the potential and want access to better AI models and tooling.

In the middle: The functions that must actually approve, implement, and govern these systems — legal, risk, compliance, enterprise architecture — are working to develop frameworks that don’t yet exist. It’s a new reality that requires new approaches.

As Rachèl puts it: “Bottom-up enthusiasm runs into policies that are still being shaped. Top-down ambition needs to be translated into workable processes. The key is bringing these perspectives together — and that requires patience, good communication, and a willingness to build trust incrementally.”

What helped his team was executive sponsorship, what Rachèl calls “the CTO’s blessing”. This allowed them to operate with more agility while the broader organization caught up. That latitude became a proving ground: demonstrate value at small scale, document what works, and use those results to build the case for wider adoption.

From data quality to AI readiness

“No SaaS product in the world is going to fix your organizational context layer. That’s always on you.”

One of the most important strategic decisions Rachèl’s team made was to invest in data infrastructure early, rather than chasing individual use cases. The reasoning was straightforward. AI models improve continuously. SaaS products come and go. But your organization’s unique context — its policies, standards, domain knowledge, and architectural decisions — will always require dedicated effort.

“Garbage in, garbage out. It doesn’t matter which AI model you use. If your organizational data isn’t structured and accessible, you have a generic tool that doesn’t understand how your organization works.”

This led to a focus on MCP server integrations that make organizational knowledge machine-readable and accessible to AI systems. These integrations currently power GitHub Copilot with organization-specific context, but they’re designed to be reusable. When autonomous agents arrive — and Rachèl believes that moment is approaching faster than most expect — the same data infrastructure will serve them.
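What “machine-readable organizational knowledge” looks like in practice can be sketched in a few lines. The snippet below is a hypothetical, minimal illustration — the topic names, records, and tool name are invented, not the team’s actual implementation — of standards stored as structured data that an MCP tool handler (or any other AI integration) could query on demand:

```python
import json

# Hypothetical sketch: organizational standards kept as structured
# records instead of free-form wiki pages, so any AI integration
# (an MCP tool, a Copilot extension) can query them programmatically.
STANDARDS = {
    "frontend/buttons": {
        "component": "dv-button",  # invented design-system component name
        "rule": "Use the design-system button; never raw <button> tags.",
        "owner": "platform-team",
    },
    "api/error-handling": {
        "component": None,
        "rule": "Return problem+json bodies for all 4xx/5xx responses.",
        "owner": "api-guild",
    },
}

def lookup_standard(topic: str) -> str:
    """Return a machine-readable standard for a topic, or a miss marker.

    In an MCP server, a function like this would back a tool (say,
    `get_platform_standard`) that lets the model pull org context on demand.
    """
    record = STANDARDS.get(topic)
    if record is None:
        return json.dumps({"found": False, "topic": topic})
    return json.dumps({"found": True, "topic": topic, **record})
```

The design point is that the same structured store serves today’s Copilot integration and tomorrow’s autonomous agents: the consumer changes, the data layer does not.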

Building this foundation revealed an important insight: data quality is not just a technical concern. Teams that previously wrote documentation and moved on now need to become data owners. These data owners are responsible for maintaining and curating information that AI systems will consume and act upon. Building evaluation frameworks with hundreds of domain-specific test questions falls on subject matter experts who already have full calendars. And some of the data maintenance itself can be partially automated — for instance, generating documentation directly from source code with human-added context where needed.
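The partial automation Rachèl mentions — generating documentation directly from source code — can be sketched with Python’s standard `ast` module. This is an illustrative example, not the team’s tooling; it assumes Python sources and simply collects docstrings into a structured record a data owner can review and enrich:

```python
import ast

# Sketch: pull module and function docstrings out of source code into
# a structured mapping that AI systems can consume, rather than
# hand-maintaining a separate wiki page.
def extract_docs(source: str) -> dict[str, str | None]:
    """Map module/function names to their docstrings."""
    tree = ast.parse(source)
    docs = {"<module>": ast.get_docstring(tree)}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            docs[node.name] = ast.get_docstring(node)
    return docs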

“We started with the teams that had the best documentation in the organization. Even they have significant work ahead to make their data AI-ready. That tells you something about the scale of this challenge — and the importance of starting early.”

Context engineering: the next frontier

Where Bert Jan spoke about AI-assisted coding as one of three practical categories, Rachèl’s experience points to a fourth — one that sits above all of them: context engineering.

The concept is powerful in its simplicity. Instead of feeding isolated prompts to an AI, you build organizational context layers that inform everything the AI does. Legal policies, risk frameworks, enterprise architecture standards, GDPR requirements, the EU AI Act — all structured so that an LLM can consume and enforce them automatically.

“Imagine a developer starts working on a new feature. The AI already knows: this conflicts with your enterprise architecture. This touches GDPR-sensitive data. This risk policy hasn’t been addressed. You get flagged before you write a single line of code — not after.”
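The flagging Rachèl describes can be sketched as policies expressed as data rather than documents. Everything below is invented for illustration — the policy IDs, trigger fields, and messages are assumptions, not the institution’s actual rules — but it shows the shape of a context layer that warns before code is written:

```python
# Hypothetical sketch of an organizational context layer: policies as
# machine-readable rules that can flag a planned change up front.
POLICIES = [
    {
        "id": "GDPR-01",
        "trigger_fields": {"email", "date_of_birth", "iban"},
        "message": "Touches GDPR-sensitive personal data: privacy review required.",
    },
    {
        "id": "ARCH-07",
        "trigger_fields": {"direct_db_access"},
        "message": "Conflicts with enterprise architecture: use the domain API.",
    },
]

def check_feature_plan(declared_fields: set) -> list:
    """Return policy flags for a planned feature, given the data it declares."""
    flags = []
    for policy in POLICIES:
        if policy["trigger_fields"] & declared_fields:
            flags.append(f'{policy["id"]}: {policy["message"]}')
    return flags
```

An AI assistant with this layer in its context can raise `GDPR-01` the moment a developer declares an `email` field in a feature plan — before generation, not after review.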

This is fundamentally different from the current model of code generation followed by code review. It shifts AI from a coding assistant to an organizational awareness layer. And it requires something most enterprises haven’t begun to prepare for: every policy, every standard, and every rule needs to be machine-readable.

“Legal has beautiful documents on Confluence and SharePoint. Risk has their own. But none of that is structured in a way that an autonomous agent can work with. That’s the real work to successfully implement AI at enterprise scale.”

Building trust while using AI

“The question isn’t whether we need governance — it’s how we embed it into the AI workflow itself, so it scales with the technology.”

Rachèl identifies what may be the most fundamental question of enterprise AI adoption: how do you maintain governance as AI-generated output scales dramatically?

The current model — where every piece of AI-generated work passes through multiple layers of human review — works today. But it won’t scale. If AI enables a tenfold increase in output, and every piece still requires manual review by risk, legal, and security, the human bottleneck grows rather than shrinks.

“Everyone says ‘human in the loop.’ That’s important — and it will remain important for critical decisions. But we also need to think about embedding governance into the AI context itself, so that compliance happens by design, not just by review.”

The path forward, Rachèl argues, is incremental. Build policies into the AI context layer. Implement large evaluation sets to measure AI output quality objectively. Establish trust gradually, backed by measurable metrics. Start with lower-risk applications where the cost of errors is manageable, and expand as confidence grows.
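The evaluation sets Rachèl mentions can be as simple as question–answer pairs scored against the AI’s output. The sketch below assumes a naive substring match for grading; real setups would use hundreds of domain questions and richer grading (rubrics, LLM-as-judge), and the questions here are invented examples:

```python
# Minimal sketch of a domain evaluation set with a crude exact-match
# scorer. The cases are illustrative, not a real eval suite.
EVAL_SET = [
    {"question": "Which button component is mandated?", "expected": "dv-button"},
    {"question": "Error body format for APIs?", "expected": "problem+json"},
]

def score(answer_fn) -> float:
    """Fraction of eval questions the system under test answers correctly."""
    correct = sum(
        1 for case in EVAL_SET
        if case["expected"].lower() in answer_fn(case["question"]).lower()
    )
    return correct / len(EVAL_SET)
```

Run against every model or prompt change, a score like this turns “do we trust the AI?” into a measurable trend rather than a gut feeling.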

This approach allows organizations to maintain their governance standards while gradually reducing the manual overhead. It’s not about removing controls — it’s about making them part of the system architecture rather than a purely manual process.

What a CTO should do right away

When asked for concrete advice to a CTO or IT director who wants to get serious about AI, Rachèl is characteristically practical.

Start with your data. Look at your internal products, your policies, and your organizational knowledge. Ask yourself: is this represented in a way that any AI initiative — current or future — can consume? If the answer is no, that’s your first priority. No SaaS product will solve this for you.

Create data ownership. People need to understand that their documentation, policies, and standards will be consumed by AI systems. They’re not just writing for humans anymore — they’re writing for AI agents. That requires a mindset shift and dedicated capacity.

Don’t wait for the technology. The current AI models are already remarkably capable. Local LLMs can handle a surprising amount of work. The constraint isn’t computing power or model intelligence — it’s organizational readiness.

“We’re already at the point where, with the right architecture, we can generate complete applications using local models. The tasks are small enough, the context is specific enough. We don’t need the latest flagship model for everything.”

Private AI, sustainability, and digital sovereignty

Echoing Bert Jan’s emphasis on digital sovereignty from Part 1, Rachèl sees an acceleration in the move toward private, on-premises AI infrastructure. At his client’s organization, the shift away from American cloud dependencies is already underway.

“With the right intent routing, we can classify tasks — send complex ones to powerful models and handle simpler ones locally. For carbon footprint, cost control, and sovereignty, this is becoming essential.”
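The intent routing in that quote can be sketched as a small classification step in front of model selection. The model names, intent labels, and the 8,000-token threshold below are all illustrative assumptions, not the client’s configuration:

```python
# Hedged sketch of intent routing: simple, small-context tasks stay on
# local hardware; everything else goes to a hosted frontier model.
LOCAL_MODEL = "local-codegen-7b"      # hypothetical on-prem model
CLOUD_MODEL = "frontier-cloud-model"  # hypothetical hosted model

SIMPLE_INTENTS = {"boilerplate", "docstring", "rename", "unit-test"}

def route(intent: str, context_tokens: int) -> str:
    """Pick a model for a task based on its intent and context size."""
    if intent in SIMPLE_INTENTS and context_tokens <= 8_000:
        return LOCAL_MODEL
    return CLOUD_MODEL
```

Even a heuristic this crude moves the bulk of routine generation off metered cloud tokens; a production router would classify intent with a small model rather than a hand-written label.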

He also points to an approach already under discussion at his client: distributing AI workloads across developers’ own high-powered machines, expensive hardware that otherwise sits idle for much of the day. It’s a pragmatic way to reduce cloud dependency while leveraging infrastructure the organization already owns.

“Every developer has a powerful machine. If you can distribute generation tasks to local hardware, you reduce token costs, carbon footprint, and vendor lock-in all at once.”

AI technology is ready! Are you?

“Make no mistake: the AI technology is already here. We don’t need to wait for two more model generations. It’s the organizations that need to catch up.”

Recently, Rachèl saw something that shifted his perspective. His team demonstrated autonomous AI generating complete features: not proof-of-concept demos, but working code that could, with the right organizational infrastructure, move toward production. His assessment went from “this is still a few years away” to “this is happening now.”

“I used to believe we’d spend the next few years reviewing AI-generated code. I’m beginning to think differently. With the right checks and balances, with memory, with policies in the context layer — we can move toward production much faster than most people expect. The question isn’t whether the technology can do it, but whether the organization is ready.”

It’s a conclusion that connects directly to Bert Jan’s observation in Part 1: the developer’s role is shifting from execution to orchestration. Rachèl’s experience shows that this shift isn’t limited to individual developers — it applies to entire organizations. The enterprises that invest in AI-ready data, embedded governance, and organizational transformation today will be the ones leading their industries tomorrow.

And the technology? It’s already waiting.

The bottom line

Where Bert Jan gave us the strategic compass — AI is a power tool, not a replacement — Rachèl provides the AI practitioner’s playbook. The picture that emerges is both ambitious and grounded. AI adoption in complex enterprises is as much an organizational transformation as it is a technology shift. It touches data ownership, governance, team structures, cross-departmental collaboration, and ultimately, trust.

For organizations ready to take the next step, Rachèl’s priorities are clear: invest in your data, structure your policies for machines, build trust incrementally, and above all — don’t wait for the technology. It’s already here. The competitive advantage goes to those who adapt their organizations to use it.

Want to learn more about AI integration and agentic architectures? Follow OpenValue’s AI Integration for Java Developers training.

Rachèl Heimbach is Principal Engineer at OpenValue and Solution Architect specializing in platform engineering and AI adoption at enterprise scale. He bridges the gap between cutting-edge AI capabilities and the organizational realities of regulated industries.

This is Part 2 of the “Vision on AI Impact in Software Development” series. Read Part 1 with Bert Jan Schrijver for the strategic perspective.

Want a head start on AI in your development workflow? Check out the OpenValue training portfolio — by developers, for developers.


Ramon Wieleman

Ramon is driving business development and partnerships for OpenValue Group as Group Director - connecting exceptional software development experts with organizations that need tailor-made solutions. Our mission: Better Software, Faster.