Charting the Course for AI Maturity at the AI & Big Data Expo

September 30, 2025

By Anastasiia D.

  • AI Maturity,

  • Enterprise AI,

  • LLMs,

  • AI ROI

This September, our team had the opportunity to attend AI & Big Data Expo Europe 2025 in Amsterdam. Along with the Ai4 event, it’s one of the key industry gatherings shaping the conversation around enterprise AI.

The event brought together technology leaders, innovators, and practitioners from across the globe to examine where AI is today and where it’s heading next. Janea Systems was represented by Sergey Cujba and Benedetto Proietti, who joined hundreds of senior engineers, data leaders, and executives to explore the latest strategies for scaling AI.

I left the conference with mixed feelings. The excitement is still in the air, but also a new sense of urgency. Everyone knows the easy wins are gone. Now it’s about scaling responsibly and proving real value.

Benedetto Proietti

Head of Architecture at Janea Systems

Across keynotes, panels, and workshops, one theme was impossible to miss: AI is rapidly maturing. The conversations have shifted from experimentation to execution, from hype to measurable business outcomes. In this recap, we’ve distilled the most insightful themes and standout sessions that offer a glimpse into how enterprises will build and scale AI in 2026 and beyond.

The Organizational Transformation: Becoming AI-Ready

One takeaway at AI & Big Data Expo Europe 2025 stood out: strategy sets the direction, but organizational transformation determines whether AI ambitions become reality or remain stuck on paper.

HelloFresh’s Senior Director of Software Engineering, Dilip Saha, delivered a keynote titled “Transforming Your Organization to Be AI-Ready.” It was one of the most practical sessions of the event, laying out a blueprint for building AI readiness.

Three Pillars of an AI-Ready Organization

The AI Readiness framework boils down to three elements: people, systems, and governance. Buying new tools alone won’t make you AI-ready; true readiness comes from aligning skills, architecture, and guardrails.

  • People: Building new capabilities across teams, not just within a central data science group. Upskilling, cross-functional collaboration, and embedding AI literacy across the business are key.
  • Systems: Architecting scalable, decentralized AI platforms that empower business units, not isolate them. This shift from centralized to distributed AI teams signals a more mature, agile operating model.
  • Governance: Putting structures in place that enable innovation while ensuring security, ethics, and accountability.

Instead of a single, siloed team acting as a gatekeeper, business units are equipped to use AI for their challenges. With decentralized AI systems, organizations can move from experimentation to enterprise-wide adoption.

AI Factories > Data Centers

As enterprises move beyond strategic planning, the focus shifts to execution. The AI & Big Data Expo Europe 2025 agenda dedicated significant attention to the engineering disciplines required to build and deploy AI solutions. Everyone, from engineering leads to CDOs, was talking about how to move from one-off data science projects toward an industrialized "AI factory" model.

From Copilots to Integrated Workflows

Last year, GenAI was the shiny new toy on the block. This year, it’s woven into the AI-assisted software development lifecycle. The panel "GenAI for Software Development - Beyond the Hype, Into the Code," featuring engineering leaders from Decathlon, Adidas, eBay, and ING, summed it up perfectly. The conversation has moved from sorting hype from reality to integrating AI into the toolchain: IDEs, CI/CD, ML testing (including retrieval-augmented generation, or RAG, testing), and code review.

The conversation was not just about speed, but the fundamental changes GenAI is forcing upon engineering culture and practice. At Janea Systems, we conducted a series of experiments exploring how AI-assisted development delivers value across the SDLC. Our team tested AI models for frontend and backend engineering, and the results were impressive. Give it a read!

Risk conversations are getting more mature, too: versioning, reproducibility, and hallucination control are front and center. The industry is entering phase two of GenAI adoption, developing the guardrails and best practices necessary to use it without sacrificing code quality, security, or maintainability.
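To make "hallucination control" concrete, here is a minimal sketch of one such guardrail (our own illustration, not any panelist's tooling): each sentence of a RAG-generated answer is scored by word overlap against the retrieved context, and low-overlap sentences are flagged for review. Production systems typically use entailment or fact-checking models rather than lexical overlap, but the shape of the check is the same.

```python
import re

def grounding_report(answer: str, context: str, threshold: float = 0.5):
    """Flag answer sentences whose content words barely appear in the
    retrieved context -- a cheap proxy for ungrounded (hallucinated) claims."""
    context_words = set(re.findall(r"[a-z0-9]+", context.lower()))
    report = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = set(re.findall(r"[a-z0-9]+", sentence.lower()))
        if not words:
            continue
        overlap = len(words & context_words) / len(words)
        report.append((sentence, round(overlap, 2), overlap >= threshold))
    return report

# Hypothetical example: the second sentence is not supported by the context.
context = "PyTorch 2.1 added support for Python 3.12 and improved compile times."
answer = "PyTorch 2.1 supports Python 3.12. It also ships a built-in GUI designer."
for sentence, score, grounded in grounding_report(answer, context):
    print(f"{'OK  ' if grounded else 'FLAG'} {score:.2f} {sentence}")
```

Wiring a check like this into CI, alongside versioned prompts and pinned model snapshots for reproducibility, is what "phase two" guardrails look like in practice.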

Internal AI Platforms

If there was a recurring theme across sessions, it was platforms. Uber’s Michelangelo platform was a standout example. Melda Salhab showed how it has evolved to support both predictive and generative workloads, making it a true backbone for innovation rather than just another internal tool.

Owning or renting computing power doesn’t give you a moat – everyone has access to the same chips. But building a well-designed AI factory lets you:

  • Ship new AI-powered products faster
  • Maintain consistency and quality across teams
  • Embed governance and security at the platform level
  • Leverage proprietary data to create domain-specific models that competitors can’t replicate

TBAuctions shared a compelling story, too. By standardizing their tech stack with Databricks, Azure, and Terraform, they are scaling AI agents with precision. What struck me is that companies like these are building core platforms to bake AI into their operational DNA. It’s a deliberate strategic choice: build the foundation once, then innovate on top repeatedly. This strategic investment in core infrastructure is what enables the reliability and scale required for powering global products like Bing Maps.

And let’s be honest, if you want to deploy autonomous AI responsibly, you need that solid foundation. IBM’s workshop on secure Agentic AI made the same point: without a standardized platform acting as a control plane, governance and monitoring become impossible at scale.

Domain-Specific AI Is Where the Real Value Is

Another big takeaway: companies delivering tangible impact don't chase generic solutions. They’re going deep into their own domains. Siemens’ Holmes GPT is a great example: it’s a finely tuned financial copilot trained on Siemens’ proprietary data. That’s what makes it powerful and defensible.

The underlying models are commodities. The unique data, workflows, and deep integration are what set leaders apart. Platforms, GenAI integration, and domain-specific applications are no longer nice-to-haves. They are becoming the backbone of how competitive enterprises build software.

If you’re thinking about how to take your organization from AI strategy to AI execution, now is the moment. Contact us to explore how your company can build its own AI factory.

Operationalizing AI for Production-Grade Performance

Building a great AI model is only half the story. The real test is whether you can run that model reliably, securely, and at scale. This is where MLOps, AI infrastructure, and governance come together. It’s not glamorous, but it’s what separates production-grade systems from proof-of-concept experiments.

The resilience of these pipelines often depends on foundational engineering that ensures ML frameworks perform consistently across all environments. This is exactly what Janea Systems did for PyTorch (and many others). We systematically audited and hardened the test suite, modernized build systems (including support for C++20 modules), improved runtime linking, and integrated profiling and performance tooling. The result was a 50% drop in Windows-specific issues, near-parity stability with other platforms, and a reinforced foundation for scalable deployment across environments.

The Anatomy of a Successful AI Project

Tamara Tatian, AI Architect at IBM, asked the hard questions many of us face: Is GenAI the answer to every project? Should speed of innovation trump governance and security? In the age of powerful but unpredictable LLMs, fundamentals such as data quality, observability, and governance play a critical role.

This is less about technical execution and more about strategic risk management. The winning organizations will be the ones that balance rapid innovation with responsible guardrails. Microsoft’s session on “Lessons from Red Teaming 100 Generative AI Products” reinforced that point, showing how to uncover hidden risks before they hit customers or regulators.

Pipelines That Heal Themselves

No AI system works without strong pipelines. The Zurich Insurance and JTI panel “Building Robust Pipelines” focused on designing for scalability and efficiency. This focus on performance is critical, as accelerating MLOps and DataOps workflows can yield significant efficiency gains.

But what impressed me most was how far the conversation has moved beyond that. Shah Muhammad from Sweco spoke about self-healing systems — pipelines that validate data, catch failures early, and recover automatically. Think of it as anti-fragility in MLOps: systems that don’t just survive stress but adapt and improve under it. That’s the new frontier — freeing engineers from firefighting so they can focus on higher-value innovation.
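As a toy illustration of the self-healing idea (our own sketch under simplifying assumptions, not Sweco's implementation), a pipeline stage can quarantine invalid records, retry transient failures with exponential backoff, and degrade to a fallback value instead of crashing the run:

```python
import time

def run_stage(records, transform, validate, retries=3, fallback=None):
    """Run a pipeline stage defensively: quarantine invalid rows, retry the
    transform on transient failures, and recover with a fallback value."""
    clean = [r for r in records if validate(r)]
    dropped = len(records) - len(clean)
    if dropped:
        print(f"quarantined {dropped} bad record(s)")  # surface, don't crash
    out = []
    for record in clean:
        for attempt in range(1, retries + 1):
            try:
                out.append(transform(record))
                break
            except Exception:
                if attempt == retries:
                    out.append(fallback)  # degrade gracefully
                else:
                    time.sleep(0.01 * 2 ** attempt)  # exponential backoff

    return out

# Usage: the negative amount is quarantined; the valid rows flow through.
data = [{"amount": 10}, {"amount": -5}, {"amount": 30}]
result = run_stage(
    data,
    transform=lambda r: r["amount"] * 2,
    validate=lambda r: r["amount"] >= 0,
)
print(result)  # [20, 60]
```

Real self-healing pipelines layer the same pattern at the orchestrator level (dead-letter queues, automatic backfills, schema checks at every boundary), but the principle is the one Muhammad described: validate early, recover automatically, escalate only what machines cannot fix.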

Measuring and Maximizing AI ROI

The closing panel, “What Does Real AI ROI Look Like?”, brought together senior leaders from Deutsche Bank, IBM, and the Institute for Digital Transformation. The conversation centered on financial and operational metrics like cost savings, revenue growth, and productivity gains. The industry is finally acknowledging that AI model accuracy means nothing if it doesn’t move a business metric. A technically perfect model that fails to create value is, in the end, a failed investment.

Panelists emphasized aligning AI initiatives with strategic objectives and managing them like a portfolio: prioritize, monitor, and continuously reassess based on actual and potential returns. This is how AI moves from being a series of disconnected projects to a disciplined, value-driven program.

From POC to Production ROI

The presentation by Ranya Bellouki from Snowflake, “From AI to POC to ROI: Productionising AI for Value”, highlighted that achieving high ROI is a direct result of sound engineering and operational discipline. This journey is where value gets created. Janea Systems helps mature POCs into market-ready products – like when we worked on the ML-powered medical device for OtoNexus.

If you’re ready to turn AI from a cost center to a growth engine, let’s talk about building the ROI flywheel in your organization.

Ranya Bellouki outlined a path from experimentation to revenue generation. The real inflection point comes when you can reliably deploy and manage models at scale. That’s where the “AI factory” concept becomes more than a metaphor. Without that industrialized backbone, AI’s impact will stay isolated in small pilots and proofs of concept.

Harsh Deshpande of IBM Consulting closed with a talk titled “From Strategy to Impact: Making AI Transformation Count on the Balance Sheet.” This was the ROI conversation elevated to the boardroom. Instead of talking about models and pipelines, he talked about profit margins and cashflows.

This is the final stage of AI maturity: AI’s value is expressed in financial terms the entire executive team understands. The session introduced a flywheel of productivity, growth, and strategic reinvestment that turns AI from an IT expense into a core driver of business performance.

Directives for AI-Driven Enterprises in 2026

Across the sessions, the AI maturity model for enterprises started to take shape. It’s a useful diagnostic for any organization trying to benchmark its current state and plan its next moves.

From that, we brought together three directives for leaders looking to build durable, AI-driven enterprises in 2026 and beyond:

  1. Treat AI as a Core Business Function, Not an IT Project: The conversation must be elevated beyond the technology department and framed in profit margins, productivity gains, and strategic advantage. This requires deep and sustained engagement from leaders across finance, operations, and IT.
  2. Invest in the AI Factory, Not Just Models: While the allure of the latest AI models is strong, the long-lasting competitive advantage will be built upon proprietary, internal platforms for AI development, deployment, and operations.
  3. Lead the Cultural Transformation: The most significant hurdles to AI adoption are human, not technical. Issues of trust in automated systems, the need for new skills and workflows, the establishment of ethical governance, and the redefinition of human roles are paramount.

The companies that act decisively on them in 2026 will set the pace for the next phase of AI transformation, turning ambition into measurable impact.

At Janea Systems, we run AI Maturity Workshops designed to help organizations assess their current state, identify gaps, and build a practical roadmap for scaling AI. Get in touch with us to explore how we can accelerate your journey.
