
What AI Summit New York 2025 Got Right About AI in Production

December 17, 2025

By Anastasiia D.

  • Agentic AI

  • AI in Production


AI has been talked about as a game-changer for years. But inside most organizations, it still shows up as pilots, demos, and one-off experiments rather than systems that run the business.

Recent advances, from agent-based workflows to more capable foundation models, have pushed AI closer to day-to-day operations. At the same time, they’ve exposed a different set of challenges: cost control, trust, organizational readiness, and infrastructure limits.

These tensions were front and center at this year’s AI Summit New York, where leaders from healthcare, finance, automotive, and IT shared what it really takes to move AI beyond experimentation.

Our Business Development Executive, Mario Stalder, shares his thoughts about the event:

There’s no shortage of AI capability today. What’s missing is the structure to support it — data platforms, ownership models, and willingness to treat AI as a long-term project rather than a short-term experiment.

This article distills the most valuable lessons from the AI Summit conference sessions and discussions, along with our experience helping teams move AI into production.

What Agentic AI Is Good For (and What It Isn’t)

One of the more interesting themes that came up was the shift toward Agentic AI — systems that don’t just respond to prompts but can make autonomous decisions and achieve goals.

In the session “Agentic AI: Unlocking What Works, What Doesn’t, and Maximizing ROI,” Tsvi Gal, CTO at Memorial Sloan Kettering Cancer Center, focused less on theory and more on what works in practice.

His main point was that a single “do-everything” agent rarely makes sense. Instead, teams see better results with a mix of general agents and more narrowly scoped ones, such as agents that handle pre-surgery prep or produce structured patient summaries for doctors.

A big part of the discussion was cost and reliability. These systems need to be tested under real-world load, not demos. If they can’t handle volume or become too expensive to run, they won’t make it past a pilot.

The same themes came up in the panel “C-Suite Perspectives on Human-Centered Agentic AI,” with leaders from Ford, Mastercard, and Neo4j. While agents can act independently, nobody argued for removing people from the process entirely. In higher-risk environments, human oversight is still essential. The general agreement was that AI should help people work faster or more consistently, not make decisions in isolation.

At Janea Systems, we see Agentic AI as a double-edged sword: it offers immense power but introduces significant cost and complexity. Tooling choices matter more than they might seem at first. Decisions like Cursor vs. Copilot change how teams write, test, and maintain agent logic.

Architecture matters even more. Without careful design, agent workflows can quickly drive up token usage and infrastructure costs, which is something we’ve explored in our work on reducing agentic AI costs with JSPLIT.
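One reason costs climb is that agent loops typically re-send the accumulated context on every step, so input tokens grow with each turn. The back-of-envelope model below illustrates that dynamic; all prices and token counts are invented for illustration and are not real model rates.

```python
# Rough cost model for a multi-step agent workflow.
# Prices and token counts are illustrative assumptions only.

def workflow_cost(steps, price_per_1k_input=0.003, price_per_1k_output=0.015):
    """Sum input/output token cost over agent steps.

    Each step is (input_tokens, output_tokens). Because the agent
    re-sends the growing conversation as input each turn, input
    tokens (and therefore cost) rise with every step.
    """
    total = 0.0
    for input_tokens, output_tokens in steps:
        total += input_tokens / 1000 * price_per_1k_input
        total += output_tokens / 1000 * price_per_1k_output
    return total

# A 4-step loop where context accumulates by ~400 tokens per turn:
steps = [(1_000, 300), (1_400, 300), (1_800, 300), (2_200, 300)]
print(f"estimated cost: ${workflow_cost(steps):.4f}")
```

Even this toy model shows why trimming context (summarization, retrieval, splitting work across narrower agents) pays off: the input side of the bill compounds across steps while outputs stay roughly flat.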

If you’re thinking about building agent-based systems that can run in production without unexpected costs or brittle workflows, our AI & MLOps Services focus on precisely that.

Human-Centered Design in Healthcare

Healthcare can be one of the most impactful sectors for AI, but only if people trust it. And trust, as the discussions made clear, doesn’t come from the most advanced models. It comes from careful design and discipline.

In the session “Human-Centered Design: Transforming AI Development for Real-World Impact,” Dr. Tanvi Jayaraman, Clinical Lead at OURA, and Daniella De Grande from The Evolv It Group discussed how wearable data is used to support more proactive health decisions.

A key point was that turning this kind of data into something useful isn’t just a modeling problem. It requires compliance (GDPR, HIPAA) and a level of scientific validation that healthcare can stand behind. At OURA, for example, more than 20 PhD scientists are involved in validating models before insights reach users.

That perspective lines up with what we see in the field. In our article, AI in Healthcare & Pharma Summit: Why AI Models Don't Scale, the biggest blockers weren’t model accuracy or tooling; they were fragmented data and the lack of what we call “day-two rigor.”

Many teams can get a prototype working, but struggle to maintain data quality, governance, and validation once systems move into production. Without solid data engineering, data never turns into insights clinicians can rely on — something we also explored in 13 Use Cases & Results in Life Sciences.

You can see this play out in real products as well. Our team worked through similar practical issues with OtoNexus, a company developing a handheld ultrasound device that helps assess middle ear infections. Our work helped move the device from proof of concept to production: we built on-device processing and data pipelines, tightened battery and system-level behavior, and set up reliable data transfers for further analysis.

That work wasn’t about a model in isolation, but rather making all the pieces work together in a way clinicians can trust and use.

In our experience, this is where most healthcare AI efforts either stabilize or fall apart. Modern data architecture, clear ownership, and compliance built in from the start make the difference between a demo and something that gets used.

If you’re working on healthcare AI and need to modernize data foundations or build compliant analytics and AI tools, check out our Data & Analytics Services.

AI That Survives Past the Pilot

Getting AI out of the experiment phase and into day-to-day business use isn’t just a technical problem. It’s an organizational one.

In his session “Building an AI-Native Enterprise,” Nitin Seth, CEO of Incedo, made the case that companies need to rethink how they’re set up if they want AI to matter at scale. His point wasn’t that every company needs to chase efficiency gains. It was that AI forces changes in how teams work, how decisions get made, and how products are built around customer needs and revenue, not just internal optimization.

That idea was reinforced in the panel “The AI Operating Model,” where Neelesh Prabhu (DTCC) and Sid Vyas (LPL Financial) walked through what this transition looks like in practice. They outlined a framework:

  1. Establish clear AI policies.
  2. Set up enablement teams.
  3. Democratize tooling.
  4. Upskill the workforce.
  5. Ensure rigorous change management.

One point they all agreed on: none of this works without modern data platforms underneath. If data can’t scale, AI initiatives tend to turn into a bottomless investment sink — lots of effort, minimal impact.

We often see enterprises struggle with the “pilot purgatory” phase. Proofs of concept show promise, but nothing ever quite makes it into production. To break that cycle, we often use rapid prototyping as a way to reduce risk early.

By testing ideas in weeks rather than months, teams can figure out which workflows are worth scaling before committing serious budget and headcount. We’ve written about this approach in more detail in Rapid Prototyping as a Strategic De-Risking Tool and Be the 5%: How Businesses Succeed With AI.

Don't let your AI projects stall. Our AI Maturity Workshops focus on building systems and operating models that support AI at scale. Over 3 to 15 business days, we work with your team to pinpoint the use cases that are most likely to pay off, check whether your current setup can support them, and outline how those solutions would be deployed and monitored in your existing environment.

Contact us via the form below to learn more.

Infrastructure: What Everything Else Depends On

None of the things discussed at the summit would work without solid infrastructure underneath. Models, agents, and workflows all hit limits quickly if the foundation isn’t there.

In the session “The AI Backbone,” Nadia Carlsten, CEO of the Danish Centre for AI, talked about Denmark’s sovereign AI supercomputer, Gefion. The focus wasn’t raw compute for its own sake, but control — who owns the data, where it lives, and how compliance is enforced. That kind of setup is becoming more important, especially in regulated industries, where data governance can be just as critical as performance.

Another infrastructure-heavy discussion came from “Unlocking the Semantic Layer,” with Chris Oshiro from AtScale and Rich Williams from Hexaware. Their point was straightforward: if different teams don’t agree on a semantic layer, no amount of modeling will fix it. A shared semantic layer helps keep business logic consistent. It reduces the risk of feeding messy or conflicting data into LLM-driven systems — a problem most teams run into sooner or later.

This is where things usually succeed or fail. Infrastructure sets the ceiling for what AI systems can do. We see this both at the edge, where power, memory, and thermal limits matter, and in the cloud, where cost and scale become the constraints. We covered both sides in 4 Power Management Strategies for Edge AI Devices and in our look at the cloud platform scaling announcements coming out of Microsoft Build.

Whether you’re tuning low-level performance or building data platforms that need to support AI workloads at scale, infrastructure decisions tend to have the longest-lasting impact. Our work in Mission-Critical Software Engineering is about creating systems that are reliable, compliant, and built to last.

AI as a Core Capability in 2026

Teams that succeed treat AI as part of their core stack, not a side experiment. They invest in architecture, governance, and workflows that support real usage, real scale, and real accountability.

We work with companies at this transition point — when AI needs to move past pilots and start delivering durable value. Whether that means building production-ready agentic workflows, modernizing data platforms, or designing infrastructure that can support AI at scale, our focus is on systems that hold up in real-world conditions.

If you’re navigating that shift, get in touch with us to talk through your use case and map out possible solutions. Check out our Services for more information.
