November 26, 2025
By Hubert Brychczynski
Artificial Intelligence,
System Integration,
Digital Transformation,
Generative AI

There's mounting evidence that generative AI may be a bubble waiting to burst. Microsoft, Nvidia, and OpenAI have reportedly been making circular investments in one another. Michael Burry, who predicted the 2008 housing crisis and was portrayed by Christian Bale in "The Big Short", revealed he's now betting against AI, too. Perhaps most concerning, however, is a July 2025 report from MIT revealing that 95% of enterprises have seen no return on their AI investments.
A 95% failure rate is shocking. What's even more shocking is that the failure is complete. It's not a matter of meager returns: the MIT report states that more than 9 out of 10 of the AI implementations it investigated yielded exactly zero profit.
In a recent article, we described the "death spiral" behind unsuccessful AI implementations, citing poor business alignment, data and research gaps, and runaway costs as the main reasons for failure. We also proposed rapid prototyping as an approach to address and mitigate those implementation pitfalls.
Rapid prototyping addresses the death spiral by exposing misalignments early, letting organizations change course before they've sunk too much into a project that doesn't fit the business. It puts user experience at the center, validating use cases in small tests rather than letting real users prove the entire solution wrong once it's already in production. Finally, faster iteration cycles mean the product can be adapted to the market within weeks instead of months.
When we learned that 95% of businesses can't implement AI at a profit, the first question on our minds was: what do the 5% do differently? What we found prompted our pivot to rapid prototyping as a viable approach to enterprise AI implementation. Today, we're sharing the key insights from the report that few seem to be focusing on, shedding light on the practices that bring profit to the companies applying them.
Enterprises that succeed with AI demand four specific capabilities from their vendors.
First, a deep understanding of workflows. Winners want vendors who invest time in understanding the technical architecture before proposing solutions, enabling surgical, customized implementations rather than one-size-fits-all platforms.
Second, an incremental and iterative approach. The last thing engineering teams need is another system that requires ripping out existing infrastructure. Successful implementations integrate with current tools (CI/CD pipelines, version control, internal portals) without forcing workflow disruptions.
Third, the ability to improve over time. Static systems that perform well in demos but can't adapt to evolving requirements become technical debt. Winners demand tools that learn from corrections, incorporate new edge cases, and build organizational intelligence through use. This rules out most off-the-shelf solutions and puts the emphasis on agentic, self-learning systems that get smarter with feedback.
Finally, proven track record. Trust and reliability top the list of business considerations when choosing an AI implementation vendor (Fig. 1). Without demonstrated results in similar technical environments, tools don't get past procurement regardless of their capabilities.
Fig. 1: Expectations about vendors from winning AI adopters – based on MIT report
These four expectations align directly with what MIT's research surfaces as four key AI implementation rules that separate the 5% who succeed from the 95% still searching for ROI.
MIT's research reveals that the 5% who succeed with AI follow four crucial steps in their implementation journey: choose the right vendor, target strategic workflows, iterate, and leverage agentic AI for dynamic adaptation (Fig. 2).
Fig. 2: The roadmap to AI implementation success
Let’s break these down one by one.
The data is clear: external partnerships deliver results twice as often as internal builds.
In MIT's sample, external partnerships with learning-capable, customized tools reached deployment roughly 67% of the time, compared to 33% for internally built tools (Fig. 3). The gap is consistent across organizations. More enterprises attempt internal development, but success rates heavily favor external partnerships.
Fig. 3: External partnership vs. internal builds in successful AI implementations
The reason? External partners bring specialized expertise, faster time-to-value, and lower total cost. They've solved multiple integration challenges before. They know what works and what doesn't. They understand workflow pain points.
Engineering leaders value senior engineers who can work autonomously to solve complex challenges. The same logic applies to AI partners: you need teams that work comfortably across technologies and operating systems and ramp up quickly without constant oversight.
Generic AI tools fail because they try to do everything for everyone. The winners take the opposite approach: they embed AI deeply into specific, high-value workflows where context matters.
The best-performing AI implementations target narrow pain points where automation delivers immediate, measurable impact. Top successful workflows from the MIT study include AI for call summarization and routing, document automation for contract processing, and code generation for repetitive engineering tasks (Fig. 4).
Fig. 4: Top successful categories for AI implementation in business
What makes these implementations work? Deep customization. The standout performers embed themselves inside existing workflows, adapt to operational context, and scale from focused footholds. A system that understands your approval process, data flows, and edge cases will outperform a beautiful interface that doesn't fit how work actually gets done.
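To make this concrete, here is a minimal sketch of what a narrowly scoped implementation, such as the call summarization and routing category above, might look like. The llm() helper, queue labels, and fallback logic are our own illustrative assumptions, not code from any deployment in the MIT study.

```python
# Hypothetical sketch: AI embedded in one narrow workflow (call summarization
# and routing), rather than a generic do-everything assistant.

ROUTING_QUEUES = ["billing", "technical_support", "cancellations"]  # assumed labels

def llm(prompt: str) -> str:
    """Placeholder for any LLM completion call (API or local model)."""
    raise NotImplementedError("wire up your model provider here")

def summarize_and_route(transcript: str) -> dict:
    summary = llm(f"Summarize this support call in 3 bullet points:\n{transcript}")
    queue = llm(
        "Pick exactly one queue from "
        f"{ROUTING_QUEUES} for this call:\n{transcript}"
    ).strip()
    # Guard against model output that doesn't match a known queue.
    if queue not in ROUTING_QUEUES:
        queue = "technical_support"  # safe default for human triage
    return {"summary": summary, "queue": queue}
```

The value of a contract this narrow is that it has one input, one decision, and one measurable outcome (routing accuracy), which makes the impact easy to verify before expanding.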
The fastest path to AI ROI leads through quick wins that prove value, then systematic expansion.
Top-quartile AI startups are reaching $1.2M in annualized revenue within 6-12 months of launch. Their approach focuses on landing small, visible wins in narrow workflows, then expanding once they've proven the model works.
The pattern is consistent: tools with low setup burden and immediate, visible value outperform heavy enterprise builds. The expansion comes later, once the system has proven it can learn and adapt to your specific environment. But you can't expand what you haven't validated. Start narrow. Prove it works. Scale.
The critical difference between the 5% who succeed and the 95% who fail lies in whether the system learns and improves over time.
Enterprises expect AI systems to get smarter with use. According to MIT's research, 66% of executives demand tools that learn from feedback, while 63% require systems that retain context across sessions. One executive quoted in the research explains: "Our process evolves every quarter. If the AI can't adapt, we're back to spreadsheets."
This is where agentic AI comes into play. Unlike systems that require full context every time, agentic workflows maintain persistent memory, learn from interactions, and autonomously orchestrate complex multi-step processes.
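To illustrate the distinction, here is a minimal sketch, under an assumed JSON storage layer and a placeholder call_model() helper, of an agent that persists memory between sessions and folds user corrections back into its context:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # assumed persistence layer

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call; swap in your provider of choice."""
    raise NotImplementedError

def load_memory() -> list[dict]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(memory: list[dict]) -> None:
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def run_task(task: str) -> str:
    memory = load_memory()
    # Prior corrections travel with every new request, so the agent
    # doesn't need the full context re-supplied each session.
    context = "\n".join(m["correction"] for m in memory)
    return call_model(f"Known corrections:\n{context}\n\nTask: {task}")

def record_feedback(task: str, correction: str) -> None:
    memory = load_memory()
    memory.append({"task": task, "correction": correction})
    save_memory(memory)  # the system gets "smarter" with each correction
```

Production systems would use vector stores and more careful context management, but the principle is the same: feedback accumulates instead of evaporating at the end of each session.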
The infrastructure to support this is emerging now through protocols like Model Context Protocol (MCP), Agent-to-Agent (A2A), and NANDA. These frameworks enable specialized agents to work together rather than requiring monolithic systems. They create the foundation for an Agentic Web—a mesh of interoperable agents that replace static applications with dynamic coordination layers.
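MCP and A2A define their own wire formats, so the sketch below is not those protocols; it is only a hypothetical illustration of the underlying pattern they enable: specialized agents exchanging structured messages through a thin coordination layer instead of living inside one monolith. All names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    # Hypothetical envelope; real protocols like MCP or A2A define their own schemas.
    sender: str
    recipient: str
    intent: str                      # e.g. "summarize", "approve_payment"
    payload: dict = field(default_factory=dict)

class AgentRegistry:
    """Routes messages to whichever specialized agent claims an intent."""
    def __init__(self):
        self.handlers = {}

    def register(self, intent: str, handler):
        self.handlers[intent] = handler

    def dispatch(self, msg: AgentMessage) -> dict:
        return self.handlers[msg.intent](msg.payload)

# Usage: a CRM agent hands work to a summarizer agent without a monolithic app.
registry = AgentRegistry()
registry.register("summarize", lambda p: {"summary": p["text"][:100]})
result = registry.dispatch(AgentMessage("crm", "nlp", "summarize", {"text": "..."}))
```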
Early experiments show what's possible: customer service agents that handle complete inquiries end-to-end, financial processing agents that monitor and approve routine transactions, sales pipeline agents that track engagement across channels without human prompting.
Janea Systems has delivered mission-critical software for organizations that can't afford failure. Clients, including PyTorch, Microsoft, Bing Maps, and Rockwell Automation, trust us with deeply technical challenges where standard solutions don't exist.
Our work demonstrates the principles that separate the 5% from the 95%:
Targeting specific workflows: Microsoft PowerToys is one of the most popular open-source projects on GitHub, giving power users a suite of 20+ productivity tools that extend Windows' out-of-the-box functionality. Rather than inserting AI into each of those tools, we enhanced the one where it promised the most significant benefit. Advanced Paste, which enhances clipboard operations, is now enriched with generative AI through Semantic Kernel, allowing users to perform complex operations on copied content, including translation, summarization, stylization, and code generation. Learn more.
Delivering value with quick wins: For BigFilter, a social impact startup dedicated to combating misinformation on the Internet, we used AI to rapidly develop a working prototype of an automated fact-checking platform. Three months in, the first significant iteration enabled BigFilter to test business assumptions, explore potential challenges, and ideate additional features. Read case study.
Adaptable systems, not static tools: For a startup that offers a domain-specific chatbot experience, we have developed a RAG-based evaluation pipeline that optimizes the service’s operation for speed and accuracy by updating the knowledge base and calibrating the output through a continuous, feedback-based process.
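The client pipeline itself is proprietary, so purely as a flavor of the pattern, here is a simplified evaluate-then-update loop; the retrieval, answer, and scoring helpers below are assumptions standing in for the real components.

```python
# Illustrative sketch of a feedback-driven RAG evaluation loop; the actual
# client pipeline is proprietary, and these helpers are placeholders.

def retrieve(question: str, knowledge_base: list[str], k: int = 3) -> list[str]:
    """Naive keyword retrieval standing in for a vector store."""
    scored = sorted(knowledge_base, key=lambda d: -sum(w in d for w in question.split()))
    return scored[:k]

def answer(question: str, docs: list[str]) -> str:
    raise NotImplementedError("LLM call: answer the question from the docs")

def score(question: str, generated: str, reference: str) -> float:
    raise NotImplementedError("LLM-as-judge or a metric such as exact match")

def evaluation_pass(eval_set: list[dict], knowledge_base: list[str]) -> list[dict]:
    failures = []
    for item in eval_set:  # each item: {"question": ..., "reference": ...}
        docs = retrieve(item["question"], knowledge_base)
        generated = answer(item["question"], docs)
        if score(item["question"], generated, item["reference"]) < 0.7:  # assumed threshold
            failures.append({**item, "retrieved": docs})
    return failures  # failures drive knowledge-base updates before the next pass
```

Each pass surfaces the questions the system gets wrong, and those failures feed the next knowledge-base update, which is what keeps accuracy calibrated as content drifts.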
The difference between the 5% who succeed with AI and the 95% who fail isn't access to better models. It's the approach. The winners start with focused use cases, demand customization for their specific workflows, build systems that learn and adapt, and partner with vendors who understand their technical reality.
Our AI workshops deliver exactly this approach. In 3 to 15 business days, we work with your team to identify high-ROI use cases, assess readiness, design deployment frameworks, and establish monitoring protocols—all tailored to your specific technical environment and business objectives.
We offer flexible investment options based on your AI maturity stage: fully funded workshops for organizations exploring initial use cases, or joint-investment workshops for teams ready to move to deployment. Either way, you get actionable deliverables: prioritized use cases with ROI analysis, data quality assessments, deployment frameworks with cost-benefit estimation, and executive buy-in materials with concrete timelines.
Ready to join the 5%? Let's discuss your AI initiatives. Schedule a consultation to explore which workshop approach fits your organization's needs and technical environment.
Most enterprises don't see a return. According to an MIT report from July 2025, roughly 95% of enterprises see zero profit from their AI initiatives. Only about 5% manage to reach meaningful, measurable ROI.
The successful 5% follow a consistent pattern: partner with vendors that have a proven track record; target specific, high-value workflows instead of generic use cases; start small with rapid prototyping and expand only after quick wins; use agentic, learning systems that improve with feedback and retain context over time.
Janea Systems helps enterprises identify high-ROI use cases, rapidly prototype solutions, and build AI pipelines that adapt to changing conditions. Through AI maturity workshops and implementation projects, we focus on concrete workflows, measurable outcomes, and systems that can evolve with your business.
Ready to discuss your software engineering needs with our team of experts?