August 21, 2025
By Hubert Brychczynski
Artificial Intelligence
The White House has released Winning the Race: America’s AI Action Plan. The document outlines what steps the US administration intends to take to maintain relevance—and, ideally, assert leadership—in the global AI market.
The strategy rests on three pillars: accelerating innovation, building infrastructure, and leading in international AI diplomacy and security.
Security around AI is already an issue. With deepfakes, voice cloning, and other AI‑powered tools now mainstream, the barrier to entry for bad actors has never been lower. Misinformation, disinformation, impersonation, and other nefarious activities will likely increase as the technology becomes more powerful.
But artificial intelligence may actually stagnate without a quantum leap in energy production. The Times put it bluntly: “The AI Revolution Isn’t Possible Without an Energy Revolution.” Why? Because we can produce more chips but can’t change the laws of physics that determine how much energy they need.
Finally, even if we curb AI‑emboldened scammers and surmount current infrastructure limits, AI may still need significant research breakthroughs to advance beyond the level of “drunk Harvard professors on speed dial.”
Source: Andriy Burkov’s LinkedIn
The need for innovation in AI has rarely been more pronounced than in the weeks following the release of GPT-5 in August. Two years of anticipation and billions of dollars poured into development had whetted everyone’s appetite, but the premiere left most observers unsatisfied. As of this writing, reviews are lukewarm, and general sentiment is shifting toward pessimism about scaling and “reasoning” as the principles behind progress in generative AI.
The disappointment surrounding GPT-5 was cushioned by OpenAI’s near-simultaneous release of GPT-OSS, two open-weight language models available under the Apache 2.0 license that users can freely download and run offline. It remains to be seen how these models compare with alternatives, especially since performance may depend heavily on local hardware. Still, in light of GPT-5’s tepid reception, this contribution to open-source AI feels more like a fig leaf for an uncomfortable truth: AI is at a crossroads.
At Janea Systems, we believe innovation is the load‑bearing pillar of the AI revolution. The projects we’ve invested in reflect that belief—and they happen to align with the vision presented in the White House document.
Continue reading for a brief tour of the Innovation Pillar of the AI Action Plan and see how Janea Systems maps to this federal strategy.
Note: The following overview cites selected items from the Innovation section of the White House Action Plan and pairs them with relevant examples from Janea Systems’ portfolio. Overall, our work to date overlaps with roughly half of the recommendations, and we remain committed to pursuing others in the months to come.
Open-source and open-weight AI models are made freely available by developers for anyone in the world to download and modify. Models distributed this way have unique value for innovation because startups can use them flexibly without being dependent on a closed model provider. They also benefit commercial and government adoption of AI because many businesses and governments have sensitive data that they cannot send to closed model vendors. And they are essential for academic research, which often relies on access to the weights and training data of a model to perform scientifically rigorous experiments.
Prioritize investment in theoretical, computational, and experimental research to preserve America’s leadership in discovering new and transformative paradigms that advance the capabilities of AI, reflecting this priority in the forthcoming National AI R&D Strategic Plan.
JECQ, our open-source, dimension‑aware FAISS compression library, shrinks embedding indexes by up to 6x while retaining ~84.6% of full‑precision accuracy in early tests. This reduces storage and memory pressure, makes on‑device/edge AI more feasible, and enables researchers and startups to experiment without closed‑model constraints.
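As a rough illustration of the trade-off JECQ targets, the sketch below quantizes mock float32 embeddings to one byte per dimension using plain per-dimension scalar quantization. JECQ’s actual dimension-aware algorithm and its FAISS integration are more sophisticated, so treat this as a storage-versus-accuracy toy, not the library’s API.

```python
import numpy as np

# Mock embedding matrix: 1000 vectors of 128 float32 dimensions.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 128)).astype(np.float32)

# Per-dimension min/max ranges, learned from the data itself.
lo, hi = emb.min(axis=0), emb.max(axis=0)
scale = (hi - lo) / 255.0

# Encode: float32 -> uint8, i.e. 4 bytes down to 1 byte per value.
codes = np.round((emb - lo) / scale).astype(np.uint8)

# Decode and check how much cosine similarity survives quantization.
recon = codes.astype(np.float32) * scale + lo
cos = np.sum(emb * recon, axis=1) / (
    np.linalg.norm(emb, axis=1) * np.linalg.norm(recon, axis=1)
)
ratio = emb.nbytes / codes.nbytes
print(f"compression: {ratio:.0f}x, mean cosine similarity: {cos.mean():.4f}")
```

Even this naive scheme keeps reconstructed vectors nearly parallel to the originals; allocating precision per dimension, as JECQ does, is what pushes the ratio further without giving up retrieval accuracy.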
Partnering with the PyTorch community, we reduced Windows‑specific open issues from 187 to 91, approaching cross-platform parity: restoring sparse tensor ops, enabling Kineto profiling, modernizing C++20 module support, and improving ARM64 performance so more developers can build, profile, and ship AI on commodity hardware.
Today, the bottleneck to harnessing AI’s full potential is not necessarily the availability of models, tools, or applications. Rather, it is the limited and slow adoption of AI, particularly within large, established organizations. Many of America’s most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards.
Our compact, ROI‑guided AI Maturity Workshops foster enterprise AI adoption: identification of high‑impact use cases (3 business days); a data quality and structure assessment (6 days); a deployment framework for cost and scale (6 days); a monitoring and optimization protocol (6 days); and MLOps acceleration (15 days).
The Administration “supports a worker‑first AI agenda” and will “advance a priority set of actions to expand AI literacy and skills development, continuously evaluate AI’s impact on the labor market, and pilot new innovations to rapidly retrain and help workers thrive in an AI-driven economy.”
Our three‑stage AI adoption program for credit unions bakes in staff training and human‑in‑the‑loop operations. Phase 2, in particular, centers on data integration, the human element, explainability, and security—with staff enablement and human supervision at every step.
The United States must lead the creation of the world’s largest and highest quality AI-ready scientific datasets, while maintaining respect for individual rights and ensuring civil liberties, privacy, and confidentiality protections.
We engineered a secure, scalable, and AI-ready data lake for a fintech institution. We used Delta Lake with SCD Type 2 to retain timestamped historicals, integrated Azure Synapse for high‑performance querying, implemented Azure Key Vault for credential security, and built horizontally scalable ETL pipelines, explicitly designed to power ML and analytics.
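The SCD Type 2 pattern mentioned above can be sketched independently of Delta Lake: when a record changes, the current row is expired with a timestamp and a new versioned row is appended, so history is never overwritten. The pandas toy below (column names and values are hypothetical) shows only the mechanics; the production system used Delta Lake MERGE on Azure.

```python
import pandas as pd

# Starting state: one current row for account 1.
current = pd.DataFrame({
    "id": [1], "balance": [100.0],
    "valid_from": ["2024-01-01"], "valid_to": [None], "is_current": [True],
})

def scd2_upsert(table, key, record, ts):
    """Expire the current row for `key`, then append the new version."""
    mask = (table["id"] == key) & table["is_current"]
    table.loc[mask, ["valid_to", "is_current"]] = [ts, False]
    new_row = {"id": key, **record, "valid_from": ts,
               "valid_to": None, "is_current": True}
    return pd.concat([table, pd.DataFrame([new_row])], ignore_index=True)

# A balance change creates a second, timestamped version of the row.
current = scd2_upsert(current, 1, {"balance": 250.0}, "2024-06-01")
print(current)
```

The payoff is that any past point in time can be reconstructed by filtering on `valid_from`/`valid_to`, which is exactly what makes such a table AI‑ready for training and backtesting.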
In a rapid prototype for BigFilter, a fact‑checking startup, we indexed Wikipedia and Semantic Scholar and applied a segmented, retrieval‑augmented architecture to capture, normalize, and surface high‑quality evidence, creating an AI‑ready corpus for downstream evaluation and research use.
Today, the inner workings of frontier AI systems are poorly understood. Technologists know how LLMs work at a high level, but often cannot explain why a model produced a specific output. This can make it hard to predict the behavior of any specific AI system. This lack of predictability, in turn, can make it challenging to use advanced AI in defense, national security, or other applications where lives are at stake. The United States will be better able to use AI systems to their fullest potential in high-stakes national security domains if we make fundamental breakthroughs on these research problems.
Our BigFilter prototype uses a segmented architecture that exposes each reasoning stage, so developers can see why the system produced a given intermediate result and audit behavior over time.
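One way to make each stage inspectable is to run the pipeline through a wrapper that records every stage’s name and output as it executes. The stage names and logic below are hypothetical stand-ins, not BigFilter’s actual components; the point is the audit trace, not the stages themselves.

```python
def retrieve(claim):
    # Stand-in for retrieval from sources such as Wikipedia.
    return [f"evidence for: {claim}"]

def normalize(evidence):
    # Stand-in for cleaning/normalizing retrieved passages.
    return [e.lower() for e in evidence]

def summarize(evidence):
    # Stand-in for evidence aggregation.
    return f"{len(evidence)} piece(s) of evidence found"

def run_pipeline(claim, stages):
    """Run stages in order, logging each stage's output for auditing."""
    trace, data = [], claim
    for stage in stages:
        data = stage(data)
        trace.append({"stage": stage.__name__, "output": data})
    return data, trace

verdict, trace = run_pipeline("water boils at 100C",
                              [retrieve, normalize, summarize])
for step in trace:
    print(step["stage"], "->", step["output"])
```

Because every intermediate output is captured, a reviewer can later replay the trace and pinpoint which stage introduced a bad result.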
For an NFL team operations chatbot, we integrated Langfuse alongside Datadog for granular insight into the LLM pipeline, enabling detailed analysis of reasoning steps, prompt performance, and more. We also added caching and precomputation (LangChain/AWS Bedrock) to cut latency from minutes to seconds, and adopted LangGraph to manage multi‑step agent flows.
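Caching of the kind described above can be sketched with a simple memoization layer: the first call pays the full model latency, and repeats of the same question are served from cache. `slow_llm_answer` is a mock, not the production Bedrock integration.

```python
import functools
import time

@functools.lru_cache(maxsize=256)
def slow_llm_answer(question):
    """Mock of an expensive LLM call; the sleep stands in for model latency."""
    time.sleep(0.1)
    return f"answer to: {question}"

t0 = time.perf_counter()
slow_llm_answer("roster status?")   # cold: hits the "model"
cold = time.perf_counter() - t0

t0 = time.perf_counter()
slow_llm_answer("roster status?")   # warm: served from the cache
warm = time.perf_counter() - t0

print(f"cold={cold:.3f}s warm={warm:.6f}s")
```

Precomputation follows the same idea in reverse: likely questions are answered ahead of time so even the first user request is a cache hit.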
One risk of AI that has become apparent to many Americans is malicious deepfakes, whether they be audio recordings, videos, or photos. (...) In particular, AI-generated media may present novel challenges to the legal system. For example, fake evidence could be used to attempt to deny justice to both plaintiffs and defendants. The Administration must give the courts and law enforcement the tools they need to overcome these new challenges.
While not a deepfake forensic detector, our BigFilter prototype addresses the epistemic half of synthetic media: verifying claims connected to multimedia artifacts. It retrieves and summarizes supporting and contradicting evidence with explicit citations and an auditable chain of reasoning—useful for triage, discovery, and rapid diligence in legal or newsroom workflows. The prototype was built in three months to validate feasibility and cost.
If your engineering charter is deeply technical, time‑constrained, and integration‑heavy, we can help you convert policy‑level goals into shipped capability: hardened tooling, AI‑ready data, explainable pipelines, and measurable ROI. Let’s talk about your roadmap and where we can unlock the next order of impact.
The US AI Action Plan, titled Winning the Race: America’s AI Action Plan, is a federal strategy released by the White House to secure America’s leadership in artificial intelligence. It outlines a series of recommendations and investments to accelerate innovation, strengthen infrastructure, and ensure responsible, secure AI adoption across industries.
The Action Plan is built on three main pillars: accelerating innovation, building infrastructure, and leading in international AI diplomacy and security. Together, these form the foundation for developing world-class AI capabilities while safeguarding economic, workforce, and national interests.
Janea Systems’ portfolio demonstrates practical alignment with the Plan’s recommendations. From open-source contributions like JECQ and PyTorch improvements, to enterprise AI adoption frameworks, to explainable and auditable AI systems like BigFilter, Janea Systems has already delivered projects that map directly to federal priorities.
Ready to discuss your software engineering needs with our team of experts?