...

Speedup in Software Engineering: How AI Boosts Front-End and Back-End Development at Janea Systems

May 29, 2025

By Hubert Brychczynski

  • Artificial Intelligence

  • Frontend Engineering

  • Backend Engineering

  • Generative AI

  • Software Engineering

...

Generative AI: The Technology We Love to Hate

Here’s an unpopular opinion: everybody is using large language models. Few admit it. Fewer still endorse it.

Translators routinely query ChatGPT about tricky phrasing. Content creators (myself included) use it to refine their work, as I did for this article (looking at you, ChatGPT). And software engineers - including those at Janea Systems - have been using generative AI to streamline their work for quite some time.

Yet, people often decry, dismiss, or ridicule large language models - despite almost certainly using them when no one’s watching. We’re eager to optimize our workloads with AI, quick to mock its mistakes, and reluctant to share our original work with it.

The Great Unknown: How Much Does It Help?

The ambivalence toward AI stems, in part, from its very design. Large language models are probabilistic. Almost every public LLM interface (with the telling exception of Grok) warns users to double-check outputs for accuracy. Meanwhile, a growing body of research - driven largely by journalists and scientists - challenges industry claims about LLM reliability, exposing just how often even the best models hallucinate.

That being said, both you and I know from experience: sometimes, large language models are genuinely useful.

The real question is: when, and how?

This is what we set out to determine in our recent experiment.

The Experiment’s Two Phases

The experiment consisted of two phases:

  • Phase 1 tested AI’s ability to solve software engineering tasks on its own, with minimal human input.
  • Phase 2, the focus of this blog series, measured engineer performance with and without AI assistance across five domains: front end, back end, machine learning, data, and DevOps.

What we found is enough to fill two posts:

  • This part will cover front-end and back-end engineering.
  • Next time, we’ll discuss machine learning, data, and DevOps.

Here’s what you can expect to learn:

  • How Janea Systems uses AI to boost engineer productivity;
  • Where improvements are strongest;
  • Which engineering skills and use cases correlate with better outcomes;
  • Where extra vigilance is needed.

Key Takeaways

  • On average, engineers using AI completed tasks 30% faster than without it.
  • Front-end and back-end engineers saw the largest improvements in performance - 66.94% and 55.93%, respectively.
  • Domain expertise, engineering tool proficiency, and skill in prompt engineering correlated with better outcomes when using AI for coding assistance.
  • Across all domains, AI primarily accelerated work by: (1) kickstarting projects with boilerplate code and starter templates; (2) referencing relevant documentation and best practices; and (3) serving as a sounding board for explanations and brainstorming.
  • At the same time, engineers consistently reported similar grievances: AI suggestions were often erroneous, broken, illogical, inscrutable, outdated, non-standard, generic, or overly complicated - issues that tended to diminish with greater proficiency in prompt engineering.
  • In some cases, inconsistency in AI-generated output required engineers to spend additional time double-checking it.

Experiment Design

We selected four expert-to-senior-level engineers per domain. Each engineer solved four domain-specific problems: two unaided and two with AI assistance.

Afterward, engineers filled out quantitative and qualitative surveys, providing data on time spent, tool usage, and their personal experiences with AI assistance. They also assessed their domain expertise, the skill set required for each task, and familiarity with prompt engineering techniques.

The Tasks

Front-end and back-end engineers grappled with the following tasks:

Front-End Engineering

  1. A multi-step customer onboarding form.
  2. A dynamic, infinite-scroll product grid supporting search, filtering, sorting, and interactive display customization.
  3. A drag-and-drop task management Kanban board.
  4. An interactive Hangman game (a minimal sketch of its core logic follows below).
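
To give a sense of the scope of these tasks, here is a minimal, framework-free sketch of the core game logic behind task 4, the Hangman game. It is purely illustrative: the type names, the six-guess limit, and the underscore masking are assumptions made for this example, not the code produced during the experiment.

```typescript
// Minimal Hangman game state; illustrative only, not code from the experiment.
type GameStatus = "playing" | "won" | "lost";

interface HangmanState {
  secret: string;        // the word to guess, stored lowercase
  guessed: Set<string>;  // letters guessed so far
  wrongGuesses: number;  // count of incorrect letters
  maxWrong: number;      // allowed mistakes before the game is lost
}

function createGame(secret: string, maxWrong = 6): HangmanState {
  return { secret: secret.toLowerCase(), guessed: new Set(), wrongGuesses: 0, maxWrong };
}

// Show the word with unguessed letters masked, e.g. "h _ n g m _ n".
function maskedWord(state: HangmanState): string {
  return [...state.secret].map((ch) => (state.guessed.has(ch) ? ch : "_")).join(" ");
}

function guess(state: HangmanState, letter: string): HangmanState {
  const ch = letter.toLowerCase();
  if (state.guessed.has(ch)) return state; // ignore repeated guesses
  const guessed = new Set(state.guessed).add(ch);
  const wrongGuesses = state.secret.includes(ch) ? state.wrongGuesses : state.wrongGuesses + 1;
  return { ...state, guessed, wrongGuesses };
}

function status(state: HangmanState): GameStatus {
  if ([...state.secret].every((ch) => state.guessed.has(ch))) return "won";
  return state.wrongGuesses >= state.maxWrong ? "lost" : "playing";
}
```

The actual task also required rendering and input handling in the browser, which the sketch deliberately omits.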

Back-End Engineering

  1. A unified RESTful API for aggregating data from multiple internal sources in a standardized JSON format (see the sketch after this list).
  2. A microservice that sends emails or Slack notifications based on specific events from a message queue in Kafka or RabbitMQ.
  3. A back-end endpoint that returns requested parameters from a database in real time.
  4. A scraping API that retrieves product options from Amazon and eBay and organizes them by price in an SQL database.
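
For a rough illustration of task 1, here is a minimal sketch of an aggregation endpoint, assuming Express and a runtime with a global fetch (Node 18+). The internal URLs, field names, and response envelope are invented for the example and do not reflect the solutions actually built during the experiment.

```typescript
import express from "express";

// Hypothetical internal sources; the URLs and record shapes are illustrative only.
const SOURCES = [
  { name: "crm", url: "http://crm.internal/api/customers" },
  { name: "billing", url: "http://billing.internal/api/accounts" },
];

interface UnifiedRecord {
  source: string;    // which internal system the record came from
  id: string;
  payload: unknown;  // original record, passed through unchanged
  fetchedAt: string; // ISO timestamp for traceability
}

const app = express();

app.get("/api/v1/records", async (_req, res) => {
  try {
    // Query all sources in parallel and normalize into one JSON envelope.
    const results = await Promise.all(
      SOURCES.map(async (src) => {
        const response = await fetch(src.url);
        if (!response.ok) throw new Error(`${src.name} returned ${response.status}`);
        const items = (await response.json()) as Array<{ id: string }>;
        return items.map<UnifiedRecord>((item) => ({
          source: src.name,
          id: item.id,
          payload: item,
          fetchedAt: new Date().toISOString(),
        }));
      })
    );
    res.json({ data: results.flat() });
  } catch (err) {
    res.status(502).json({ error: (err as Error).message });
  }
});

app.listen(3000);
```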

Results on Average

On average across all domains, engineers finished tasks approximately 3.2 hours faster with AI, translating to a 30% reduction in completion time - from 9.56 hours to 6.36 hours per task.

Engineers praised AI’s ease of use but consistently noted that AI-generated solutions required modification before they could serve as viable proofs of concept.

Result Breakdown

Task Acceleration

Figure 1 illustrates the domain-by-domain speedup. The largest gains occurred in front-end and back-end tasks - 66.94% and 55.93%, respectively.

performance-improvement.png

Fig. 1: Task performance improvement across domains

AI Solution Viability

Figure 2 shows the proportion of AI-generated solutions that worked “out of the box.” Back-end engineering led with a 100% success rate, followed by front-end engineering at 87.5%.

viability.png

Fig. 2: Percentage of AI-generated solutions working out of the box

AI Solution Tweaking

A functional solution isn’t always a polished one. Back-end solutions may have worked immediately, but engineers still spent time refining them.

Figure 3 reflects engineer self-assessment of time spent improving AI-generated solutions, where “1” indicates extensive time spent and “5” indicates minimal time.

time.png

Fig. 3: Time spent improving AI-generated solutions

Figure 4 reflects engineer self-assessment of the number of changes made to AI-generated solutions, where “1” indicates few changes and “5” indicates many.

changes.png

Fig. 4: Number of changes made to AI-generated solutions

Engineer Experience Per Domain

The Impact of Prompt Engineering and Domain Expertise

Higher proficiency in prompt engineering and greater domain expertise correlated with better outcomes when using AI for coding tasks. However, only one in four engineers had studied prompt engineering prior to the experiment.

Table 1 presents average self-reported assessments of expertise, tool proficiency, and prompt engineering familiarity across all domains.

self-report.png

Table 1: Engineer self-assessment

Front-End Plus AI: 66.94% Faster

Front-end engineers used AI for research, reference, guidance, and rapid prototyping. They found AI particularly useful for tasks involving visual elements or interactive components, such as brainstorming UI patterns or explaining complex DOM manipulations.

However, AI-generated solutions often violated accessibility and performance best practices, introduced inconsistencies and unnecessary complexity, or created unintended dependencies, which required careful verification and refactoring.
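
A hypothetical example of the kind of accessibility fix this involved: assistants often suggest a clickable div styled as a button, which screen readers do not announce as a control and keyboards cannot reach. Swapping it for a native button element is a small change, but it is representative of the refactoring the source describes; the snippet below is a sketch, not code from the experiment.

```typescript
// Hypothetical illustration of a common accessibility fix, not code from the experiment.

// AI-suggested pattern: a clickable <div> styled like a button.
// It is not announced as a control by screen readers and cannot be reached by keyboard.
const badButton = document.createElement("div");
badButton.className = "btn";
badButton.textContent = "Submit";
badButton.addEventListener("click", submitForm);

// Refactored pattern: a real <button> gets the role, focusability,
// and Enter/Space activation for free.
const goodButton = document.createElement("button");
goodButton.type = "button";
goodButton.textContent = "Submit";
goodButton.addEventListener("click", submitForm);

function submitForm(): void {
  // Placeholder handler for the sketch.
  console.log("form submitted");
}
```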

Back-End Plus AI: 55.93% Faster

Back-end engineers used AI as an advanced autocomplete and conversational reference. The models quickly produced code suggestions, reference snippets, and structural templates, saving time on boilerplate, debugging, and manual research.

When AI-generated code contained errors, engineers often struggled to trace the root causes, since they had neither authored nor fully reasoned through the code themselves. Moreover, some AI-generated solutions proved outdated or inconsistent with internal standards, introducing minor compatibility issues.
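
One hypothetical but typical case of an outdated suggestion: assistants trained on older tutorials may still recommend the standalone body-parser middleware for Express, even though an equivalent JSON parser has been built into Express since version 4.16. The snippet below contrasts the two; the route and payload are invented for the example.

```typescript
import express from "express";

const app = express();

// Outdated pattern an assistant might suggest (pulls in the extra body-parser package):
//   import bodyParser from "body-parser";
//   app.use(bodyParser.json());

// Current equivalent built into Express since 4.16, with no extra dependency:
app.use(express.json());

app.post("/events", (req, res) => {
  // req.body is parsed JSON thanks to the middleware above.
  res.status(202).json({ received: req.body });
});

app.listen(3000);
```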

Our Verdict: AI Is a Catapult for Front-End and Back-End Engineering

The numbers speak for themselves: AI gives front-end and back-end engineers at Janea Systems a staggering advantage. Performance gains of 66.94% and 55.93% mean we can deliver projects much faster than AI skeptics can. And we have the data to prove it.

What about the other domains?

Stay tuned for the next article to find out.


Janea Systems: Building the Future with Human-AI Synergy

As an AI-turbocharged software engineering company, we harness human-AI synergy for superior, accelerated software product development, research, and prototyping. Here are three recent examples:

AI Fact-Checking Tool

We developed an exploratory prototype of an LLM-powered fact-checking tool, using a segmented architecture and off-the-shelf components to deliver results in just three months.

An LLM-Based Chatbot

We are actively optimizing a cutting-edge chatbot platform that helps sports scouts find and acquire talent more effectively.

PowerToys

We integrated Semantic Kernel to enable generative AI in PowerToys’ Advanced Paste tool - a project that has resolved 25k issues and earned 118k stars on GitHub.

From prototype to production - let’s put AI to work for you.
