By Atul Parte and Sudarshan Rajput
Abstract:
Organizations are rapidly investing in artificial intelligence with the expectation of faster delivery and increased efficiency, yet many find that speed does not automatically follow adoption. Drawing on a yearlong modernization effort in a complex, high‑risk legacy environment, this article argues that AI should be treated not as an acceleration tool but as a new team member requiring structured onboarding, clear roles, and integrated workflows. By defining scope before introducing automation, designing scalable processes from the outset, and managing the human–AI relationship with transparency and feedback loops, the team achieved more than 25 percent acceleration while avoiding operational disruptions. The experience underscores that sustainable gains come from aligning people, processes, and AI — shifting focus from raw automation speed to business‑ready outcomes. Effective leadership, cultural adaptation, and collaboration between humans and AI are essential to realizing AI’s full value in digital transformation.
Across industries, leaders are investing heavily in artificial intelligence with the expectation that it will accelerate delivery, improve quality, and drive efficiency. Yet many executives are discovering that speed does not come automatically with AI adoption.
From our experience, the reason is simple: AI is not an instant accelerator toolset; it is a new team member. Like any new colleague, it must be introduced, trained, and integrated into existing workflows. Likewise, teams must learn how to work with AI — understanding its capabilities, its limitations, and the level of oversight it requires.
Acceleration happens only when both sides adapt to each other.
Over the past year, we used AI extensively to deliver the modernization of a complex legacy system with significant financial and regulatory exposure in the energy and utilities (E&U) clean energy space. Through that experience, we achieved more than 25 percent acceleration in modernization efforts while avoiding the system shocks that often follow large-scale go-lives.
We accomplished this by taking a different approach from the outset. We refused to treat AI as a simple acceleration tool. Instead, we planned around it as if a brilliant new colleague were joining the team. In hindsight, that distinction made the difference between a troubled implementation and a successful transformation.
Start with Scope, Not Speed
In this modernization program, the objective was to refactor the legacy codebase from an older version to a modern one while maintaining business continuity.
We did not start the project and bolt on code automation as an afterthought. We started by defining business scope and success criteria. The team mapped the codebase, identified refactoring impacts, and categorized challenges into solution buckets such as compilation errors, refactoring needs, and redesign requirements.
This structure allowed the work to be divided into clear, parallel solution streams. Only then did we introduce AI into the process. A customized AI-based converter was used to process code files and generate modern equivalents. The converted code was then reviewed, tested, and integrated into the sprint cycle.
By sequencing the work this way (human alignment first, AI execution second, with a clear line of sight to the outcome), we ensured that automation served the process rather than overwhelming it.
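To make the bucketing concrete, here is a minimal sketch of the triage step that grouped refactoring work into parallel solution streams. The scan results, issue labels, and bucket names below are illustrative assumptions, not the project's actual inventory or tooling.

```python
# Minimal sketch: grouping legacy files into solution buckets for planning.
# The sample data and labels are hypothetical, for illustration only.
from collections import defaultdict

# Hypothetical output of a codebase scan: (file, issue_type) pairs.
SCAN_RESULTS = [
    ("module_a/file1.src", "compile_error"),
    ("module_a/file2.src", "api_deprecation"),
    ("module_b/file3.src", "design_gap"),
    ("module_b/file4.src", "compile_error"),
]

# Map low-level findings to the solution buckets used for planning.
BUCKETS = {
    "compile_error": "compilation errors",
    "api_deprecation": "refactoring needs",
    "design_gap": "redesign requirements",
}

def bucket_files(scan_results):
    """Group files by the kind of change they need."""
    streams = defaultdict(list)
    for filename, issue in scan_results:
        streams[BUCKETS.get(issue, "needs triage")].append(filename)
    return streams

if __name__ == "__main__":
    for stream, files in bucket_files(SCAN_RESULTS).items():
        print(f"{stream}: {len(files)} file(s)")
```

In our program, the output of a triage step like this is what allowed the work to be divided into clear, parallel solution streams before any AI conversion began.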
Plan for Scale from the Outset
To operationalize AI’s contributions without disrupting the broader development workflow, we created a structured, repeatable model for automated code conversion.
A typical weekly cycle included the following steps (a simplified sketch of the batch flow follows the list):
Scrum board analysis: Identify features scheduled for conversion and map them to code files.
File collection: Gather all relevant files into an intake folder.
AI processing: Run files through the model for conversion.
Quality check: Conduct a high-level review of the AI-generated output.
Developer review: Midweek, assign converted files to developers for detailed evaluation.
Issue resolution: Address reported issues and update the Scrum board.
Iteration: Repeat the process for the next batch.
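For readers who want a more concrete picture, here is a minimal sketch of one such weekly batch, assuming a simple folder-based handoff. The folder names, the convert_file() stand-in for the customized AI-based converter, and the quality check are hypothetical placeholders rather than the team's actual tooling.

```python
# Minimal sketch of one weekly conversion batch using a folder-based handoff.
# All names and checks are illustrative assumptions.
from pathlib import Path

INTAKE = Path("intake")        # files mapped from this week's Scrum items
CONVERTED = Path("converted")  # raw AI output awaiting quality check
REVIEW = Path("review")        # output cleared for developer review

def convert_file(legacy_code: str) -> str:
    """Stand-in for the customized AI-based converter."""
    # In practice this would call the conversion model; here it simply
    # returns the input so the sketch runs end to end.
    return legacy_code

def passes_quality_check(modern_code: str) -> bool:
    """High-level sanity check before handing code to developers."""
    # Real checks would parse or compile the output; this is a placeholder.
    return bool(modern_code.strip())

def run_weekly_batch() -> None:
    CONVERTED.mkdir(exist_ok=True)
    REVIEW.mkdir(exist_ok=True)
    for legacy_file in sorted(INTAKE.glob("*")):
        if not legacy_file.is_file():
            continue
        modern_code = convert_file(legacy_file.read_text())
        (CONVERTED / legacy_file.name).write_text(modern_code)
        if passes_quality_check(modern_code):
            # Midweek handoff: cleared files go to developers for detailed review.
            (REVIEW / legacy_file.name).write_text(modern_code)
        # Files that fail the check stay with the sub-team for triage,
        # and outcomes are reported back to the Scrum board.

if __name__ == "__main__":
    run_weekly_batch()
```

The point of the sketch is the shape of the cycle, not the mechanics: a small, repeatable intake, convert, check, review loop that the broader team could rely on week after week.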
Initially, the broader developer community expressed resistance. Many were concerned that AI might disrupt established workflows or introduce opaque decision-making. In some cases, team members rejected AI output simply because it differed from their own approach.
By confining AI complexity to a dedicated sub-team, which acted as an internal service desk and support function, we addressed those concerns directly. This not only shielded developers from the technical intricacies of large models but also ensured that output was objectively evaluated before adoption, keeping everyone focused on delivery.
Over time, this approach transformed skepticism into confidence and produced a smoother, more predictable workflow blending AI-driven acceleration with human oversight.
Manage the Human–AI Relationship
The deeper insight is organizational, not technical. AI should be treated as a capable but inexperienced colleague — able to handle repetitive tasks at scale but still in need of context, feedback, and supervision.
Leaders should focus on three key practices:
Define roles and boundaries. Clarify what AI handles, what humans handle, and how decisions are reviewed.
Create feedback loops. Encourage developers to flag issues, validate outputs, and refine AI usage.
Foster trust. Transparency about how AI operates — and how its results are verified — is essential for adoption.
When teams understand that AI is there to augment, not replace, their expertise, collaboration becomes natural.
Measure Success Beyond Output Efficiencies
Throughout the refactoring effort, success was measured not by how quickly AI converted code, but by whether the resulting code integrated seamlessly and delivered real business value.
This shift — from automation speed to operational success — is what distinguishes organizations that truly leverage AI from those that merely deploy it.
AI’s output is only as valuable as its integration into the business. Leaders must look beyond technical metrics and assess whether AI-enabled processes improve outcomes for customers, employees, and the enterprise as a whole.
The Leadership Imperative
Introducing AI into an organization is not a plug-and-play exercise. It is a leadership challenge requiring deliberate planning, cross-functional coordination, and cultural adaptation.
Leaders must:
Onboard AI as they would a team member by providing structure, context, and data.
Onboard teams to AI by helping people understand how to interpret, question, and build on AI outputs.
Manage collaboration by resolving conflicts between human judgment and machine output quickly and using those moments to refine processes.
Our experience shows that when organizations treat AI as a strategic collaborator — not a shortcut — they create the conditions for sustainable acceleration driven by bottom‑up enthusiasm. Productivity gains become repeatable, innovation becomes systemic, and teams become more adaptive.
The Bottom Line
Buying AI subscriptions does not accelerate work by itself. Acceleration emerges when AI and human teams learn to work together — when trust is built, conflicts are managed, and workflows are redesigned around collaboration rather than substitution.
Introducing AI to the team, and the team to AI, is not just good change management. It is the foundation of the next phase of digital transformation.
About the Authors
Atul Parte is an Associate Partner at IBM, where he leads digital transformation initiatives focused on GenAI adoption in enterprise modernization with a large E&U client.
Sudarshan Rajput is a Program Manager at IBM, specializing in enterprise modernization and AI-enabled engineering delivery.