Thursday, August 7, 2025

Software Development at the Dawn of AI

 

The contemporary software development process is largely manual, with sprinkles of automation. As the current technological landscape permits, automation is limited to repetitive, rule-based tasks. Prime examples of existing automation are:


  • Static code analysis
  • Behavior Driven Development (BDD) generating behavior/functional tests from rule-bound acceptance criteria
  • Automatic execution of unit tests at each build
  • Automatic execution of tests (pre-written as test scripts) after deployment in sub-production environments
  • Automated deployment using CI/CD pipelines
  • Executing the regression test suite

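To picture how rule-bound this existing automation is, consider static code analysis: a checker encodes a fixed rule and flags every violation, but cannot reason beyond that rule. The sketch below is an illustrative toy, not a real linter; it flags bare `except:` clauses using Python's standard `ast` module.

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare `except:` clauses -- a classic
    rule-based static-analysis check (a toy illustration)."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # reports line 4, where the bare except sits
```

The rule fires mechanically wherever the pattern occurs; deciding whether the catch-all is actually wrong still falls to a human.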
All of the automation listed above shares the following characteristics:

  • In each project/product, the development environment, deployment environments, available tool set, technical landscape, and the product under consideration are unique, so the act of creating automation is not itself automated;
  • Automation is hand-crafted by engineers; and
  • Automation is fragmented.

The following areas of software development are not automated at all (or mostly not):


  • Requirement Management
  • Technical Architecture and Design
  • Technical Architecture and Design review (for ease of change, accuracy, efficiency & utilization of underlying resources, future-proofing (sic!), compliance & regulatory requirements, security and performance requirements, policy compliance, and adherence to best practices)
  • UX Design
  • UX Design review (for usability, ease of change, accuracy, efficiency & utilization of underlying resources, future-proofing (sic!), compliance & regulatory requirements, security and performance requirements, policy compliance, and adherence to best practices)
  • Code writing (recently, code generation has started gaining traction)
  • Unit test writing
  • Code and unit test review (except for static code analysis)
  • Writing functional tests
  • Executing functional tests
  • Writing integration / end-to-end tests (except for behavior tests under a BDD regime)
  • Executing integration / end-to-end tests
  • Writing performance / load tests
  • Executing performance / load tests
  • Writing security tests
  • Executing security tests
  • Technical documentation (most IDEs generate inline documentation, but at a low level, not explaining architecture and design)
  • Project management (Agile, hybrid, waterfall)
  • User documentation (training material)
  • Automating the creation of automation itself

With the arrival of GenAI and contemporary developments in machine learning, a paradigm shift is taking root in code writing: the use of AI assistants and agents (e.g. GitHub Copilot) for code generation within the IDE. Code generation by AI tools has started to act as a fellow developer who can write code from well-crafted prompts.

The tool chain for software development tasks is fragmented and requires highly specialized people, in large numbers, due to:

  • Complexities of underlying business requirements;  
  • Available platforms, tools & technologies;
  • Dependency on underlying infrastructure; and
  • Inter-dependencies of above.

The next level of automation in software development will be a framework of interconnected AI agents, AI assistants, and AI bots. This automation will drastically change software development team composition and put pressure on teams' skill sets.

A fully AI-immersed software development framework will have collaborative AI agents, assistants, and bots:


  • Collating and compiling evolving requirements;
  • Developing & maintaining Technical Architecture & Design and continuously reviewing it;
  • Developing and maintaining UX Design and continuously reviewing it;
  • Generating code and unit tests in accordance with the agreed-upon Technical Architecture & Design, UX Design, and requirements;
  • Continuously reviewing the generated code (and unit tests) and feeding the results back to the code-generation AI toolset;
  • Generating functional & non-functional test cases and test scripts for automatic execution;
  • Refactoring the Technical Architecture & Design and code to incorporate test results and evolving requirements;
  • Configuring and deploying the CI/CD pipeline;
  • Deploying the code in non-production environments; and
  • Deploying the code in the production environment and validating it.
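A minimal sketch of such an interconnected framework: each stage below is a stand-in for a specialized agent that enriches a shared context and hands it on. Every name here is hypothetical; this is a conceptual shape, not a real orchestration framework.

```python
from typing import Callable

# Each function stands in for a specialized AI agent; all names are
# hypothetical illustrations of the collaborative framework described above.
def requirements_agent(ctx: dict) -> dict:
    ctx["requirements"] = ["user can log in"]  # collated from stakeholders
    return ctx

def design_agent(ctx: dict) -> dict:
    ctx["design"] = f"architecture for: {ctx['requirements']}"
    return ctx

def codegen_agent(ctx: dict) -> dict:
    ctx["code"] = f"code implementing {ctx['design']}"
    return ctx

def review_agent(ctx: dict) -> dict:
    # a real review agent would loop feedback back to the codegen agent
    ctx["review"] = "approved" if ctx["code"] else "feedback to codegen"
    return ctx

PIPELINE: list[Callable[[dict], dict]] = [
    requirements_agent, design_agent, codegen_agent, review_agent,
]

def run(ctx: dict) -> dict:
    for agent in PIPELINE:
        ctx = agent(ctx)  # each agent consumes and enriches the context
    return ctx

print(run({})["review"])
```

In a real framework the stages would run continuously and feed back into each other rather than executing once in sequence.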

The envisioned AI-immersed software development will have the following characteristics:


  • The software development team will consist of two types of team members – human and AI
  • Each human team member will have at least one companion – a specialized AI agent or assistant
  • Rule-based automation will be taken care of by AI bots
  • Human team members will focus on human interactions (e.g. eliciting requirements)
  • Human team members will also focus on jobs that do not have a table-top environment (e.g. defining Technical Architecture, defining UX Design, defining rules for AI bots, etc.)
  • The majority of human team members will perform requirement elicitation, defining & reviewing Technical Architecture, defining & reviewing UX Design, rules for the CI/CD pipeline, validation in the production environment, and deploying the AI tool set (agents, assistants, and bots)
  • TDD is fading and will soon be a thing of the past
  • BDD will change drastically to accommodate the emerging AI toolset
  • A new SDLC will emerge where different phases merge into each other (unlike Agile) and some fade to give way to new ones
  • A new team structure is emerging – heavy on Business/System Analysts, senior technical developers, and senior quality analysts who can elicit requirements and provide oversight of the AI toolset
  • Project management (whether Agile, hybrid, or waterfall) will change drastically. A new way of managing the SDLC is emerging to accommodate the new SDLC and team structure

In future articles, I will explore the different phases of the SDLC, team structures, software development methodologies, and the metrics evolving in response to AI's takeover of software development.

Saturday, August 2, 2025

SDLC at the Dawn of AI

 

The Software Development Life Cycle (SDLC) is undergoing a significant transformation with the rise of Generative AI (GenAI). Here's a high-level breakdown of how GenAI is reshaping each phase of the SDLC:

1. Requirement Gathering

Before GenAI:

  • Relied heavily on manual stakeholder interviews, documentation, and business analysis.

With GenAI:

  • Natural Language Processing (NLP) can convert unstructured inputs (conversations, emails, or meeting transcripts) into structured requirements. Tools like AI-powered requirement elicitation platforms can identify gaps or inconsistencies early.
  • Chatbots powered by GenAI can interact with stakeholders to refine needs in real time.
  • Visual requirement generation (mockups, user stories) from simple text prompts.
  • Example: Chatbots or AI assistants draft user stories and validate requirements with stakeholders in real time.
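As a toy stand-in for what these NLP-driven tools do, the snippet below reshapes a raw stakeholder sentence into the standard user-story template. Real tools use language models, not one regular expression; the pattern and wording here are illustrative assumptions.

```python
import re

def draft_user_story(raw: str) -> str:
    """Toy rule-based extraction of a user story from a stakeholder
    sentence. Real elicitation tools use NLP/LLMs, not a single regex."""
    m = re.match(
        r"(?P<role>.+?) needs? to (?P<goal>.+?) so that (?P<benefit>.+)",
        raw.strip(),
    )
    if not m:
        # a real tool would ask a follow-up question instead
        return "NEEDS CLARIFICATION: " + raw
    return (f"As a {m['role']}, I want to {m['goal']}, "
            f"so that {m['benefit']}.")

print(draft_user_story(
    "warehouse clerk needs to scan pallets so that stock counts stay accurate"))
```

The interesting part of the real tools is the fallback branch: where the rule fails, the AI assistant goes back to the stakeholder to close the gap.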

2. Design

Before GenAI:

  • UI/UX design was manual and iterative.
  • Technical Architecture and Design required deep human expertise.

With GenAI:

  • GenAI can auto-generate wireframes, UI mockups, and designs for other interaction modes (e.g. touch and voice) from textual inputs.
  • It can suggest architecture blueprints based on system goals and constraints.
  • AI-assisted design review tools highlight flaws and recommend best practices.
  • Example (UX): Tools like AI-driven wireframing (e.g., Uizard) or architecture suggestion systems create prototypes, while AI validates designs against best practices or compliance standards.
  • Contemporary AI-powered tools are not yet capable of generating technical architecture and design that fit into an existing landscape (available or desired technologies/tools and infrastructure). Human ingenuity will dominate this area in the near to mid term.

3. Development

Before GenAI:

  • Manual coding, relying on human speed and quality.
  • Code written from scratch.
  • Unit tests created by humans.

With GenAI:

  • GenAI acts as a pair programmer that can generate code and unit tests from the prompt provided. This prompt can be a story (picked up directly from Jira) or supplied by the developer.
  • GenAI is writing boilerplate code and also refactoring code.
  • Auto-generation of entire modules based on specifications or API contracts.
  • Code translation across languages/platforms (e.g., from Python to Java).

  • TDD (Test-Driven Development) is becoming obsolete due to the code-first policy of auto-generation.
  • The current breed of code-generation tools is still maturing in terms of following the architecture and design agreed upon in the Design phase.
  • The contemporary code-generation tool set is good enough to generate code and unit test cases for most scenarios in isolation (at story level, not at Epic/Feature level).
  • Example: GenAI-powered coding assistants (e.g., GitHub Copilot, Tabnine) write, review, and optimize code, boosting developer productivity.
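The boilerplate claim can be pictured with a plain template expander: an AI assistant produces this kind of scaffold from a free-text prompt rather than a fixed template. The function and field names below are illustrative assumptions, not any tool's API.

```python
def scaffold_dataclass(name: str, fields: dict[str, str]) -> str:
    """Emit the kind of boilerplate a coding assistant typically generates
    from a prompt like 'create an Order model with an id and a total'.
    A fixed template stands in for the assistant here."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {name}:",
    ]
    lines += [f"    {field}: {ftype}" for field, ftype in fields.items()]
    return "\n".join(lines)

print(scaffold_dataclass("Order", {"order_id": "int", "total": "float"}))
```

The difference in practice is flexibility: the template handles exactly one shape of request, while the assistant handles an open-ended prompt, which is also why its output needs review.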

4. Testing

Before GenAI:

  • Test cases manually written.
  • Test coverage often incomplete.
  • In the case of BDD (Behavior-Driven Development), tests are auto-generated from the acceptance criteria of Stories/Features/Epics.
  • Mostly, hand-crafted test cases were automated as an afterthought.

With GenAI:

  • Test cases are auto-generated from code or requirements. This drastically enhances code coverage.
  • Tests are auto-generated for automation tools (e.g. Selenium) to execute.
  • GenAI identifies edge cases and potential failure points with the help of human intervention.
  • AI tools predict high-risk code areas, generate edge-case scenarios, or simulate user behavior and real-world scenarios for load testing. Self-healing test scripts adapt to code changes.
  • From a Performance/Load and Security testing perspective, GenAI has yet to catch up.
  • Example: Tools like Testim or Mabl use AI to create and maintain automated tests, reducing test maintenance overhead.
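A crude sketch of "test cases auto-generated from code": enumerate boundary inputs from a function's type hints and record what happens. Real tools (and property-based testers) go far beyond this; the boundary table and helper names are illustrative assumptions.

```python
import inspect

# Boundary values per annotated type -- a crude stand-in for AI-driven
# test-case generation.
BOUNDARY_INPUTS = {int: [0, 1, -1, 2**31 - 1], str: ["", "a"]}

def generate_smoke_tests(fn):
    """Yield (input, result-or-exception) pairs for boundary inputs
    derived from the function's single annotated parameter."""
    params = list(inspect.signature(fn).parameters.values())
    if len(params) != 1:  # keep the toy single-argument
        raise ValueError("sketch handles single-parameter functions only")
    for value in BOUNDARY_INPUTS.get(params[0].annotation, [None]):
        try:
            yield value, fn(value)
        except Exception as exc:  # a real tool would report this as a defect
            yield value, exc

def reciprocal(n: int) -> float:
    return 1 / n

results = dict(generate_smoke_tests(reciprocal))
# the boundary value 0 surfaces ZeroDivisionError -- an edge case "found"
```

Even this toy illustrates the claim in the bullet above: mechanical enumeration surfaces the division-by-zero edge case, but a human still decides whether that behavior is a bug or a contract.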

5. Deployment

Before GenAI:

  • Either manual deployment or automation using a CI/CD pipeline.
  • CI/CD pipelines manually configured and monitored.

With GenAI:

  • AI suggests optimal deployment strategies and rollout plans (e.g., blue-green, canary).
  • AI-driven DevOps tools (e.g., Jenkins with AI plugins) monitor deployments, flag potential failures, and suggest optimal configurations for cloud environments.
  • Predictive analytics for post-deployment risk and rollback needs.
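The canary suggestion above reduces to a promotion gate: compare the canary's health to the stable baseline and decide whether to roll forward or back. An AI-driven pipeline would feed richer signals into a learned model; the fixed threshold and function name below are illustrative assumptions.

```python
def should_promote_canary(canary_error_rate: float,
                          baseline_error_rate: float,
                          tolerance: float = 0.005) -> bool:
    """Promote the canary only if its error rate does not exceed the
    stable baseline by more than `tolerance` (an illustrative 0.5%
    threshold; real rollout controllers use many more signals)."""
    return canary_error_rate <= baseline_error_rate + tolerance

# canary at 0.4% errors vs. baseline 0.2%: within tolerance, promote
print(should_promote_canary(0.004, 0.002))  # True
# canary at 2% errors: outside tolerance, roll back
print(should_promote_canary(0.020, 0.002))  # False
```

The predictive-analytics bullet is this same gate run before the fact: estimating the error rate a rollout will produce instead of measuring it.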

6. Maintenance & Support

Before GenAI:

  • Bug detection and monitoring tools require manual rule configurations.
  • Root cause analysis takes time.
  • Prioritization of support tickets and action on them are manual.
  • Some automatic monitoring of application logs, but action on exceptions is human-heavy.
  • User feedback collection, analysis, and action are human-heavy.

With GenAI:

  • Prioritization of support tickets will remain in the human domain in the near future, but as GenAI seeps deeper into the SDLC, action on prioritized tickets will start falling into the hands (sic!) of GenAI.
  • Automatic monitoring of application logs with automatic action (up to an extent) as GenAI becomes pervasive in the software development and maintenance domain.
  • User feedback analysis will soon become automatic and integrate with auto-actions backed by GenAI.

7. Application / System Monitoring

Before GenAI:

  • Monitoring tools require manual rule configurations.
  • Root cause analysis takes time.
  • Actions on analysis are manual-heavy.

With GenAI:

  • Logs and telemetry are auto-analyzed to detect issues proactively.
  • Root cause analysis is becoming faster, with GenAI assisting by mapping symptoms to code defects.
  • Self-healing systems and GenAI-assisted hotfixes.
  • A new era of proactive and preventive maintenance is about to start.
  • Example: AI-driven observability platforms (e.g., Dynatrace) use predictive analytics to prevent outages.
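The proactive-detection idea can be reduced to its simplest form: flag any monitoring window whose error count deviates sharply from the recent norm. Observability platforms use learned detectors over many signals; the z-score rule and 2-sigma threshold below are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_anomalies(error_counts: list[int], threshold: float = 2.0) -> list[int]:
    """Flag indices whose error count lies more than `threshold` standard
    deviations from the mean -- a minimal stand-in for the learned
    detectors inside AI-driven observability platforms."""
    mu, sigma = mean(error_counts), stdev(error_counts)
    if sigma == 0:
        return []  # perfectly flat series: nothing to flag
    return [i for i, count in enumerate(error_counts)
            if abs(count - mu) / sigma > threshold]

counts = [4, 5, 3, 6, 4, 5, 4, 97]  # errors per window; the last one spikes
print(flag_anomalies(counts))  # flags index 7, the spike
```

Turning this detection into automatic remediation (the "self-healing" bullet) is the part that is still maturing: flagging is cheap, but choosing a safe corrective action is not.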

Cross-Cutting Changes

  • Faster Iterations: GenAI reduces time-to-market by automating repetitive tasks, enabling rapid prototyping and continuous delivery.
  • Shift in Roles:
    • Overlapping roles of Business Analysts and Developers; dev teams are mastering prompt-writing skills.
    • Senior developers and testers focus more on oversight, creative problem-solving, and AI-tool integration rather than manual coding or testing.
  • Improved Quality: AI-driven analytics enhance code quality, security, and performance by catching issues early and suggesting optimizations.
  • Ethical and Governance Challenges: Teams must address AI biases, ensure transparency in AI-generated outputs, and comply with regulations, adding new governance steps to the SDLC.
  • Collaboration: AI tools and natural-language interfaces enable better cross-functional collaboration by translating technical concepts for non-technical stakeholders, bridging the gap between technical and non-technical teams.
  • Documentation: AI generates and updates technical and user documentation.
  • Security: AI scans code for vulnerabilities and recommends patches.

Challenges and Considerations

  • Over-reliance Risk: Blind trust in AI outputs can lead to errors if not validated by humans.
  • Skill Gaps: Teams need training to effectively use GenAI tools and interpret their outputs.
  • Cost and Integration: Adopting GenAI requires investment in tools and infrastructure, plus integration with existing SDLC workflows.
  • Data Privacy: AI tools processing sensitive data (e.g., requirements or code) must comply with privacy laws like GDPR.

Phase-by-phase summary (Before GenAI vs. After GenAI):

Requirements
  • Before GenAI: Manual interviews and documentation converted into requirements (user stories and Epics).
  • After GenAI: AI-assisted requirement extraction and formatting from emails, chats, meeting recordings, and documentation.
  • Remarks: Business & System Analysts will add edge cases and review the collected requirements for inconsistencies and missing scenarios.

Design
  • Before GenAI: A heavy manual process involving technical architecture & design and UX.
  • After GenAI: AI-generated mockups from well-crafted prompts for UX; technical architecture generated deeper than mere scaffolding with the help of GenAI tools.
  • Remarks: At present, GenAI is not mature enough to generate technical architecture and design, nor is it capable of creating UX end to end. For the foreseeable future, human intervention will dominate this area.

Development
  • Before GenAI: Developers write the code and create unit tests.
  • After GenAI: GenAI-assisted tools generate code and unit tests; TDD is dying its own death due to the code-first policy of GenAI-assisted code generation; some brave-heart teams generate code and unit tests directly from Stories/Epics in requirement management tools.
  • Remarks: The current breed of code-generation tools is still maturing in terms of following the architecture and design agreed upon in the Design phase; the contemporary tool set is good enough to generate code and unit test cases for most scenarios in isolation (at story level, not at Epic/Feature level).

Testing
  • Before GenAI: A combination of manual and automatic test creation and execution.
  • After GenAI: GenAI and BDD tool integrations generate tests (beyond acceptance criteria) directly from requirements (Epic/Feature/Story) for the functional perspective.
  • Remarks: The contemporary GenAI-assisted tool set creates functional/behavioral tests very effectively; for Performance/Load and Security testing, GenAI has yet to catch up.

Deployment
  • Before GenAI: A combination of manual steps and automation using a CI/CD pipeline; current automation can execute unit tests upon build and functional/behavioral tests after deployment automatically.
  • After GenAI: The combination of manual steps and CI/CD automation is continuing.

Maintenance and Support
  • Before GenAI: Prioritization of support tickets and action on them are manual.
  • After GenAI: Prioritization will remain in the human domain in the near future, but as GenAI seeps deeper into the SDLC, action on prioritized tickets will start falling into the hands (sic!) of GenAI.
  • Remarks: In the medium term, as AI takes over wider aspects of business and the SDLC, prioritization will start falling into the AI domain with human oversight.

Monitoring
  • Before GenAI: Automatic monitoring; humans are brought into the loop only in case of exceptions.
  • After GenAI: The current regime of automatic monitoring with humans looped in on exceptions is continuing.
  • Remarks: As GenAI matures in system and infrastructure monitoring, human intervention will decline.

In Summary

GenAI compresses SDLC cycles, democratizes access to development, and enhances productivity, but it requires new governance, validation strategies, and human-AI collaboration models.