From Curiosity to Capability: What My AI Learning Journey Has Actually Delivered

Introduction

Over the past few weeks, I’ve been deliberately exploring how AI can support real-world software delivery—not as a novelty, but as a practical tool embedded into day-to-day engineering work.

This hasn’t been about generating code for the sake of it. Instead, it’s been about applying AI to software at different stages of its lifecycle, with different levels of ownership and complexity. What’s emerged is a clearer picture: AI is most valuable not when it replaces development, but when it accelerates understanding, improves quality, and reduces friction.

Here’s how that has played out in practice.


1. Improving Code I Already Own

The easiest place to start was with code I had written myself—tools that are already in use and solving real problems.

Because I understood the intent and behaviour of these applications, AI became a powerful second pair of eyes. I used it to:

  • Generate unit tests for areas that had little or no coverage
  • Identify gaps in build and deployment pipelines
  • Highlight potential security concerns
  • Suggest refactoring opportunities for readability and maintainability
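
To make the first of those concrete, here is a minimal sketch of the kind of test AI can draft for an existing helper. The `slugify` function is a hypothetical stand-in for code I already owned, not a real example from my tools; the tests target the edge cases that previously had no coverage:

```python
import re

def slugify(title: str) -> str:
    """Hypothetical existing helper: turn a title into a URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# AI-drafted tests covering behaviour that had no coverage before
def test_basic_title():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_collapses():
    assert slugify("AI: From Curiosity, to Capability!") == "ai-from-curiosity-to-capability"

def test_empty_string():
    assert slugify("") == ""
```

Because I already knew what `slugify` was supposed to do, reviewing these generated tests took minutes; the value is in the coverage, not the cleverness.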

The result wasn’t dramatic rewrites. Instead, it was steady, compounding improvement:

  • Higher confidence in changes due to better test coverage
  • More reliable builds and deployments
  • Reduced time spent on manual code reviews
  • Cleaner, more maintainable codebases

This is where AI felt most immediately productive—augmenting existing knowledge rather than trying to replace it.


2. Making Sense of Legacy Systems

A more interesting challenge came from systems I hadn’t written—particularly those with limited documentation and where historical knowledge had faded over time.

Here, AI acted less like a coding assistant and more like a translation layer.

I used it to:

  • Analyse unfamiliar code and explain intent
  • Generate technical documentation from existing implementations
  • Identify outdated dependencies and suggest upgrade paths
  • Propose test strategies for systems that had none
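
As one small example of the dependency work, the raw data for an upgrade review can be gathered from pip’s own machine-readable output and summarised for the AI (or a human) to assess. This is a sketch assuming a pip-managed Python project; `upgrade_report` is an illustrative helper, not part of any tool mentioned above:

```python
import json
import subprocess

def outdated_packages() -> list[dict]:
    """List installed packages with newer versions available,
    using pip's JSON output (`pip list --outdated --format=json`)."""
    result = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def upgrade_report(packages: list[dict]) -> list[str]:
    """Summarise pip's JSON records as 'name: current -> latest' lines."""
    return [
        f"{p['name']}: {p['version']} -> {p['latest_version']}"
        for p in packages
    ]
```

Feeding a report like this into a conversation about a legacy system turns a vague “the dependencies are old” into a specific, prioritised upgrade path.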

What would typically take days of manual exploration could be accelerated significantly. More importantly, it reduced the risk of “guesswork engineering”—making changes without fully understanding the system.

Key outcomes included:

  • Faster onboarding into legacy systems
  • Improved stability through better understanding
  • Reduced reliance on tribal knowledge
  • A foundation for future modernisation

This was one of the most valuable use cases: turning unknown systems into known ones.


3. Tailoring the Tools I Use Every Day

Another unexpected benefit came from applying AI to open-source tools I use regularly.

Instead of working around limitations or minor frustrations, I was able to:

  • Explore the codebase quickly
  • Identify the root cause of issues
  • Implement targeted fixes or enhancements

AI reduced the barrier to entry for modifying third-party code. Tasks that might previously have felt too time-consuming or complex became achievable in a short space of time.

The impact here was subtle but meaningful:

  • Smoother day-to-day workflows
  • Faster resolution of small but persistent issues
  • Greater control over the tools I rely on

It shifted the mindset from “adapting to tools” to “adapting tools to fit the way I work.”


4. Reviving Older, Customer-Facing Solutions

Some of the most challenging scenarios involved older solutions that are still in use but no longer actively developed.

These tend to surface through support cases or escalations, often requiring rapid understanding and targeted fixes.

AI proved particularly useful in:

  • Interpreting older coding patterns and structures
  • Diagnosing issues from limited context
  • Suggesting safe, minimal changes to resolve problems
  • Documenting behaviour that had never been formally captured

This led to:

  • Faster resolution times for complex issues
  • Fewer repeat incidents
  • Increased confidence when working in fragile codebases

Rather than treating these systems as untouchable, AI made them accessible again.


5. Enhancing Actively Maintained Solutions

Finally, I applied the same approaches to modern, actively maintained solutions—where expectations around quality, security, and consistency are much higher.

In this space, AI supported:

  • Continuous improvement of test coverage
  • Ongoing security reviews
  • Documentation generation and updates
  • Ensuring alignment with current platform standards

The key difference here is that AI becomes part of the development lifecycle, not just a one-off tool.

Benefits included:

  • More consistent quality across releases
  • Faster delivery of enhancements
  • Improved confidence in production changes
  • Better alignment with evolving standards

This is where AI starts to feel like a long-term capability rather than a short-term productivity boost.


What I’ve Learned

Across all of these scenarios, a few consistent themes have emerged.

1. AI Accelerates Understanding More Than It Replaces Thinking

The biggest gains came from reducing the time it takes to understand code—not from blindly generating it.

2. Context Still Matters

AI is most effective when you can validate its output. The better your understanding of the system, the more value you get.

3. Small Improvements Compound Quickly

Adding tests, improving pipelines, and tightening security might seem incremental, but together they significantly improve delivery confidence.

4. Legacy Work Is Where AI Shines

The less documented and more complex a system is, the more impact AI can have in making it accessible again.

5. It Changes What Feels “Worth Doing”

Tasks that previously felt too time-consuming—like fixing minor issues in third-party tools or documenting old systems—suddenly become viable.


Where This Goes Next

This is still early in the journey, but the direction is clear.

AI is not just a coding assistant—it’s becoming a core part of how we:

  • Understand systems
  • Maintain quality
  • Reduce risk
  • Deliver improvements faster

The next step is to move from individual use to repeatable patterns—embedding these approaches into team workflows, standards, and expectations.

Because the real value isn’t just in what AI can do—it’s in how consistently we can apply it.
