David, a Welsh Microsoft Guy
7 August 2025

The speed of AI, the discipline of human oversight: how I built a migration assessment tool in just seven days.

ai · generative-ai · leadership · microsoft · azure

I've just ticked over to three years at IBM - how time flies! Normally I'd do a bit of a recap of the year, but instead I've been pretty quiet over the last couple of weeks.

This is because over the last seven or so days I had a choice: send an SME off for weeks and weeks of assessing and documenting, or take a bolder approach that would give us something reusable down the road. The choice was clear to me. I went from zero to a functioning capability that exported data from a popular integration services SaaS platform via its API, ingested it into a local SQLite database, and then identified and rationalised that data ready for assessment and reporting.
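The export-and-ingest step can be sketched in a few lines. The platform and its API response shape aren't named in the post, so the record fields below (id, name, environment, last_run) are purely illustrative - a minimal sketch of loading exported integration metadata into local SQLite for later querying:

```python
import sqlite3

# Hypothetical record shape; in practice these would come from the
# integration platform's REST API export (platform unnamed here).
SAMPLE_EXPORT = [
    {"id": "int-001", "name": "OrderSync", "environment": "production", "last_run": "2025-07-30"},
    {"id": "int-002", "name": "LegacyFeed", "environment": "production", "last_run": None},
]

def ingest(records, conn):
    """Load exported integration metadata into a local SQLite table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS integrations ("
        "id TEXT PRIMARY KEY, name TEXT, environment TEXT, last_run TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO integrations "
        "VALUES (:id, :name, :environment, :last_run)",
        records,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
ingest(SAMPLE_EXPORT, conn)

# Once the data is local, rationalisation becomes plain SQL:
# e.g. which production assets show evidence of recent use?
active = conn.execute(
    "SELECT COUNT(*) FROM integrations WHERE last_run IS NOT NULL"
).fetchone()[0]
print(active)  # → 1
```

Having everything in SQLite means the later assessment and reporting steps are just queries, rather than repeated API calls.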

From this I was able to identify all the integrations, delineating between production assets that are actively used and those which are in production but not actively used (and therefore require further investigation). I then took this information and built out an Azure cost model for production that accounted for the organisation's busy periods and differentiated between complex and simple integrations, using the Azure Retail Prices API. Yes, I essentially created a dynamic, self-contained Azure Pricing Calculator for the identified in-scope Azure services, using my own data!
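For anyone curious about the pricing side: the Azure Retail Prices API is a public, unauthenticated endpoint filtered with OData. The sketch below just builds a request URL for one service in one region - the service name, region, and currency are illustrative choices, not details from the post:

```python
import urllib.parse

RETAIL_PRICES_ENDPOINT = "https://prices.azure.com/api/retail/prices"

def build_price_query(service_name, region, currency="GBP"):
    """Build a Retail Prices API request URL for one service in one region.

    The API needs no authentication; results come back as JSON pages
    (an Items array plus a NextPageLink for pagination).
    """
    odata_filter = (
        f"serviceName eq '{service_name}' and armRegionName eq '{region}'"
    )
    params = {"$filter": odata_filter, "currencyCode": currency}
    return f"{RETAIL_PRICES_ENDPOINT}?{urllib.parse.urlencode(params)}"

# Example: price meters for Logic Apps in UK South, in GBP.
url = build_price_query("Logic Apps", "uksouth")
print(url)
```

Fetching that URL (with any HTTP client) returns the retail meters, which can then be joined against the integration data in SQLite to drive the cost model.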

In doing this, I thought it would be useful to share my learnings from working with Visual Studio Code and Claude Sonnet 4 in agent mode, so here goes!

  • Claude Sonnet 4 is a joy to work with. Seriously, it just worked: no drama, it understood what I was trying to achieve and went and executed against it.
  • Whatever you do in test will inevitably break in production. I did a whole bunch of testing in a separate environment with exemplar integrations, where things appeared fine - until I got to production, where the size and complexity of things meant an almost complete re-write at one point!
  • Set guardrails and continually reinforce them - otherwise you wind up with millions of Test_, debug_, and other scripts (I was just working locally for this project). As it works through issues it will also create “new” versions of whatever you’re working on. While sometimes helpful, cleaning up afterwards was a nightmare.
  • Get it to explain to you what it’s going to do before letting it do it. It’s great at what it does, but it can go off on tangents. Keeping the human in the loop here is critical, as is being systematic in describing what you want, reasoning over it, and only then letting it run.
  • In my scenario, where I was pulling data from APIs, the human in the loop needs to identify all the relevant ones and make sure you can get the data you need. The AI won’t do this - and worse, if it can’t get the data it will simply hardcode values instead. Not what you need when working with costings. DONT_STORE_HARDCODED_CURRENCY_VALUES became a common mantra!
  • When you run out of credits for Claude, you fall back to GPT-4 models. Whilst still good, it definitely wasn’t the same experience. The ability to reason over multiple sets of files and systematically work through things just isn’t at the same level. For this reason alone, I’ve upgraded my GitHub Copilot subscription to the Plus tier.
  • Whilst we tend to think in terms of code and applications, I stumbled across the AI’s ability to create dashboards. This was a gamechanger, completely transforming the reports from .doc/.xlsx/.md files that needed manual formatting into something that - if I might say - was both gorgeous (my exact prompt was: “create me a gorgeous report that…”) and dynamically data driven. Bringing data to life is super important, and this worked brilliantly.
  • Good architecture skills are still in demand - being able to visualise what was needed, both from the technology and business outcome perspectives, is still super important. Yes, you could just “tell” it to do stuff, but to work well you still need good separation of concerns. A lot of my natural architect tendencies made this a modular, structured capability rather than a tangle of scripts. That discipline meant I could adapt, extend, and roll back parts of the solution without losing sight of the bigger picture - and that’s something AI alone won’t give you.
  • Finally, be prepared to be frustrated. Even with all this, it went off the rails a few times and I had to roll back to a backed-up version and start again.
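The hardcoded-values guardrail above can be enforced in code rather than just in prompts. This is a sketch of the idea, not my actual implementation: fail loudly when a price is missing from the API response, so nobody (human or AI) quietly substitutes a made-up number. The field names (retailPrice, meterName) follow the Azure Retail Prices API response shape:

```python
class MissingPriceError(Exception):
    """Raised when an API record has no price, instead of guessing one."""

def require_price(record):
    """Return the retail price from an API record, or raise.

    In the spirit of DONT_STORE_HARDCODED_CURRENCY_VALUES: a missing
    price means the API call needs investigating, never a fallback value.
    """
    price = record.get("retailPrice")
    if price is None:
        raise MissingPriceError(
            f"No retailPrice for {record.get('meterName', 'unknown meter')} - "
            "fix the API query instead of hardcoding a value"
        )
    return price

# A well-formed record passes straight through:
print(require_price({"retailPrice": 0.000016, "meterName": "Standard Connector"}))
```

Putting the check in code means the guardrail survives even when the agent rewrites the surrounding scripts.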

Plus, this is just the start. Now that I have the data and the target Azure services, the next step is a Bicep script to scaffold everything that's needed, in turn enabling consultants to focus purely on implementing business logic, configuring the required services, and ensuring we get to value as quickly as possible.

Now, I've posted before about how I'm not a developer and, quite frankly, I probably never will be - but (in my humble opinion) I'm a good architect, and these tools are exponentially increasing my impact, which is for sure not a terrible position to be in. However, what this has ever so painfully highlighted is that if you don't understand how to use these tools to your benefit, others will. That said, these tools are not magic bullets, nor will they make you an expert in something you aren't today - so keep learning, start small, get confident, then use them appropriately!

#ibm #automation #growth #claude #azure #integration #alwayslearning #vscode
