Welsh Microsoft Guy
26 February 2026

Dave's Notes - Microsoft AI Tour for Partners, London 2026

ai
event-notes
microsoft-ai-tour
microsoft
generative-ai

I spent the day at the Microsoft AI Tour in London. The day started with a somewhat tortuous queue(!) and, once in, the realisation that finding and meeting folk would be a matter of chance rather than intent, given the sheer number of people attending! That also explains the image I've provided: we each sent pictures to one another whilst in different sessions, and ChatGPT stitched them together - don't look too closely! 🤣

So, for those who don't know - the AI Tour 'proper' happened on Tuesday this week, with attendance from many, many organisations using Microsoft technologies. Today, however, was all about Microsoft partners, and the keynote message was clear: we as partners are here to convert the momentum and excitement seen in the market into tangible execution. As a partner, this emphasis on execution over experimentation resonates. The gap between platform capability and production-grade delivery is where most programmes succeed or stall, and that translation from platform capability into governed delivery is where we are spending most of our time with customers.

Having reflected on the day, the AI Tour is no longer an awareness campaign about shiny new features; rather, it is a signal that Microsoft sees organisations actively moving beyond curiosity and seeking real value from AI. The statistics shared by Darren Hardman reinforced that. In the space of a year, organisations reporting a clear AI strategy have moved from 46 per cent to 84 per cent. Agentic adoption has moved from roughly one in ten to six in ten. The detail behind those numbers will vary, but the trajectory is obvious: AI has moved onto the operating agenda.

Microsoft structured this through the Frontier Success Framework: Enrich, Reinvent, Reshape and Bend the Curve. Enrich focusses on employee experience, illustrated through construction use cases where copilots are embedded into frontline workflows. Reinvent is centred on customer engagement, with central and local government and NHS scenarios demonstrating conversational and decision support capabilities in public services. Reshape seeks to redesign core business processes, with financial services examples showing intelligence inserted directly into operational flows. Finally, Bend the Curve was positioned around innovation velocity, with the London Stock Exchange illustrating how AI can materially change the shape of what a business offers, not simply optimise existing tasks.

The language then moved from idea to reality, from curiosity to confidence, from pilots to platforms, from experiments to embedded capability, highlighting a move away from isolated proofs of concept towards governed, scalable platforms. This was supported by a broader operating construct built around AI in the flow of human ambition, ubiquitous innovation and observability at every layer, underpinned by intelligence and trust. Work IQ, Foundry IQ and Fabric IQ were positioned as enabling layers, with Agent 365 and an Agent Factory approach providing a path to industrialisation.

With that framing in place, the subsequent sessions I attended explored aspects of these further. Indeed, the thread that came through most strongly for me was that the interesting work is moving away from things like “how do we call a model?” and towards “how do we make estates, platforms and operating models directly consumable by agents, without weakening governance?” A lot of the content across the day can be summarised as re-tooling existing enterprise substrates so that an agent has predictable, structured ways to discover context, invoke actions, and remain bounded by identity, policy and telemetry.

The data platform sessions landed this particularly well. The pattern for PostgreSQL was simple but consequential: an agent orchestrates, the model reasons, and structured tools perform deterministic work against the database, including SQL, vector search and graph retrieval. The detail that matters is not “LLM meets database”, it is the effort being put into schema awareness, discoverability and retrieval mechanisms so that structured data is addressable without the prompt being a brittle proxy for the schema. The move from bespoke tool wiring to MCP is really a move towards standardisation. If Postgres can expose tools such as list tables or generate query in a consistent way, you reduce one-off integration code and you make it materially easier to secure, observe and evolve over time.
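That orchestrate/reason/execute split can be sketched in a few lines. To be clear, this is a toy illustration rather than anything Microsoft showed: SQLite stands in for PostgreSQL, and the tool names (list_tables, run_query) are my own stand-ins for the kind of MCP-style tools described in the session.

```python
import sqlite3

# Deterministic tools the agent can invoke: the model reasons about which
# tool to use, but once a tool is chosen, execution is predictable code
# against the database rather than free-form model output.
def make_tools(conn):
    def list_tables() -> list[str]:
        # Schema discoverability as a structured tool, not a prompt hack.
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
        return [r[0] for r in rows]

    def run_query(sql: str) -> list[tuple]:
        # Deterministic work: no model involvement once the SQL is fixed.
        return conn.execute(sql).fetchall()

    return {"list_tables": list_tables, "run_query": run_query}

def agent_invoke(tools, tool_name, **kwargs):
    # The orchestration layer dispatches by name, so the integration
    # surface is a stable tool contract rather than one-off wiring.
    if tool_name not in tools:
        raise KeyError(f"unknown tool: {tool_name}")
    return tools[tool_name](**kwargs)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.execute("INSERT INTO orders VALUES (1, 10.0), (2, 32.5)")

tools = make_tools(conn)
tables = agent_invoke(tools, "list_tables")
total = agent_invoke(tools, "run_query",
                     sql="SELECT SUM(amount) FROM orders")[0][0]
```

The point of the shape is the last two calls: the agent only ever sees a named tool with a defined contract, which is exactly what makes the surface securable and observable.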

Fabric then extended the same idea at platform scale. The OneLake narrative, mirroring, shortcuts and open access APIs are all, in effect, trying to solve the same problem: getting organisations out of repeated bespoke integration work and towards a logical unification model where data can be governed and consumed consistently even when the sources remain heterogeneous. The implications are practical. If you can mirror from operational sources and reduce data movement friction, you can shift more effort towards modelling, quality and governance. If you cannot, you burn that effort repeatedly on pipelines and duplication.

What I found useful in the Fabric Real-Time Intelligence content was that Microsoft did not present streaming as a separate specialised track. It was shown as something that sits naturally inside the same workspace model, with artefacts like anomaly detectors treated as first-class objects alongside the stream itself. That matters when you think about how these things are run. If anomaly detection is embedded into the same control plane, under the same RBAC model, surfaced in the same UI, you reduce the surface area of “special” tooling that only a subset of the organisation can operate. The trade-off is maturity. Some of these capabilities are still in preview, which has direct implications for SLA, supportability and how far you take them into material decisioning.

Copilot in Fabric and Power BI was at its most interesting when it shifted away from “chat with your data” marketing and into “prep data for AI” mechanics. The ability to simplify schemas, hide fields, define verified answers, and add domain-specific instructions is not a cosmetic feature. It is a recognition that conversational interfaces amplify ambiguity. If executives can ask free-form questions across multiple reports and semantic models, then semantic design becomes a production concern, not a reporting nicety. Weak naming, unclear measures, inconsistent relationships and implicit business rules get exposed very quickly. In that sense, Copilot is a forcing function for better semantic hygiene, but it does not do the hygiene for you.
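A rough sketch of what “verified answers” amounts to mechanically: a curated mapping from question intents to vetted measures, consulted before any free-form interpretation. Everything here - the phrases, the measure expressions, the function shape - is invented for illustration and is not the actual Copilot feature surface.

```python
# Hypothetical curated layer: phrases the business has signed off,
# mapped to the measure that should always answer them.
VERIFIED = {
    "total revenue": "SUM(FactSales[Amount])",          # illustrative
    "active customers": "DISTINCTCOUNT(DimCustomer[Id])",  # illustrative
}

def answer(question: str):
    # Verified answers are checked first; only unmatched questions fall
    # through to free-form interpretation, where ambiguity bites.
    q = question.lower()
    for phrase, measure in VERIFIED.items():
        if phrase in q:
            return ("verified", measure)
    return ("unverified", None)

hit = answer("What is the total revenue this year?")
miss = answer("Why was Tuesday weird?")
```

The design point is that the verified path shrinks the space in which weak naming and implicit business rules can produce a plausible-but-wrong answer.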

The Azure Copilot sessions were the clearest example of Microsoft trying to keep this “agent-ready” shift inside existing governance boundaries. The “How AI tools work” architecture slide did the heavy lifting. Copilot sits in the portal experience, pulls context from what you are doing, routes through orchestration that includes grounding and filtering, and then invokes plugins that ultimately call Azure services under the user’s identity. The boundary is RBAC. Copilot is not positioned as a privileged control plane. It is an interaction layer over ARM and other APIs, constrained by the permissions of the signed-in user. That design choice is what makes enterprise adoption plausible, because it maps AI-driven interaction back onto established identity and policy constructs.
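The RBAC boundary can be caricatured in a few lines. Role names and actions below are entirely made up, but the sketch captures the design choice described: the assistant layer holds no privilege of its own, and every action resolves against the signed-in user's role assignments.

```python
# Illustrative role-to-action map; real Azure RBAC is far richer, but
# the shape of the check is the point.
ROLE_ACTIONS = {
    "Reader": {"vm.read"},
    "Contributor": {"vm.read", "vm.restart"},
}

def copilot_invoke(user_roles: set[str], action: str) -> str:
    # The "agent" is just an interaction layer: it can only do what the
    # caller's identity already permits.
    allowed = set().union(*(ROLE_ACTIONS.get(r, set()) for r in user_roles))
    if action not in allowed:
        raise PermissionError(f"{action} denied by RBAC")
    return f"executed {action} as the signed-in user"
```

Mapping AI interaction back onto this check is what keeps adoption inside established identity and policy constructs rather than creating a new privileged path.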

Agents then push that further because they bring intent-to-action into scope. At that point, the operating questions become more important than the UX. Who can create agents, who can run them, and under what context? What approvals are required for actions? How is audit handled when an AI-assisted conversation results in a write operation? If the demos were deliberately clean, the implications are not. Once you compress the effort of operational actions, you also compress the margin for error unless policy, least privilege and logging are strong.

The session that tied most of the day together for me was the security and operationalisation content, particularly the use of API Management as an AI gateway. Treating Azure OpenAI as a backend behind APIM is not a small implementation detail. It is the point at which AI stops being an experiment and becomes part of the API estate. Token limits, quotas, telemetry and content safety enforced at ingress give you an operating envelope that you can reason about. Without that, cost becomes reactive and governance becomes fragmented across application teams. With it, you can apply consistent policies, measure usage with meaningful dimensions, and block unsafe requests at the boundary with auditable behaviour.
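The operating envelope idea can be illustrated with a toy token budget enforced at ingress. This is a conceptual sketch of the role the gateway plays, not APIM policy syntax; the limits and subscription keys are invented.

```python
# Toy per-key token budget, standing in for gateway-enforced limits.
class TokenQuota:
    def __init__(self, limit_per_window: int):
        self.limit = limit_per_window
        self.used: dict[str, int] = {}

    def admit(self, key: str, estimated_tokens: int) -> bool:
        # Decide at the boundary, before the request reaches the model
        # backend, so cost stays bounded and the rejection is auditable.
        spent = self.used.get(key, 0)
        if spent + estimated_tokens > self.limit:
            return False  # would surface as a 429 at the gateway
        self.used[key] = spent + estimated_tokens
        return True

quota = TokenQuota(limit_per_window=100)
first = quota.admit("team-a", 60)    # within budget
second = quota.admit("team-a", 60)   # would exceed budget, rejected
other = quota.admit("team-b", 60)    # separate key, separate budget
```

The per-key dimension is what makes usage measurable in terms the business recognises - team, application, product - rather than as one undifferentiated model bill.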

The later shift into MCP via APIM is also more significant than it looks at first glance. If APIM-managed REST APIs can be exposed as MCP servers, you can convert an existing governed API estate into structured tools for agents without bypassing your ingress controls. That turns “agents need tools” from a bespoke integration exercise into an adapter pattern over what most organisations already run. The maturity caveat still applies, but the direction is clear. Microsoft are trying to make agent consumption a standard capability layered onto the existing control fabric, not a parallel integration universe.
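The adapter pattern might look something like the sketch below: an existing REST operation wrapped as a discoverable tool whose calls still flow through the gateway. Everything here - the route, the handler, the tool shape - is a hypothetical stand-in of my own, not the actual APIM-to-MCP feature.

```python
# A "tool" is metadata the agent can list plus a callable that still
# goes through the same ingress path as any other API consumer.
def make_tool(name: str, method: str, path: str, handler):
    return {"name": name,
            "schema": {"method": method, "path": path},
            "call": handler}

def fake_gateway(method: str, path: str, params: dict):
    # Stand-in for the APIM-fronted backend; auth, throttling and
    # content policies would run here before any backend is reached.
    if (method, path) == ("GET", "/orders/{id}"):
        return {"id": params["id"], "status": "shipped"}
    raise LookupError("no route")

get_order = make_tool(
    "get_order", "GET", "/orders/{id}",
    lambda id: fake_gateway("GET", "/orders/{id}", {"id": id}))

tools = [get_order]                 # what the agent would "discover"
result = tools[0]["call"](id=42)    # invocation stays behind the gateway
```

Because the callable routes through the gateway stand-in, nothing the agent does escapes the controls already applied to human and application callers - which is the whole appeal of the pattern.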

The final demo, a data agent fronted by APIM, was the cleanest articulation of separation of concerns I saw all day. The agent handled reasoning and orchestration. When it needed to act, it invoked a tool. The tool was an API exposed via APIM. APIM enforced authentication, throttling, quota and content policies. The backend executed deterministic data operations. In practical terms, that is how you keep probabilistic reasoning close to the user experience while keeping deterministic execution and control surfaces in the places we already know how to govern.

Across the day, the pattern held up in multiple places. Data platforms are being redesigned to be tool-enabled substrates for agents. Copilot is becoming a consistent interaction surface across portal, data workloads, developer tooling and even the shell. The discipline, however, remains familiar. Identity governs. APIs execute. Observability and quotas define the operating envelope. Content safety belongs at the boundary. Preview features need explicit risk decisions before they become production dependencies.

If there is a single takeaway I am taking back internally, it is that the technology is no longer the hard part. The differentiator is the operating model: RBAC strategy that is treated as foundational, policy enforcement that actually enforces, semantic models that are curated for conversational consumption, and engineering controls that stop acceleration turning into uncontrolled variability. AI in Azure is increasingly integrated across the control plane, data plane, developer plane and execution plane. That is not a feature rollout. It is an interaction model shift, and it will reward teams who invest in the unglamorous parts early.

So if you are working through what this means for your control plane, data platform or DevOps model, I’m always happy to compare notes as is anyone here in the Microsoft Practice at IBM.

#ibm #microsoft #aitour
