We hope you had fun at Easter. A lot changed while you were away. So here is a quick round-up of what happened this week and what this means for you.
LLM Expert Insights,
Packt
🧩 EXPERT INSIGHTS (By Paul Singh)
Can I Be Your (Multi-) Agent? A2A Explained.
In my book, Generative AI for Cloud Solutions, I detailed the "multi-agent" concept before it became popular and predicted its significance in 2025. That prediction has proven accurate.
As a quick overview, AI agents (agentic workloads) interact with their environment (take action), collect data, and perform tasks autonomously to achieve predetermined goals. They learn, adapt, and make decisions based on the information they gather, without constant human intervention. Ideally, you can (and should) also allow for human intervention in the system, known as human-in-the-loop.
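The perceive-decide-act loop described above, with an optional human-in-the-loop checkpoint, can be sketched in a few lines. This is a toy illustration, not any specific framework's API; every name here is made up for the example.

```python
# Hypothetical sketch of an agent loop: perceive, decide, act, with an
# optional human-in-the-loop checkpoint. All names are illustrative.

def run_agent(goal, observations, approve=None):
    """Run a toy perceive-decide-act loop until the goal is reached.

    approve: optional human-in-the-loop callback; if it returns False,
    the proposed action is skipped.
    """
    actions_taken = []
    for obs in observations:                  # perceive: collect data
        action = f"handle:{obs}"              # decide: pick an action
        if approve is not None and not approve(action):
            continue                          # human vetoed this step
        actions_taken.append(action)          # act: execute autonomously
        if obs == goal:                       # stop once the goal is met
            break
    return actions_taken

# A human-in-the-loop policy that blocks risky actions:
result = run_agent("deploy", ["plan", "test", "deploy"],
                   approve=lambda a: "delete" not in a)
```

The `approve` callback is the human-in-the-loop hook: in a real system it might pause the agent and wait for a person to confirm a consequential step.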
When multiple agents work together to address a problem or provide a service, the new Agent-to-Agent or Agent2Agent (A2A) communication protocol standardizes inter-agent communication. A2A is a lightweight JSON-RPC protocol that lets agents across clouds swap context instead of code or credentials, all over HTTPS. As one can imagine, we already live in a world where agents are created on a daily basis. Soon, AI agents will not only be communicating autonomously but also creating other agents daily (or perhaps every minute)!
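Because A2A rides on JSON-RPC 2.0 over HTTPS, the wire format is just a JSON envelope. The sketch below assembles one such envelope in Python; the method name and params structure are assumptions for illustration and should be checked against the actual A2A specification.

```python
import json
import uuid

# Illustrative only: a minimal JSON-RPC 2.0 envelope of the kind A2A uses
# to let one agent hand a task to another over HTTPS. The method name and
# params shape below are assumptions, not the exact A2A schema.

request = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),            # correlates the async response
    "method": "tasks/send",             # assumed A2A-style method name
    "params": {
        "task": {
            "message": "Summarize Q1 sales figures",
            "context": {"region": "EMEA"},   # shared context, not code
        }
    },
}

payload = json.dumps(request)   # this JSON body is what travels over HTTPS
```

Note what is absent: no credentials and no executable code cross the wire, only context, which is the point of the protocol.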
It is a bit daunting to imagine this; however, we need a framework for how these agents work and communicate together, and hence the birth of A2A.
Google: Earlier this month, at Google Cloud Next 2025, Google made the following key AI announcements on agents and agent/multi-agent ecosystems: the Agent2Agent (A2A) protocol, a new open protocol intended to help enterprises support multi-agent systems, along with the Agent Development Kit (ADK), a new open-source AI framework now in preview that is designed to simplify building multi-agent systems while maintaining control over agent behavior. ADK supports the Model Context Protocol (MCP), an open standard introduced by Anthropic and adopted by OpenAI, with the goal of standardizing how AI applications connect with external tools, data sources, and systems. We'll do a deeper dive into MCP in an upcoming issue.
They also announced Agent Garden—a collection of pre-built agent samples, tools, and connectors available in ADK. Agent Garden also has an Agent Engine, used for deploying AI agents with enterprise-grade controls, and an Agent Designer, a no-code tool in Agentspace that allows anyone to create custom agents.
Microsoft: In parallel, Microsoft offers both Semantic Kernel and AutoGen, two open-source frameworks for developing and orchestrating multiple AI agents. So, while Google's ADK is new, Azure already has a mature, flexible, and production-hardened agent development stack available today. Microsoft's multi-agent ecosystem is built on common principles, such as a standardized declarative workflow for multi-agent orchestration. For developers looking to experiment, AutoGen provides a playground for rapid innovation. You can drop Semantic Kernel into your Azure AI Foundry stack or Google Agentspace Enterprise for instant, secure, async interoperability with any A2A-compliant agent, regardless of modality.
For those ready to scale, Semantic Kernel offers the stability required for production applications. A recent post by one of my colleagues, Evan Matson, on the Azure AI Foundry Blog describes integrating Semantic Kernel Python with Google's A2A protocol.
If this excites you and you are beginning to explore GenAI, my book Generative AI for Cloud Solutions offers a quick-start guide. It gives you a foundational understanding of the interplay between LLMs and ChatGPT, and shows how to develop efficient and scalable solutions on the cloud.
Grab a copy of Generative AI for Cloud Solutions written by Paul Singh
Architect modern AI LLMs in secure, scalable, and ethical cloud environments.
Satya Nadella bats for Copilot as UI for AI
In his recent LinkedIn post, Nadella highlighted four Copilot features: Agents, Notebooks, Search, and Create. The update introduces intelligent agents like Researcher and Analyst, along with a new Agent Store that expands these capabilities with partner offerings and custom agents built in Copilot Studio. You can use Notebooks to centralize diverse project data. Enhanced Search works across all apps, including third-party platforms, providing comprehensive answers with source data. The Create function can transform presentations into videos and generate images from prompts. Check out these updates here.
NVIDIA’s NeMo is now generally available
NVIDIA's NeMo microservices are now generally available, empowering enterprises to create AI agents that boost employee productivity. These tools facilitate model customization, curation, and evaluation, enabling continuous optimization. Learn about NeMo here.
UI-TARS-1.5 excels at GUI interaction
UI-TARS-1.5 is an open-source multimodal agent that excels in GUI interaction. It performs exceptionally well in computer, browser, and phone use, demonstrating human-like perception. Enhanced reasoning through reinforcement learning allows it to generalize effectively in web browsing and gameplay. Here is a quick round-up of TARS' capabilities and performance.
OpenAI's image generation is now an API
After the success of the image generation feature in ChatGPT, OpenAI has made it available as the gpt-image-1 API. Sam Altman, in one of his X posts, told the community that the API version lets you control moderation sensitivity, which is not available in the ChatGPT version. OpenAI is working with businesses like Canva, HubSpot, and GoDaddy, among others, to apply the API across a range of use cases. Find out more here.
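The moderation control mentioned above is exposed as a request parameter. The sketch below only assembles a request body for the gpt-image-1 endpoint; the parameter names follow OpenAI's Images API as I understand it, so verify them against the official reference before use. No network call is made.

```python
import json

# Hedged sketch: building a gpt-image-1 request body with explicit
# moderation control. Parameter names are assumed from OpenAI's Images
# API docs; check the official reference before relying on them.

request_body = {
    "model": "gpt-image-1",
    "prompt": "A watercolor logo of a paper airplane",
    "size": "1024x1024",
    "moderation": "low",   # relax moderation sensitivity (API-only control)
}

body_json = json.dumps(request_body)   # body you would POST to the API
```

The `moderation` field is the API-only knob the post refers to; in ChatGPT there is no equivalent user-facing setting.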
o3 is more accurate but hallucinates a lot
OpenAI recently released the o3 and o4-mini System Card, which details the capabilities and safety evaluations of its new models, combining advanced reasoning with tool use. The results show that o3 and o4-mini hallucinate more than older models, despite their improved reasoning. OpenAI doesn't yet know why, or how to fix it. You can check out the results here.
MIT builds a Periodic Table of Machine Learning
MIT researchers have created a Periodic Table of Machine Learning, a framework connecting over 20 classical algorithms. It reveals how existing algorithms relate to one another, identifies gaps, and enables the discovery of new algorithms by unifying existing approaches. The periodic table offers a toolkit for designing novel AI algorithms more efficiently. You can read the research paper on this framework here.
📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.
If you have any comments or feedback, just reply back to this email.
Thanks for reading and have a great day!
That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️
We would love to know what you thought—your feedback helps us keep leveling up.
Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)