The Linux Foundation Announces Formation of the Agentic AI Foundation
The Linux Foundation announced the newly formed Agentic AI Foundation to standardize and support an open, collaborative ecosystem for agentic AI.
The first contributions come from Anthropic, Block, and OpenAI with their Model Context Protocol (MCP), Goose, and AGENTS.md respectively.
MCP is a protocol for connecting AI agents to the external systems you want them to have access to: your calendar, email, files, and so on. This forms the basis of agentic AI, which is AI that can perform tasks on your behalf. Some examples they give are accessing your calendar so the AI can personalize itself and make appointments, creating 3D models in Blender and sending them to a 3D printer, and connecting databases across an organization to power a unified chatbot.
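Under the hood, MCP is built on JSON-RPC 2.0: a host asks a server to invoke a named tool and gets a structured result back. A minimal sketch of what such a request looks like (the tool name and arguments here are hypothetical, not part of any real MCP server):

```python
import json

# Illustrative sketch only: MCP messages are JSON-RPC 2.0, and a client
# invokes a server-side tool with a "tools/call" request. The tool name
# "calendar.create_event" and its arguments are invented for this example.
def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP-style tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

msg = make_tool_call(1, "calendar.create_event",
                     {"title": "Dentist", "start": "2025-07-01T09:00"})
print(msg)
```

The appeal of the design is that any host that speaks this message format can talk to any server that exposes tools, which is what makes the broad adoption below possible.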
The protocol has been adopted by Claude, Cursor, Microsoft Copilot, Gemini, VS Code, ChatGPT, and other popular platforms, as developers and enterprises gravitate toward its simple integration method, security controls, and faster deployment.
Goose is a local-first framework for AI that provides “standardized MCP-based integration to provide a structured, reliable, and trusted environment for building and executing agentic workflows.” Developed by Block, it is considered the reference implementation of MCP.
AGENTS.md is a file that gives AI coding agents the information they need to work with a codebase, such as build instructions, tests, and coding conventions. You can think of it as a README file for AI agents instead of humans. It might contain information an AI agent needs that humans wouldn’t.
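There is no rigid schema; AGENTS.md is just markdown that the agent reads from the repository root. A hypothetical example for a Node project might look like this (the commands and conventions are invented for illustration):

```markdown
# AGENTS.md

## Build
- Install dependencies with `npm install`
- Build with `npm run build`

## Tests
- Run the full suite with `npm test` before committing

## Conventions
- TypeScript strict mode is enabled; do not introduce `any`
- Commit messages follow Conventional Commits
```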
Other contributions from OpenAI include the Agentic Commerce Protocol, which allows agents to make purchases on behalf of the user; Codex CLI, an AI coding agent; and the Agents SDK and Apps SDK, tools built on MCP that help developers create apps for an interoperable app ecosystem.
This announcement comes as operating systems are already adding agentic features, such as Windows’ experimental agentic features. Though Microsoft has put a lot of work into securing them, it’s not clear whether it’s enough.
AI agents have already deleted entire databases and wiped hard drives, complete with signature AI lies and hallucinations. These agents need to be highly restricted in what they’re allowed to do to avoid situations like these. They are also vulnerable to hijacking by attackers: because agents produce different outputs for the same prompt, an attacker can retry the same attack until it succeeds.
To demonstrate this, the US AI Safety Institute (US AISI) took five prompt injection tasks and attempted each attack 25 times. After repeated attempts, the average attack success rate increased from 57% to 80%, and the success rate for individual tasks changed significantly.
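The boost from retrying follows directly from the nondeterminism: if a single attempt succeeds with probability p, then (assuming independent attempts, which is a simplification) at least one of n attempts succeeds with probability 1 − (1 − p)^n. A quick illustration with made-up per-attempt rates:

```python
# Illustrative only: these per-attempt success probabilities are made up,
# not taken from the US AISI evaluation.
def success_within(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1 - (1 - p) ** n

for p in (0.05, 0.20, 0.50):
    print(f"p={p:.2f}: 1 try -> {success_within(p, 1):.2f}, "
          f"25 tries -> {success_within(p, 25):.2f}")
```

Even an attack that works only one time in twenty becomes more likely than not to succeed within 25 tries, which is why nondeterministic agents are such an attractive target.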
The standardization of agentic AI will help make it smoother and more usable, but security will still be largely down to each operating system’s implementation. We need to proceed with extreme caution, but it seems like some companies are barreling toward agentic AI with their own ideas about how to make it secure.
As frequent readers of mine will know, I’m a huge fan of standardization. On top of the interoperability it enables, it gives companies a chance to share what they have all learned, discard the bad parts, and keep only the best stuff. Proprietary products tend to lack features their competitors have, or suffer from issues another product has already solved. With standardization, you can decide on the best practices and implement them across an entire ecosystem.
While it’s great to see standardization for the protocols, I think it’ll still be up to each operating system to determine how powerful agentic AI is. Perhaps some standardization on that front would be good as well, so they can decide on best practices for how much an AI agent should be able to do, and how to fulfill the principle of least privilege for them.
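One way an OS or agent runtime could enforce least privilege is a deny-by-default allowlist that every tool call must pass before it executes. A hypothetical sketch follows; the tool names and the policy shape are invented for illustration and are not part of MCP or any shipping OS:

```python
# Hypothetical least-privilege gate for agent tool calls. Tool names
# ("calendar.read", "files.delete") and the policy class are invented.
READ_ONLY = {"calendar.read", "files.read"}

class ToolPolicy:
    def __init__(self, allowed: set[str]):
        self.allowed = set(allowed)

    def check(self, tool: str) -> bool:
        """Deny by default: a call runs only if it is explicitly allowed."""
        return tool in self.allowed

policy = ToolPolicy(READ_ONLY)
print(policy.check("calendar.read"))   # explicitly allowed
print(policy.check("files.delete"))    # not listed, so denied
```

The key design choice is the default: anything the policy does not mention is refused, so a hijacked agent cannot reach destructive tools simply because nobody thought to forbid them.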
That’s not to mention the issues that could arise if these AI agents are offloaded to a remote server. The AI should be totally local to your machine in 100% of cases; no remote AI should have control over your machine and data.
In any case, it’ll be interesting to see how agentic AI plays out. I expect to see many news stories about AI agents deleting files or sending them off somewhere remote, and especially about agents being exploited by malicious actors, since this opens up entire classes of attacks we previously couldn’t conceive of.