
Introducing mcp-apps-sdk

By Vincent Liew and the General Intelligence Labs Team
MCP · Apps SDK · open-source · AI

OpenAI has recently introduced a major new capability: apps integrated directly into ChatGPT. These apps allow users to seamlessly interact with external services, such as Figma, Spotify, and Zillow, without leaving the chat. Within a conversation, users can perform actions like booking a ride, ordering food, checking availability, or tracking a delivery, all through interactive app interfaces that feel native to ChatGPT.

ChatGPT using the Zillow app to display real estate listings directly within the conversation.

The Apps SDK works alongside the open-standard Model Context Protocol (MCP), which defines how ChatGPT communicates with external services. With MCP, your app's backend acts as a server that advertises "tools" (including input/output schemas and metadata) and responds to model-initiated calls. The Apps SDK layers on top of this by adding a browser-embedded UI: developers can pair their MCP tools with interactive web components that render inline in ChatGPT, enabling a richer experience (dashboards, forms, previews) compared to simply returning text or static images.
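To make that concrete, here is a minimal sketch of what such an MCP server might look like using the official TypeScript MCP SDK. The tool name, input schema, and result payload are illustrative, and the `openai/outputTemplate` metadata key, which links the tool to its web component, is shown as we understand it from the Apps SDK examples; the exact wiring may vary by SDK version.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "listings-app", version: "0.1.0" });

// A hypothetical listing-search tool. The "openai/outputTemplate" metadata
// tells the host which web component to render for this tool's result.
// In a full app you would also register a ui://widget/listings.html resource
// that serves the component bundle.
server.registerTool(
  "search_listings",
  {
    title: "Search listings",
    description: "Find real estate listings for a city.",
    inputSchema: { city: z.string() },
    _meta: { "openai/outputTemplate": "ui://widget/listings.html" },
  },
  async ({ city }) => ({
    // structuredContent is what the embedded UI reads;
    // content is the plain-text fallback the model sees.
    structuredContent: {
      listings: [{ address: `123 Main St, ${city}`, price: 850_000 }],
    },
    content: [{ type: "text", text: `Found 1 listing in ${city}.` }],
  })
);

// stdio keeps the sketch simple; a deployed app would use an HTTP transport.
await server.connect(new StdioServerTransport());
```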

To understand how developers build and run these apps, it helps to look at how an Apps SDK app is structured under the hood. An app built with the Apps SDK combines two main pieces: a backend that speaks MCP, and a frontend web component that provides the in-chat experience. The frontend is typically a React-based component that runs inside a secure iframe within the ChatGPT interface. It communicates with the host page through the window.openai JavaScript API, allowing it to receive updates and send user actions back to ChatGPT. When a user triggers the app in conversation, ChatGPT passes structured data from the chat to the app's MCP server, which can perform logic, like querying an API or fetching data, and return structured results. Those results are then rendered by the web component as an interactive UI, so the user can explore, edit, or confirm actions without ever leaving the chat.
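Inside the iframe, the component reads the tool's structured result and can send actions back through the host bridge. Below is a minimal React sketch; the `toolOutput` and `callTool` fields on `window.openai` follow the Apps SDK documentation as we understand it, but the typing and the data shape are our own simplification and assume the hypothetical `search_listings` tool above.

```tsx
// Minimal typing for the host bridge exposed to the iframe. The real
// window.openai surface is richer; these are only the fields used here.
declare global {
  interface Window {
    openai?: {
      toolOutput?: { listings?: { address: string; price: number }[] };
      callTool?: (name: string, args: Record<string, unknown>) => Promise<unknown>;
    };
  }
}

export function ListingsWidget() {
  // Structured result passed from the MCP server via the host.
  const listings = window.openai?.toolOutput?.listings ?? [];
  return (
    <ul>
      {listings.map((listing) => (
        <li key={listing.address}>
          {listing.address} (${listing.price.toLocaleString()})
          {/* Send a user action back to the MCP server through the host. */}
          <button
            onClick={() =>
              window.openai?.callTool?.("search_listings", { city: "Seattle" })
            }
          >
            Find similar
          </button>
        </li>
      ))}
    </ul>
  );
}
```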

Currently, apps built with the Apps SDK can only run inside ChatGPT, a closed-source environment. This limits developers who want to support these apps in their own AI products or chat interfaces, and it makes testing cumbersome: end-to-end testing today requires using ChatGPT itself. To address this, we built mcp-apps-sdk, an open-source library that allows Apps SDK apps to run anywhere. With it, developers can embed the same apps designed for ChatGPT directly into their own chatbots, assistants, or AI platforms, and test them locally, with no internet connection or external dependencies required. We believe this will help create a more open, flexible, and interoperable ecosystem where Apps SDK apps can truly run anywhere.

Using our mcp-apps-sdk to display OpenAI's example solar system app inside a simple chat interface implemented using ai-sdk.

mcp-apps-sdk is designed to be lightweight and easy to integrate. Its core component, AssistantAppEmbed, handles rendering an Apps SDK app's UI inside your own chat or web interface. It takes care of creating the iframe, wiring up communication, and synchronizing data between your app and the embedded MCP tool. With just a few lines of React, you can drop in a fully functional Apps SDK app, complete with interactive UI and context exchange, anywhere in your product. We're excited to see what developers build as the ecosystem of MCP-based apps grows beyond ChatGPT.
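As a rough usage sketch, embedding an app could look like the following. The prop names and import path here are hypothetical, not the library's documented API; the point is that the component owns the iframe and the window.openai bridge, and you hand it the MCP server to talk to and the tool result to render.

```tsx
import { AssistantAppEmbed } from "mcp-apps-sdk"; // assumed package name

// Hypothetical wrapper: render an Apps SDK widget for one assistant message.
// Prop names below are illustrative only.
export function AppMessage({ toolResult }: { toolResult: unknown }) {
  return (
    <AssistantAppEmbed
      serverUrl="http://localhost:3001/mcp" // hypothetical local MCP endpoint
      toolOutput={toolResult}
    />
  );
}
```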

Join our talk in the AG2 community: https://luma.com/8adoambl?tk=3Zrn7W