ChatGPT Workspace Agents: OpenAI's Enterprise Agent Platform
On April 22, 2026, OpenAI announced ChatGPT Workspace Agents — and so did Google (Gemini Enterprise Agent Platform) and Anthropic (Claude Routines). The timing was not coincidental. Enterprise agentic AI is the next major battleground, and all three companies chose the same week to plant their flags.
This guide breaks down what Workspace Agents actually are, what they can do today, how to set them up, and how they stack up against the competition. Whether you're evaluating them for your team or trying to understand the broader enterprise AI shift, here's what you need to know.
Why This Matters
Custom GPTs — OpenAI's previous answer to enterprise customization — were fundamentally single-user tools. You could configure one, share a link, and people could use it, but there was no shared memory, no team-level access control, no ability to trigger the agent automatically, and no cloud persistence between sessions.
Workspace Agents fix all of that. They're designed to work the way enterprise software should: shared by a team, running in the cloud, triggered on a schedule or by events, and governed by admin controls. It's less of a product update and more of an architectural shift in how OpenAI thinks about ChatGPT's role inside organizations.
For developers and IT decision-makers, the implication is real: AI agents are moving from experimental add-ons to managed enterprise infrastructure, and the governance, compliance, and integration questions that come with that are now front and center.
What Are Workspace Agents?
Workspace Agents are shared, long-running agents that run inside your organization's ChatGPT workspace. Unlike a personal GPT or a one-off chat session, a Workspace Agent:
- Runs in the cloud — it keeps working even when no one is actively in the chat window
- Is shared across a team — build it once, deploy it to everyone who needs it
- Connects to third-party tools — Slack, Google Drive, Gmail, Salesforce, Notion, and more
- Runs on triggers or schedules — respond to incoming Slack messages, run reports every Monday at 8am, process new form submissions automatically
- Maintains organizational guardrails — admins control which tools agents can access and who can build or modify them
Under the hood, they're powered by Codex, OpenAI's code-generation system, which handles the planning, decision-making, and multi-step reasoning that makes longer workflows possible.
The research preview launched on April 22, 2026 for ChatGPT Business, Enterprise, Edu, and Teachers plans. It's free until May 6, 2026, after which credit-based pricing takes effect.
Core Concepts
Triggers: How Agents Start Working
Every Workspace Agent needs a trigger — the event that tells it to begin a task. Three trigger types exist or are on the near-term roadmap:
Human-triggered — A teammate sends the agent a message (directly in ChatGPT or via Slack). The agent interprets the request, takes action, and reports back.
Schedule-triggered — The agent runs at a fixed time or interval. Useful for recurring tasks: daily summaries, weekly reports, automated data pulls.
Event-triggered — Not yet live at launch, but on the roadmap: triggers fired by incoming messages, form submissions, and webhook events from connected tools.
In the builder, you set the trigger under the Schedule section and optionally configure a Slack channel where the agent listens for incoming work.
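Conceptually, the trigger model is a dispatch pattern: each trigger type hands the agent a task in a different way. Here's a toy sketch in plain Python — this is not the product's actual API, and the trigger names and payload shapes are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    kind: str       # "human", "schedule", or "event"
    payload: dict   # message text, schedule spec, or event data

def run_agent(task: str) -> str:
    # Stand-in for the agent's actual reasoning and tool-use loop.
    return f"completed: {task}"

# Each trigger kind extracts the task from its payload differently.
HANDLERS: dict[str, Callable[[dict], str]] = {
    "human":    lambda p: run_agent(p["message"]),
    "schedule": lambda p: run_agent(p["recurring_task"]),
    "event":    lambda p: run_agent(f"handle {p['event_type']} event"),
}

def dispatch(trigger: Trigger) -> str:
    return HANDLERS[trigger.kind](trigger.payload)

print(dispatch(Trigger("schedule", {"recurring_task": "weekly pipeline report"})))
# → completed: weekly pipeline report
```

The point of the sketch: whichever way an agent is started, the downstream work is the same reasoning loop — only the entry point differs.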
Tools and Integrations
At launch, Workspace Agents ship with native connectors for:
- Slack — both as an interface (users talk to the agent via Slack) and as a trigger channel
- Google Workspace — Gmail, Drive, Docs, Sheets, and Calendar
- Salesforce — read/write CRM data, update records, generate reports
- Notion — read from and write to your Notion workspace
- Atlassian Rovo — Jira and Confluence integration via Atlassian's agent platform
- Microsoft apps — OneDrive, Teams, Outlook (via Microsoft connectors)
You can also add custom MCPs (Model Context Protocol servers) through the agent builder, which means any tool that exposes an MCP interface can be wired into a Workspace Agent. This is significant: it puts OpenAI's agent infrastructure on the same interoperability track as Anthropic and Google, both of whom have invested heavily in MCP.
Beyond external tools, agents have access to your organization's Company Knowledge — internal files and documents that admins upload to the workspace — and can use custom skills to perform specific, repeatable subtasks.
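In spirit, an MCP server is a process that advertises named tools with descriptions and executes them on request. The toy illustration below captures that contract using only the standard library — the real protocol is JSON-RPC over stdio or HTTP with capability negotiation, which the official MCP SDKs handle for you, so treat this as a conceptual sketch, not a working connector:

```python
import json

# A toy "MCP-like" tool registry: tools are advertised by name
# with a description and invoked via a JSON request.
TOOLS = {}

def tool(name: str, description: str):
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("list_open_prs", "List open pull requests for a repo")
def list_open_prs(repo: str) -> list[str]:
    # Stand-in data; a real server would call the GitHub API here.
    return [f"{repo}#101: fix login bug", f"{repo}#102: add retry logic"]

def handle_request(raw: str) -> str:
    req = json.loads(raw)
    result = TOOLS[req["tool"]]["fn"](**req["args"])
    return json.dumps({"result": result})

print(handle_request('{"tool": "list_open_prs", "args": {"repo": "acme/app"}}'))
```

Because the contract is just "named tools plus structured input/output," any internal system you can wrap this way becomes reachable from an agent — which is why MCP support makes the integration surface open-ended.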
Persistence and Memory
Unlike a standard chat session, a Workspace Agent retains context across runs. It can remember the outcome of a previous task and use that to inform the next one. This matters for workflows that span multiple steps over time — "check whether the Q1 report was filed last week, and if so, prepare the Q2 template."
OpenAI has not published detailed technical specs for how agent memory is stored or scoped, but the practical behavior is that agents maintain working state within a workflow chain.
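To make the "check last week, act this week" pattern concrete, here's a minimal sketch of state persisted between runs. The storage mechanism is entirely an assumption — as noted above, OpenAI hasn't published how agent memory is actually stored or scoped — but the observable behavior is similar:

```python
import json
import pathlib
import tempfile

# Assumed storage: a JSON state file surviving between runs. The real
# persistence mechanism is unpublished; only the behavior is modeled.
STATE_FILE = pathlib.Path(tempfile.gettempdir()) / "agent_state_demo.json"

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"q1_report_filed": False}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state))

# Run 1: the agent records that the Q1 report was filed.
state = load_state()
state["q1_report_filed"] = True
save_state(state)

# Run 2 (later): the agent consults prior state to pick its next step.
state = load_state()
next_step = "prepare Q2 template" if state["q1_report_filed"] else "chase Q1 report"
print(next_step)
# → prepare Q2 template
```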
How to Build a Workspace Agent
Building a Workspace Agent doesn't require code. Here's the general flow:
Step 1 — Access the builder
In ChatGPT, click Agents in the left sidebar (available to eligible plans). Select New Agent.
Step 2 — Describe the workflow
Type a plain-English description of what you want the agent to do. ChatGPT will guide you through a conversational setup — asking clarifying questions and suggesting appropriate tools and configurations.
Step 3 — Add tools and data sources
Connect the integrations your workflow needs. Each connector requires authorization through your organization's connected apps. Admins must pre-approve which connectors are available to agent builders.
Step 4 — Set the trigger
Choose how the agent starts: manual (on demand), scheduled (cron-style timing), or Slack-based (listens to a specified channel).
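The builder handles scheduling for you, but it's worth seeing what "cron-style timing" means in practice. This sketch computes the next firing time for a "every Monday at 8am" schedule — standard calendar arithmetic, nothing specific to the product:

```python
from datetime import datetime, timedelta

def next_monday_8am(now: datetime) -> datetime:
    """Next occurrence of Monday 08:00 strictly after `now` --
    the computation a cron-style trigger performs on each tick."""
    days_ahead = (0 - now.weekday()) % 7  # Monday is weekday 0
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=8, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=7)  # already past this week's slot
    return candidate

# A Wednesday afternoon resolves to the following Monday at 08:00.
print(next_monday_8am(datetime(2026, 4, 22, 15, 30)))
# → 2026-04-27 08:00:00
```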
Step 5 — Configure instructions
Write the agent's system instructions — the persistent context it carries into every run. This is analogous to the system prompt in a standard API call, but persists across all instances of that agent.
Step 6 — Test and iterate
Run the agent in test mode. The conversation history shows exactly what actions it took and why, which helps you refine instructions without guesswork. OpenAI recommends treating this as an iterative loop: run, review, adjust, repeat.
Step 7 — Share with your team
Once satisfied, share the agent with specific user groups or the entire workspace. Teammates access it directly in ChatGPT or via the Slack integration.
Enterprise Controls and Compliance
Workspace Agents come with a governance layer that Custom GPTs never had.
Admin controls — ChatGPT Enterprise and Edu admins can:
- Restrict which tools and connectors agents can access, per user group
- Control who can build, share, and modify agents
- Suspend any agent immediately if a problem arises
- View all agents deployed across the organization in the admin console (coming soon)
Compliance API — Gives admins programmatic visibility into every agent's configuration, update history, and run log. This feeds into existing audit workflows and third-party compliance tooling. For organizations under GDPR, HIPAA, or SOC 2 obligations, this is a hard requirement — and OpenAI built it in rather than leaving it as an afterthought.
Immutable audit logs — Every agent action is logged. Run histories can't be deleted by users.
This is meaningfully different from what was available with Custom GPTs, which had essentially no enterprise audit surface.
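What might a team actually do with programmatic run-log access? A typical first audit script surfaces every agent action against sensitive data sources. OpenAI hasn't published the Compliance API's schema, so the record format and field names below are pure assumptions — the sketch shows the shape of the workflow, not the real API:

```python
import json

# Hypothetical run-log records; field names are assumptions, since
# the Compliance API's actual schema is unpublished.
run_log = json.loads("""[
  {"agent": "pipeline-report", "connector": "salesforce", "action": "read"},
  {"agent": "ticket-triage",   "connector": "zendesk",    "action": "write"},
  {"agent": "pr-digest",       "connector": "github-mcp", "action": "read"}
]""")

# Connectors this organization classifies as touching customer data.
SENSITIVE = {"salesforce", "zendesk"}

# The audit pass: flag every agent action against a sensitive source.
flagged = [r for r in run_log if r["connector"] in SENSITIVE]
for record in flagged:
    print(f"{record['agent']}: {record['action']} on {record['connector']}")
```

Feeding output like this into existing SIEM or compliance tooling is exactly the kind of integration the Compliance API is described as enabling.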
Pricing
The credit-based model begins May 6, 2026. OpenAI hasn't published per-credit pricing at time of writing, but here's what's confirmed:
- Free until May 6, 2026 (research preview period)
- Token-based credit consumption — credits are consumed based on model token usage, not a flat per-message rate
- No minimum commitment once paid tier begins
- Eligible ChatGPT Business workspaces can earn up to $500 in credits through a launch promotion tied to Codex seat adoption
For organizations evaluating total cost, the credit model means cost scales with usage rather than seat count — which benefits low-frequency workflows but can surprise teams running agents at high volume.
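A back-of-envelope estimate makes the usage-scaling point concrete. Every number below is a placeholder — no rate card exists yet — but the arithmetic shows why run frequency, not seat count, drives cost:

```python
# Placeholder figures: OpenAI hasn't published per-credit pricing,
# so both constants here are assumptions for illustration only.
TOKENS_PER_RUN = 12_000        # assumed average tokens per agent run
CREDITS_PER_1K_TOKENS = 0.5    # placeholder rate

def monthly_credits(runs_per_week: int) -> float:
    return runs_per_week * 4 * TOKENS_PER_RUN / 1000 * CREDITS_PER_1K_TOKENS

# A once-a-week report agent vs. an agent triaging 50 tickets a day.
print(monthly_credits(1))        # → 24.0 credits/month
print(monthly_credits(50 * 7))   # → 8400.0 credits/month
```

Same product, a 350x cost difference — which is why high-volume deployments should instrument credit consumption during the free preview rather than discover it on the first invoice.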
How It Compares
| Feature | ChatGPT Workspace Agents | Gemini Enterprise Agent Platform | Claude Routines (Anthropic) |
|---|---|---|---|
| Launch Date | April 22, 2026 | April 22, 2026 | April 2026 |
| Primary Audience | Non-technical teams, business users | Developer teams + Google Cloud shops | Developers, power users |
| Execution Model | Cloud (Codex-powered) | Cloud (Vertex AI / Agent Runtime) | Anthropic's servers (async) |
| Key Integrations | Slack, Google Workspace, Salesforce, Notion | Google Workspace, 200+ model connectors | MCP-based (tool-agnostic) |
| Scheduling | Yes (cron-style) | Yes (multi-day runtime) | Yes |
| MCP Support | Yes (custom MCPs) | Yes (native) | Yes (native — originated MCP) |
| Governance / Audit | Compliance API, immutable logs | Agent Identity, Registry, Gateway | Standard enterprise controls |
| Coding / Dev Focus | Secondary (Codex available separately) | Secondary | Primary (Claude Code + Agent SDK) |
| Best Fit | Teams already on ChatGPT Enterprise | Google Cloud-first orgs | Developer-led orgs, technical workflows |
The clearest pattern across all three platforms: each company is extending where it already has distribution. OpenAI wins where ChatGPT is already the dominant interface. Google wins where Google Workspace is the backbone. Anthropic wins where developers drive the decision.
Practical Applications
Sales team: weekly pipeline report
An agent connects to Salesforce, pulls open opportunities updated in the past 7 days, formats them into a summary, and posts to the #sales Slack channel every Monday at 8am. No analyst involvement needed.
Engineering: PR review digest
The agent monitors a GitHub repo (via custom MCP), compiles a daily list of open pull requests with status and reviewer assignment, and posts to the engineering Slack channel each morning.
Customer success: ticket triage
The agent monitors an incoming Zendesk queue (via MCP), categorizes tickets by priority and topic, drafts initial responses for tier-1 issues, and escalates complex ones with a summary note to the team's Slack channel.
Finance: expense report aggregation
The agent pulls expense data from Google Sheets weekly, cross-references against budget allocations, flags anomalies, and sends a formatted digest to the finance team via email.
Each of these workflows would previously have required either custom code, a dedicated automation platform (Zapier, Make, n8n), or significant manual work. Workspace Agents bring them into a single interface with no code required.
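For contrast, here's roughly what the first example — the weekly pipeline report — looked like the old way: a scheduled script someone had to write, host, and maintain. Every function below is a stand-in, not a real Salesforce or Slack client call:

```python
from datetime import date, timedelta

def fetch_opportunities(since: date) -> list[dict]:
    # Stand-in for a Salesforce API query filtered by last-modified date.
    return [{"name": "Acme renewal", "stage": "Negotiation", "amount": 42_000}]

def format_digest(opps: list[dict]) -> str:
    lines = [f"- {o['name']} ({o['stage']}): ${o['amount']:,}" for o in opps]
    return "Pipeline update:\n" + "\n".join(lines)

def post_to_slack(channel: str, text: str) -> None:
    print(f"[{channel}] {text}")  # stand-in for a Slack webhook call

# The weekly job: pull the last 7 days, format, post.
opps = fetch_opportunities(date.today() - timedelta(days=7))
post_to_slack("#sales", format_digest(opps))
```

The code itself is simple; the hidden cost is everything around it — hosting, credentials, error handling, and ownership when the author leaves. That maintenance burden is what the no-code agent model removes.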
Common Mistakes to Avoid
Skipping admin setup before agent building
Agent builders can only connect to tools that admins have pre-approved. If you try to add Salesforce as a connector before your admin has enabled it, the build will fail. Set up admin permissions first.
Writing vague instructions
Workspace Agents interpret instructions at runtime. Vague instructions ("handle customer issues") produce inconsistent behavior. Specific instructions ("read the latest 10 Zendesk tickets from the past 24 hours and categorize each as billing, technical, or general") produce consistent results.
Assuming agents handle errors gracefully by default
If a connected tool returns an error or an expected data source is empty, agents will often report back to the triggering channel — but they don't retry automatically. Design workflows with a fallback instruction: "If the Salesforce query returns no results, post 'No new opportunities this week' to #sales."
Ignoring the compliance API for sensitive workflows
If your agent touches customer data or financial records, configure compliance API monitoring before deploying. Don't leave audit setup as a post-launch task.
Treating agents as chatbots
Workspace Agents are not conversational tools — they're workflow executors. The mental model shift matters: you're configuring a process, not training a chatbot. Instructions should describe the task, not a persona.
FAQ
Q: Are Workspace Agents available on ChatGPT Plus or Pro?
No. At launch (April 22, 2026), Workspace Agents are available in research preview for ChatGPT Business, Enterprise, Edu, and Teachers plans only. Consumer tiers (Free, Plus, Pro) are excluded from the initial rollout.
Q: Can I use a Workspace Agent with my own API key or model?
Not at launch. Workspace Agents run on OpenAI's infrastructure using Codex. There's no bring-your-own-model option in the current preview.
Q: How do Workspace Agents differ from the OpenAI Agents SDK?
They're aimed at different audiences. The OpenAI Agents SDK is a developer framework for building custom agent applications programmatically — you write Python code and call the API. Workspace Agents are a no-code product built into ChatGPT for non-technical business users. The SDK gives more control; Workspace Agents give more accessibility.
Q: Can Workspace Agents interact with each other?
Not yet. Multi-agent orchestration between Workspace Agents isn't part of the launch feature set. OpenAI's roadmap hints at expanded agent-to-agent capabilities, but nothing is confirmed. For multi-agent coordination today, the Agents SDK or third-party orchestration frameworks are the path forward.
Q: What happens after the free trial ends on May 6?
Credit-based pricing begins. Credits are consumed based on token usage rather than per-message. OpenAI hasn't published a public rate card yet, but has confirmed there's no minimum commitment — you pay for what you use.
Q: How does this compare to Zapier or Make for automation?
Zapier and Make are trigger-action tools: if X happens, do Y. Workspace Agents handle tasks that require reasoning and judgment — interpreting ambiguous input, making decisions mid-workflow, handling variations in data format. The two aren't mutually exclusive. Many teams will use traditional automation for deterministic flows and Workspace Agents for workflows that need language understanding.
Key Takeaways
- ChatGPT Workspace Agents replace Custom GPTs for enterprise use cases — shared, persistent, governable, and connected to real business tools.
- April 22, 2026 marked an industry-wide shift: OpenAI, Google, and Anthropic all launched enterprise agent products simultaneously, signaling that agentic AI is becoming standard enterprise infrastructure.
- The Slack integration is a significant UX advantage — most enterprise teams already live in Slack, and Workspace Agents slot directly into that workflow without requiring anyone to change tools.
- MCP support means the tool ecosystem is open-ended and will grow as more vendors build MCP connectors.
- Enterprise governance is built in — Compliance API, immutable audit logs, and granular admin controls make this deployable in regulated environments in a way Custom GPTs never were.
- Credit pricing scales with usage — good for light workflows, worth monitoring closely for high-volume deployments.
For teams already on ChatGPT Enterprise or Business, Workspace Agents are worth exploring now while they're free. The free period ends May 6, 2026 — that's a useful window to test actual workflows and estimate credit consumption before committing to paid usage.
ChatGPT Workspace Agents are the most accessible enterprise agent product on the market right now — no code required, Slack-native, and backed by real governance tooling. They won't replace developer-built workflows, but they fill the gap between "someone should automate that" and "our engineering team has capacity for it." If your organization runs on ChatGPT Enterprise, the free trial period is reason enough to start experimenting today.
Related reading: OpenAI Agents SDK April 2026 — the developer framework powering production agent applications · Google Gemini Enterprise Agent Platform — the competing platform that launched the same day · AI Agent Frameworks Compared — broader comparison of agent orchestration options for developers
Prefer a deep-dive walkthrough? Watch the full video on YouTube.