An operating system isn't defined by what you see. It's defined by what sits beneath. An OS coordinates compute, manages storage, brokers connectivity, hosts applications, and arbitrates trust between the user and everything else the machine can touch. For four decades, those layers have been owned by a handful of incumbents: Windows, macOS, and Linux, later joined by iOS and Android. Interfaces on top changed (GUI to mobile to browser) but the architecture underneath stayed recognizably the same. AI is now accruing those same layers, in roughly the same order, at unusual speed. The distinguishing feature is that the thing being orchestrated is no longer a static application. It is an agent: a system that plans, decides, acts, and adapts with minimal human direction.
The compute layer came first. Frontier models reason, plan, and sequence multi-step work: the cognitive equivalent of a CPU scheduler. The storage layer followed: context windows stretched from thousands to millions of tokens, and agents gained persistent access to local files and cloud drives. Connectivity arrived through the Model Context Protocol (MCP), which Anthropic open-sourced in late 2024 as a common interface between agents and external tools. An application layer is now forming on top, built from plugins, skills, and sub-agents that package agentic capabilities into reusable, composable units.
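The core idea MCP standardizes, that a tool exposes a name, a description, and a handler which any agent can discover and invoke, can be sketched in a few lines. This is a simplified illustrative analogue, not the real MCP SDK: the actual protocol runs over JSON-RPC with formal schemas, and the registry, tool names, and handlers below are invented for the example.

```python
# Illustrative sketch of the connectivity pattern MCP standardizes:
# tools register themselves, an agent lists what is available, then
# calls a tool by name. Not the real MCP API.
from typing import Callable

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, tuple[str, Callable]] = {}

    def register(self, name: str, description: str, handler: Callable) -> None:
        self._tools[name] = (description, handler)

    def list_tools(self) -> list[str]:
        # What an agent sees when it asks "what can I do here?"
        return [f"{name}: {desc}" for name, (desc, _) in self._tools.items()]

    def call(self, name: str, **kwargs):
        _, handler = self._tools[name]
        return handler(**kwargs)

registry = ToolRegistry()
registry.register("read_file", "Read a local file",
                  lambda path: open(path).read())
registry.register("word_count", "Count words in a piece of text",
                  lambda text: len(text.split()))

print(registry.list_tools())
print(registry.call("word_count", text="agents call tools by name"))
```

The point of the pattern is that the agent and the tool author never need to know about each other in advance; the registry (in MCP, the server's tool listing) is the only shared contract.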
Anthropic's Claude Cowork, launched in January 2026, turns Claude into a desktop agent that reads, modifies, and organizes local files, connects to productivity apps through MCP, and runs multi-step tasks inside a sandboxed virtual machine. Point it at a folder, describe the goal, and it plans and executes (including browser control) without being coordinated step by step.
OpenAI is pursuing the same agentic stack from the browser inward. ChatGPT Atlas embeds an agent inside a Chromium browser that can view pages, click, and complete tasks across logged-in sessions. OpenAI recently announced a unified desktop "superapp" merging ChatGPT, Codex, and Atlas into a single application, with Codex running parallel coding agents in the background while Atlas handles the web layer.
Perplexity Computer, launched in February 2026, takes a different architectural bet. Rather than fronting a single model with an interface, Computer orchestrates across multiple AI models, spinning up specialized sub-agents to handle the components of a workflow and routing each task to whichever model is best suited for it. Perplexity describes it as a general-purpose digital worker capable of executing workflows that run for hours or even months, connecting to over four hundred applications through OAuth. A companion "Personal Computer" product, released at Perplexity's inaugural Ask 2026 developer conference, bridges the cloud agent to a local device for persistent access to files and sessions.
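The routing pattern described above, decomposing a workflow and sending each piece to the best-suited model, reduces to a simple dispatch decision. The sketch below is hypothetical: the model names, capability scores, and scoring rule are invented for illustration and say nothing about how Perplexity actually routes.

```python
# Minimal sketch of task-to-model routing: each task declares the
# capability it needs, and the router picks the model whose score
# for that capability is highest. All names and scores are made up.
MODEL_STRENGTHS = {
    "model-a": {"coding": 0.9, "research": 0.6, "writing": 0.5},
    "model-b": {"coding": 0.5, "research": 0.9, "writing": 0.7},
    "model-c": {"coding": 0.4, "research": 0.6, "writing": 0.9},
}

def route(task: str, capability: str) -> str:
    """Return the model best suited to the capability this task needs."""
    return max(MODEL_STRENGTHS, key=lambda m: MODEL_STRENGTHS[m][capability])

workflow = [
    ("implement the parser", "coding"),
    ("survey prior art", "research"),
    ("draft the summary", "writing"),
]

for task, capability in workflow:
    print(f"{task} -> {route(task, capability)}")
```

In a production orchestrator the scoring table would be replaced by benchmarks, cost, and latency signals, but the architectural shape, a router in front of interchangeable models, is the same.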
The most provocative entrant is the open-source OpenClaw project, paired with NVIDIA's NemoClaw stack. Jensen Huang's framing at launch was direct: if Mac and Windows are the operating systems for the personal computer, OpenClaw aims to be the operating system for personal AI. NemoClaw bundles OpenClaw with NVIDIA's OpenShell runtime and Nemotron models to run always-on, self-evolving agents on local hardware with policy-enforced guardrails.
The product here is not a chatbot or an application; it is a distribution layer for autonomous agents. Each of these systems is competing for the same architectural position: the layer that agents live on, and through which users interact with everything else.
Alten Capital invests in technology services businesses. Reach out to explore potential partnerships.