Claude Can Use Your Computer Now. That's a Stranger Sentence Than It Sounds.

Anthropic shipped something this week that’s easy to gloss over if you’re already deep in the AI agent world, and genuinely unsettling if you’re not: Claude can use your computer. Not metaphorically. It can move your mouse, click buttons, open applications, fill in spreadsheets, navigate your browser. Anything you’d do sitting at your desk, it can do while you’re not.

Read that again slowly if you need to. It’s a strange thing to get used to.

What it actually is

The feature is called computer use, and it lives inside two products — Claude Cowork (for general knowledge work) and Claude Code (for development). It’s a research preview, Mac only, available to Pro and Max subscribers.

How to turn it on

It’s off by default. To enable it:

  1. Open Claude Desktop and go to Settings > General
  2. Find the computer use toggle and switch it on
  3. Grant two macOS permissions when prompted: Accessibility (so Claude can click, type, and scroll) and Screen Recording (so Claude can see your screen)
  4. Keep your Mac awake and Claude Desktop running

That’s it. The first time Claude needs to access a new application during a session, it’ll ask permission. You approve or deny per-app, per-session.

For Dispatch (the mobile companion), pair your phone through the Claude app. Then you can message Claude from your phone and it operates your Mac remotely.

How it decides what to do

The implementation is more thoughtful than the headline suggests. Claude follows a hierarchy when executing tasks:

First, it checks for direct integrations — connectors to services like Slack, Google Calendar, or GitHub. If a connector exists, it uses the API. Fast, reliable, no screen-scraping necessary.

Second, if no connector exists, it opens Chrome and navigates the web interface. Most SaaS tools have a web UI, and Claude can operate them like a person would.

Third, and only as a last resort, it falls back to system-level control — moving the mouse, clicking buttons, reading what’s on screen. This is the part that feels like science fiction and the part most prone to mistakes.

This ordering matters. It means Claude isn’t randomly clicking around your desktop as a first instinct. It’s trying the most reliable path before resorting to the most fragile one. But the fact that the third tier exists at all is what makes this different from everything that came before.
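The three-tier fallback described above can be sketched in a few lines of Python. This is an illustration, not Anthropic’s implementation — the function names (`drive_web_ui`, `control_screen`) and the task/connector shapes are hypothetical stand-ins:

```python
# Hedged sketch of the tiered execution strategy: try the most reliable
# path first, fall back to progressively more fragile ones.

def drive_web_ui(task):        # hypothetical: browser automation via Chrome
    return f"opened web UI for {task['service']}"

def control_screen(task):      # hypothetical: raw mouse/keyboard/screen control
    return f"clicked through {task['service']} on screen"

def execute(task, connectors):
    """Pick the highest-reliability tier available for this task."""
    # Tier 1: a direct API connector (e.g. Slack, Google Calendar, GitHub)
    connector = connectors.get(task["service"])
    if connector is not None:
        return ("connector", connector(task))

    # Tier 2: no connector, but the service has a web UI Claude can drive
    if task.get("has_web_ui"):
        return ("web", drive_web_ui(task))

    # Tier 3: last resort -- system-level screen control
    return ("screen", control_screen(task))
```

The point of the ordering is visible in the code: screen control is unreachable unless both better tiers are ruled out first.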

Dispatch, or: your phone becomes a remote control

The companion feature, Dispatch, lets you message Claude from your phone and have it execute tasks on your Mac. You’re on the train, you realize you forgot to pull the analytics report. You tell Claude. When you get home, it’s done.

Dispatch also supports recurring tasks — “every weekday at 8am, check email for urgent items and summarize them.” Your computer becomes a worker that shows up before you do.
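A recurring task like “every weekday at 8am” reduces to a simple schedule predicate. Here is a minimal stdlib sketch — not Dispatch’s actual scheduler, just the logic such a feature implies:

```python
from datetime import datetime

def due(now: datetime, hour: int = 8) -> bool:
    """True when a 'weekday at 8am' recurring task should fire."""
    is_weekday = now.weekday() < 5   # Monday=0 ... Friday=4
    return is_weekday and now.hour == hour and now.minute == 0

# A scheduler would poll roughly once a minute and trigger the task:
# if due(datetime.now()): summarize_urgent_email()   # hypothetical task
```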

The security model is worth noting: your files and data stay on your machine. Only the chat messages travel through the network, encrypted. Claude operates locally. Whether that’s reassuring or more unsettling is a matter of perspective.

The hierarchy of trust

What’s interesting about computer use isn’t the technology — screen control has existed in various forms for years. What’s interesting is the trust model it implies.

We’ve been gradually expanding what we let AI agents do, and each expansion felt manageable because it was scoped. Autocomplete suggests a line of code. A chatbot answers a question. Claude Code edits files in your repo, but asks permission first. Each step had a clear boundary.

Computer use doesn’t have the same kind of boundary. “Anything you’d do sitting at your desk” is not a scope — it’s the absence of one. Anthropic has added permission gates (Claude asks before accessing new applications) and prompt injection detection, and they’re running Cowork in a sandboxed VM. These are real safeguards. But the conceptual shift is larger than any individual guardrail.

When an agent edits a file, you can diff it. When an agent runs a command, you can read the output. When an agent clicks through your applications, navigates your browser, and fills in forms — the audit trail gets fuzzier. You’re trusting not just that the agent will do the right thing, but that you’ll be able to tell what it did.

Who this is actually for

The honest answer is that computer use solves a real problem for a specific kind of work: the long tail of applications that don’t have APIs.

If your job involves pulling data from one web app, reformatting it, and pasting it into another — and neither app has a proper integration — you’ve been doing that manually. Computer use automates the manual part. Not elegantly. Not through a clean API. Through the same messy, click-through-the-UI process you’ve been doing yourself.

This is genuinely useful. Dashboard-to-spreadsheet pipelines. Filling out forms in legacy systems. Testing GUIs that resist automation. The work nobody wants to do but somebody has to.
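Mechanically, the click-through-the-UI automation described above is a perception–action loop: look at the screen, pick an action, perform it, repeat. A hedged abstract sketch, with every callback a hypothetical stand-in for screenshots and model calls:

```python
# Abstract sketch of a screen-control agent loop. 'observe', 'decide', and
# 'act' are hypothetical stand-ins for taking a screenshot, querying a
# model, and issuing a click or keystroke.

def run_agent(goal, observe, decide, act, max_steps=10):
    """Loop: observe the screen, choose an action, perform it, repeat."""
    history = []
    for _ in range(max_steps):
        screen = observe()                       # e.g. a screenshot
        action = decide(goal, screen, history)   # model proposes next step
        if action == "done":
            return history
        act(action)                              # e.g. click / type / scroll
        history.append(action)
    raise TimeoutError("agent did not finish within max_steps")

# Toy environment standing in for a dashboard-to-spreadsheet task:
state = {"copied": False}
steps = run_agent(
    "copy the dashboard number into the spreadsheet",
    observe=lambda: state,
    decide=lambda goal, screen, hist: "done" if screen["copied"] else "copy_value",
    act=lambda action: state.update(copied=True),
)
```

The `max_steps` cap is the interesting design choice: unlike a script, an agent loop has no natural termination, so a budget is what stands between “task finished” and “agent clicking forever.”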

For developers specifically, the integration with Claude Code means autonomous pull request workflows — the agent makes changes, runs tests, pushes the PR, all without you sitting there approving each step. Whether you’re comfortable with that depends on your codebase, your test coverage, and your personal threshold for automation anxiety.

The market context tells its own story

Anthropic isn’t doing this in a vacuum. Perplexity has its own computer agent, and startups like Manus have built entire products around the idea. The underlying Mac minis powering these services are reportedly out of stock. Everyone is racing toward the same destination: an AI that can operate a computer the way a person does.

When multiple companies converge on the same idea simultaneously, it usually means the idea is either obviously right or obviously lucrative. Sometimes both. The race itself is informative — it means the major players believe desktop agents are not a novelty feature but a core product surface. This is where they think the next wave of value is.

That belief might be correct. But “everyone is doing it” has never been a particularly good argument for anything, and the speed of the race doesn’t leave much room for figuring out the parts that need figuring out.

What sits unanswered

Anthropic is being more careful than most about this. The permission model is opt-in. The hierarchy of access prioritizes safe paths. They explicitly recommend against using it with sensitive data during the preview. These are responsible choices.

But computer use raises questions that responsible engineering alone can’t resolve:

What happens when an agent operating your computer encounters something unexpected — a dialog box it doesn’t recognize, a workflow that’s changed since it last ran, a permission prompt from another application? Humans improvise in these moments. Agents either handle it or they don’t, and the failure modes are less predictable than a crashed script.

What does the error look like? When a command fails, you get an error message. When an agent clicks the wrong button in the wrong application, you get… what? A spreadsheet with wrong numbers? An email sent to the wrong person? The feedback loop is longer and less obvious.

And the question nobody in the industry wants to dwell on: what does work look like when the computer doesn’t need you sitting in front of it? That’s not a technology question. It’s a human one. And it’s the kind of question that tends to get answered in retrospect, long after the features shipped.

Where this leaves us

Computer use is a research preview. It will get better. The connectors will expand. The screen control will get more reliable. The permission model will mature. In a year, the rough edges that make it feel experimental today will be smoothed away, and we’ll wonder how we managed without it.

Or it’ll turn out that desktop automation through screen control is fundamentally too fragile for production use, and the industry will pivot to something else. History is full of technologies that felt inevitable right up until they weren’t.

Either way, the line between “tool you use” and “agent that acts on your behalf” just got thinner. That’s worth noticing, even if you’re not sure yet what to make of it.