
I built a screen-aware desktop assistant; now it can write and use your computer

luthiraabeykoon Friday, January 02, 2026

I posted Julie here a few days ago as a weekend prototype: an open-source desktop assistant that lives as a tiny overlay and uses your screen as context (instead of copy/paste, tab switching, etc.).

Update: I just shipped Julie v1.0, and the big change is that it’s no longer only “answer questions about my screen.” It can now run writing and coding agents, plus a computer-use mode via a CUA (computer-use agent) toolkit (https://tryjulie.vercel.app/).

What that means in practice:

- General assistant: it hears what you hear, sees what you see, and answers any question in real time.
- Writing agent: drafts/rewrites in your voice, then iterates with you while staying in the overlay (no new workspace).
- Coding agent: helps you implement/refactor with multi-step edits, while you keep your editor as the “source of truth.”
- Computer-use agent: when you want, it takes the “next step” (click/type/navigate) instead of just telling you what to do; see the sketch after this list.
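For the computer-use agent, the pattern I care about most is approval before every action. Here’s a minimal sketch of that human-in-the-loop gate; the types and function names are hypothetical and only illustrate the flow, not the actual Julie code:

```typescript
// Hypothetical sketch of a human-in-the-loop gate for computer-use actions.
// These names are illustrative, not taken from the Julie codebase.

type Action =
  | { kind: "click"; x: number; y: number }
  | { kind: "type"; text: string }
  | { kind: "navigate"; url: string };

interface Executor {
  run(action: Action): Promise<void>;
}

// Ask the user to approve each step before it runs; abort on the first "no".
async function runWithApproval(
  plan: Action[],
  executor: Executor,
  confirm: (a: Action) => Promise<boolean>,
): Promise<void> {
  for (const action of plan) {
    if (!(await confirm(action))) {
      console.log("Step declined; stopping the agent.");
      return;
    }
    await executor.run(action);
  }
}
```

Aborting on the first declined step (rather than skipping it) keeps a multi-step plan from drifting once you’ve said no to part of it.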

The goal is still the same: don’t break my flow. I want the assistant to feel like a tiny utility that helps for 20 seconds and disappears, not a second life you manage.

A few implementation notes/constraints (calling these out because I’m sure people will ask):

- It’s opt-in for permissions (screen + accessibility/automation) and meant to be used with you watching, not silently running (sketched below).
- The UI is intentionally minimal; I’m trying hard not to turn it into a full chat app with tabs/settings/feeds.
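As a rough sketch of what the opt-in checks can look like, assuming an Electron shell on macOS (the real app may be wired differently; the Electron APIs below are real, but their use here is illustrative):

```typescript
// Sketch of opt-in permission gating, assuming an Electron shell on macOS.
import { systemPreferences } from "electron";

function hasScreenPermission(): boolean {
  // 'granted' only after the user enables Screen Recording in System Settings.
  return systemPreferences.getMediaAccessStatus("screen") === "granted";
}

function hasAutomationPermission(): boolean {
  // Passing false checks Accessibility trust without prompting the user.
  return systemPreferences.isTrustedAccessibilityClient(false);
}

// Gate each capability separately, so screen-aware answers can work
// without ever granting the computer-use agent control of the machine.
```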

Repo + installers are here: https://github.com/Luthiraa/julie

Would love feedback on two things:

1. If you’ve built/used computer-use agents: what safety/UX patterns actually feel acceptable day-to-day?
2. What’s the one workflow you’d want this to do end-to-end without context switching?
