Ask stories

whoishiring about 15 hours ago

Ask HN: Who is hiring? (January 2026)

Please state the location and include REMOTE for remote work, REMOTE (US) or similar if the country is restricted, and ONSITE when remote work is not an option.

Please only post if you personally are part of the hiring company—no recruiting firms or job boards. One post per company. If it isn't a household name, explain what your company does.

Please only post if you are actively filling a position and are committed to responding to applicants.

Commenters: please don't reply to job posts to complain about something. It's off topic here.

Readers: please only email if you are personally interested in the job.

Searchers: try https://dheerajck.github.io/hnwhoishiring/, http://nchelluri.github.io/hnjobs/, https://hnresumetojobs.com, https://hnhired.fly.dev, https://kennytilton.github.io/whoishiring/, https://hnjobs.emilburzo.com, or this (unofficial) Chrome extension: https://chromewebstore.google.com/detail/hn-hiring-pro/mpfal....

Don't miss this other fine thread: Who wants to be hired? https://news.ycombinator.com/item?id=46466073

273 174
whoishiring about 15 hours ago

Ask HN: Who wants to be hired? (January 2026)

Share your information if you are looking for work. Please use this format:

  Location:
  Remote:
  Willing to relocate:
  Technologies:
  Résumé/CV:
  Email:
Please only post if you are personally looking for work. Agencies, recruiters, job boards, and so on, are off topic here.

Readers: please only email these addresses to discuss work opportunities.

There's a site for searching these posts at https://www.wantstobehired.com.

98 191
mariogintili about 16 hours ago

Tell HN: I'm having the worst career winter of my life

I'm a SWE with 10+ years of experience; I've shipped great products and worked commercially with Ruby/Rails, Node.js, TypeScript, and Golang.

I'm open to learning new languages.

I'm UK-based and have been struggling to secure a good remote role for an extended period.

I'm hardworking and bring substantial experience and strong execution skills. I can also handle management functions.

Is anyone else going through the same? Any help understanding why this is happening would be greatly appreciated.

GitHub: https://github.com/shellandbull

LinkedIn: https://www.linkedin.com/in/mario-gintili-software-engineer/

Email: code.mario.gintili [at] gmail [dot] com

72 104
schappim 3 days ago

Tell HN: Happy New Year

438 206
Quinzel about 7 hours ago

Ask HN: Is there any way to lock apps on iPhone?

What I mean is: is there any way to lock apps on your phone so you can't even open them until a set time has elapsed?

I have apps on my phone that I want to use less because I end up wasting too much time on them. I then delete them, only to download them again later. Then I think, "I won't use this app today," and then accidentally, mindlessly open the app in a moment of boredom, and I'm wasting time again.

What I thought would be cool is if I could lock the time-wasting apps for, say… 8 hours (or whatever time I wanna choose), and then it's forced downtime: the apps can't be unlocked any other way than just waiting out the time. It could help break a time-wasting habit. Consciously, I want to use all my time wisely, but I often forget these good intentions.

Does any feature like this exist? If not, is there any way to build this feature into an app to put on your phone to help with regulating phone time?

It might be a dumb idea… but I thought I’d ask the question :)

2 2
mintsuku 1 day ago

How to use AI to augment learning without losing critical thinking skills?

Lately I have been using AI more in my day-to-day learning. I typically use it to generate boilerplate code, ask it to explain some concept I'm having trouble grasping in an easier way, and fact-check what it says while asking for deeper clarification (why is something done this way, what are other ways it can be done, comparing and contrasting, etc.). I basically use it as a tutor. I don't use it to really "do" anything for me. I program almost everything by hand, and anything that has to do with problem solving I do myself.

But I won't lie and say I'm not scared of becoming reliant on AI. I think the way I'm using it is pretty good: improving my learning while continuing to apply knowledge myself. But I'd like to know where I can improve my AI usage and how you guys are using AI in your workflow. It gives me a great deal of anxiety when I read articles about how AI will kill critical thinking skills and whatnot. I don't want to avoid it. But I don't want it to make me stupid.

17 12
WorldDev about 10 hours ago

iOS: Apps persist data after full deletion

I noticed that apps retain info after being deleted. That means the app can still track us even after we delete it.

For example, if I delete WhatsApp or Instagram, choosing to delete all data, then restart the phone and reinstall the app, it will automatically know my account.

So there is clearly a persistence mechanism that it uses.

I tried to understand which one.

- UIDevice.identifierForVendor: Apple clearly states this identifier changes as soon as all apps from the same vendor are deleted. That's what I tested, so this identifier is not the culprit.

- DCDevice.generateToken: this only stores 2 bits on the device, so it's not enough to store a username.

- Keychain services (passwords): I checked in the Passwords app; no password was saved for these apps.

- iCloud Keychain: I turned off this feature.

Does anyone know the technical way apps persist data even after total deletion? One of the big appeals of Apple to me is privacy, so I'd like to understand this...

5 4
luthiraabeykoon about 12 hours ago

I built a screen-aware desktop assistant; now it can write and use your computer

I posted Julie here a few days ago as a weekend prototype: an open-source desktop assistant that lives as a tiny overlay and uses your screen as context (instead of copy/paste, tab switching, etc.)

Update: I just shipped Julie v1.0, and the big change is that it’s no longer only “answer questions about my screen.” It can now run agents (writing/coding) and a computer-use mode via a CUA toolkit. (https://tryjulie.vercel.app/)

What that means in practice:

- General AI assistant: it hears what you hear, sees what you see, and gives you real-time answers to any question instantly.

- Writing agent: draft/rewrite in your voice, then iterate with you while staying in the overlay (no new workspace).

- Coding agent: help you implement/refactor with multi-step edits, while you keep your editor as the “source of truth.”

- Computer-use agent: when you want, it can take the “next step” (click/type/navigate) instead of just telling you what to do.

The goal is still the same: don’t break my flow. I want the assistant to feel like a tiny utility that helps for 20 seconds and disappears, not a second life you manage.

A few implementation notes/constraints (calling these out because I’m sure people will ask):

- It’s opt-in for permissions (screen + accessibility/automation) and meant to be used with you watching, not silently running.

- The UI is intentionally minimal; I’m trying hard not to turn it into a full chat app with tabs/settings/feeds.

Repo + installers are here: https://github.com/Luthiraa/julie

Would love feedback on two things:

1. If you’ve built/used computer-use agents: what safety/UX patterns actually feel acceptable day-to-day?

2. What’s the one workflow you’d want this to do end-to-end without context switching?

4 3
vedmakk 1 day ago

Ask HN: When do we expose "Humans as Tools" so LLM agents can call us on demand?

Serious question.

We're building agentic LLM systems that can plan, reason, and call tools via MCP. Today those tools are APIs. But many real-world tasks still require humans.

So… why not expose humans as tools?

Imagine TaskRabbit or Fiverr running MCP servers where an LLM agent can:

- Call a human for judgment, creativity, or physical actions

- Pass structured inputs

- Receive structured outputs back into its loop

At that point, humans become just another dependency in an agent's toolchain: slower and more expensive, but occasionally necessary.
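
The "human as tool" interface can be sketched in a few lines. Everything here is hypothetical: the types, the `call_human` function, and the marketplace behavior are illustrative assumptions, not a real MCP or TaskRabbit/Fiverr API.

```python
from dataclasses import dataclass

@dataclass
class HumanTask:
    skill: str          # e.g. "judgment", "copywriting", "physical errand"
    instructions: str   # structured input passed to the human worker
    budget_usd: float
    deadline_s: int     # human latency: hours, not milliseconds

@dataclass
class HumanResult:
    output: str         # structured output fed back into the agent's loop
    cost_usd: float
    elapsed_s: int

def call_human(task: HumanTask) -> HumanResult:
    """A real implementation would post the task to a marketplace and
    poll until a person completes it. Stubbed here for illustration."""
    return HumanResult(
        output=f"[human completed: {task.skill}]",
        cost_usd=task.budget_usd,
        elapsed_s=3600,
    )

# From the agent's perspective, this is just another (slow, pricey) tool call:
result = call_human(HumanTask("judgment", "Is this logo off-brand?", 15.0, 86400))
```

The interesting design questions live inside the stub: how the agent budgets for cost and latency, and what happens when the "tool" declines the task.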

Yes, this sounds dystopian. Yes, it treats humans as "servants for AI." That's kind of the point. It already happens manually... this just formalizes the interface.

Questions I'm genuinely curious about:

- Is this inevitable once agents become default software actors? (As of basically now?)

- What breaks first: economics, safety, human dignity, or regulation?

- Would marketplaces ever embrace being "human execution layers" for AI?

Not sure if this is the future or a cursed idea we should actively prevent... but it feels uncomfortably plausible.

42 30
Haeuserschlucht about 19 hours ago

Ask HN: Who is using local LLMs in a production environment here?

I'm asking because it seems that nobody really does that. Yes, there are some projects here and there, but ultimately everybody just jumps over to cloud LLMs. Everything is cloud. People pay for GPU usage somewhere in the middle of nowhere. But nobody really uses local LLMs long term. They say, "Well, it's so great. Local LLMs work on small devices; they even work on your mobile phone."

I have to say there's one exception for me, and that's Whisper. I actually do use Whisper a lot. But I just don't use local LLMs. They're just really, really bad compared to cloud LLMs.

And I don't know why, because to me it seems that a speech-to-text model is much more challenging to create than a model that just generates text.

But it seems the gap really can't be closed while running on consumer computers. And so I, too, go back to cloud LLMs, all privacy aside.

8 3
kwar13 8 days ago

Ask HN: What did you read in 2025?

I mostly read newspapers and technical journals, but two books that I read that made an impression: "The Changing World Order" and "The Gulag Archipelago".

336 444
udit_50 about 21 hours ago

I optimised my vibe coding tech stack cost to $0

Since vibe coding came into existence, I have been experimenting with building products a lot. Some of my products were consumer facing and some... well, internal clones of expensive software. However, since the beginning, I knew one big thing: the vibe stack was expensive.

I initially tried a lot of tools - Bolt, v0, Replit, Lovable, etc. - out of which Replit gave me the best results (yes, I can be biased due to my selection of applications). But I often paid anywhere from $25-$200/mo. Other costs like API, models, etc. made monthly bills upward of $300/mo. Was it cost effective when compared to hiring a developer? Yes. Was it value for money? NO.

So, over the months, I optimised my complete stack to be either free (or minimal cost) for internal use, or to stay at a much leaner cost for consumer-facing products.

Here's how the whole stack looks today -

- IDE: Google's AntiGravity (100% free + higher access if you use a student ID) --> https://antigravity.google/

- AI Documentation: SuperDocs (100% free & open source) --> https://superdocs.cloud/

- Database: Supabase (Nano plan free, enough for basic needs) --> http://supabase.com/

- Authentication: Stack Auth (free up to 10K users) --> http://stack-auth.com/

- LLM (AI Model): OpenRouter or Gemini via AI Studio for testing, and a custom-tuned model by Unsloth AI for production. (You can fine-tune models using Unsloth literally in a Google Colab notebook.) --> http://openrouter.ai/ OR http://unsloth.ai/ OR http://aistudio.google.com/

- Version Maintenance/Distribution: GitHub/GitLab (both free to use) --> http://github.com/ OR http://gitlab.com/

- Faster Deployment: Vercel (free tier enough for hobbyists) --> https://vercel.com

- Analytics: PostHog, Microsoft Clarity & Google Analytics (all three are free and cover different tracking; I recommend using all of them) --> https://posthog.com OR http://clarity.microsoft.com/ OR http://analytics.google.com/

That's the list, devs! I know I might have missed something. If so, just ask or list it in the comments. If you have any questions about something specific, ask away as well.

8 9
sandhyavinjam 1 day ago

Security breaks during partial failures – design notes from distributed systems

TL;DR: Many security mechanisms fail not during attacks, but during partial outages. This post documents early design notes for a failure-aware security framework for distributed systems.

The problem

In production distributed systems, security often breaks when things are half working:

- auth services degrade → retries explode

- fallback paths widen access

- recovery logic becomes the attack surface

Nothing is “exploited”, yet the system becomes unsafe.

Most security models assume stable components and clean failures. Real systems don’t behave that way.

Design assumptions

We assume:

- correlated failures

- retries are adversarial

- timeouts are unsafe defaults

- recovery paths matter as much as steady-state logic

We don’t assume:

- global consistency

- perfect identity

- reliable clocks

- centralized enforcement

Framework ideas (high level)

This work explores four ideas:

1. Failure-aware trust

- Trust degrades under failure, not just compromise

- Access narrows automatically during partial outages

2. Security invariants at runtime

- Invariants are continuously enforced

- Violations trigger containment, not alerts

3. Retry-safe security primitives

- Idempotent, monotonic, side-effect bounded

- Retries can’t escalate privilege

4. Security as observable state

- Trust level, degradation, and containment are visible

- If you can’t observe it, you can’t secure it
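
A minimal sketch of what idea 3 could look like in code, under my own assumptions (the class, the scope names, and the degradation rule are illustrative, not part of the framework): a grant store that is idempotent (keyed by request id) and monotonic (a retry can only narrow scope, never widen it), with access capped during partial outages as in idea 1.

```python
# Scope ordering used to enforce monotonicity: retries may only move "down".
SCOPE_RANK = {"none": 0, "read": 1, "write": 2}

class RetrySafeGrants:
    """Idempotent, monotonic access grants (illustrative sketch)."""

    def __init__(self):
        self._grants = {}  # request_id -> granted scope

    def grant(self, request_id: str, requested: str, degraded: bool) -> str:
        # Failure-aware trust: during a partial outage, cap scope at read-only.
        cap = "read" if degraded else "write"
        scope = min(requested, cap, key=SCOPE_RANK.get)
        # Idempotence: a retried request can never escalate beyond the
        # decision made the first time it was seen.
        prior = self._grants.get(request_id)
        if prior is not None:
            scope = min(prior, scope, key=SCOPE_RANK.get)
        self._grants[request_id] = scope
        return scope

g = RetrySafeGrants()
first = g.grant("req-1", "write", degraded=True)   # outage: capped to "read"
retry = g.grant("req-1", "write", degraded=False)  # retry cannot escalate
```

Because the stored decision only ever narrows, a retry storm during recovery cannot widen access, which is exactly the failure mode the post describes.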

What this is not

- Not zero trust marketing

- Not compliance

- Not a finished system

It’s an attempt to treat failure as the normal case, not an exception.

Why publish this early?

Because many real failures:

- don’t fit clean research papers

- happen during incidents, not attacks

- are invisible outside production systems

We’re sharing design notes to get feedback before formalizing or evaluating further.

Feedback welcome

If you’ve seen security regressions during outages or retries causing unsafe behavior, I’d like to hear about it.

This is ongoing work. No claims of novelty or completeness.

7 1
yresting 5 days ago

Ask HN: Loneliness at 19, how to cope?

I am a college student and for my entire life I have been lonely. This has probably taken a very heavy toll on my mental health, but that's another story. I've never been able to make friends and keep meaningful connections that last a long time. In fact, I'd go as far as saying I have never had a friend, and I currently don't have any.

My phone is empty; when I go to school nobody talks to me, and when I do find people who seem to have some kind of interest in me, it usually doesn't last very long since they don't prioritize whatever we have. As far as I'm aware I am tolerable to be around. People find me funny, and when I do talk to people we have decent conversations (though small talk tends to bore me). However, that doesn't lead anywhere and doesn't bring me any kind of comfort or fulfillment.

I've attributed my lack of friends to something that places all the blame on me. Maybe I'm ugly, maybe I'm not funny enough, maybe I'm dumb. I don't know if that's the right approach. But I've tried so many different things, I've read so many different books, and yet I still can't get anyone to even bother to ask me how my day was, or care to actually do something and hang out with me when I ask if they'd like to.

What am I supposed to do? Be lonely and without any kind of company and human connection my entire life?

62 108
EntropyGrid about 22 hours ago

A quantum-resistant RNG powered by collective human entropy

Hi HN,

I’m not a professional developer, but I’ve been obsessed with the idea of "Human Entropy." With the rise of quantum computing, I started wondering: Can we create a random sequence that no machine can predict because its source is the unpredictable nature of human behavior?

I built this web app using Flutter and Firebase. It's a simple idea: users perform actions on the web client, and those unique interaction hashes are sent to a secure Firestore pool. A server-side Cloud Function (hidden from the client) then joins these hashes daily to create a massive, non-deterministic random string.

Key Security Measures:

- App Check: to prevent bot-driven entropy.

- One-way write: users can only append to the pool; they can't read or modify existing data.

- Hidden logic: the actual concatenation happens in the cloud, so the core logic isn't exposed in the frontend.
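
The described pipeline (client-side interaction hashes appended to a write-only pool, joined server-side into one string) can be sketched roughly like this. The function names and event format are my assumptions, not the site's actual code.

```python
import hashlib

def interaction_hash(event: str) -> str:
    """Client side: hash one raw interaction (e.g. a tap plus a timestamp)."""
    return hashlib.sha256(event.encode()).hexdigest()

def fold_pool(pool: list) -> str:
    """Server side: the daily join, concatenating contributions and hashing."""
    return hashlib.sha256("".join(pool).encode()).hexdigest()

pool = []  # write-only from the client's point of view
for event in ["tap@1735900000.123", "swipe@1735900002.981"]:
    pool.append(interaction_hash(event))

digest = fold_pool(pool)  # 64 hex chars of pooled "human entropy"
```

One thing worth flagging for the discussion: the fold itself is deterministic, so all the unpredictability comes from the events; anyone who can observe or replay the inputs can reproduce the output.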

It’s still a work in progress and currently supported by a small community of 15 people. I’d love to get your feedback on the logic and, more importantly, have you contribute your own "entropy" to the pool.

URL: https://entropygrid.net

Looking forward to a brutal but honest technical discussion!

4 0
chistev 2 days ago

Ask HN: How did you learn to code?

30 77
__patio about 12 hours ago

Ask HN: What do you plan to read in 2026?

I'm curious what books, papers, or long-form articles are on people's lists this year. Technical books? New fields? Old favorites?

4 2
Vishal19111999 1 day ago

Ask HN: Building a tool to ensure things get done on time

I used to run an agency and faced this issue of context overload and missing tasks.

So I'm thinking of building an app that can fetch data from Slack, Jira, meetings, and email, and put together a self-populating todo list with all the important information in one place.

It would also have auto tracking and followups.

I would love to know whether this resonates with you or any other similar problem that you face.

Any inputs would be helpful.

3 2
sirnicolaz 4 days ago

Ask HN: Any example of successful vibe-coded product?

Many people talk about vibe-coding and about the different ways to use this development "methodology" successfully. I wonder though if anyone really managed to push to production anything that has been fully or almost fully created through LLM assisted coding. Do you have anything to share, whether you or someone else created it? Possibly something more complex than a static webpage.

79 130
zfoong 2 days ago

Ask HN: What are the best microVMs for AI agents?

Three weeks ago, we launched an open-source computer-use agent: https://github.com/zfoong/WhiteCollarAgent

However, we are currently looking for self-hosted, easy-to-set-up microVM solutions for the agent's GUI mode. The idea is to let the agent operate in an isolated environment for GUI operations: web browsing, launching an app, using the app, etc.

Anyone with experience with microVMs, feel free to let me know in the comments. Many thanks!

9 9
garylauchina about 21 hours ago

I'm building a 30k‑line V12 codebase solo with a "team" of 4 AIs

I’m a solo developer working on a “complex systems measurement” project that has grown to over 30k lines of code and is now at V12. Every line so far has been written by one person (me), with the research notes and design docs in a separate repo: https://github.com/Garylauchina/Prometheus-Research.

I’ve been using Cursor heavily along the way. The models are genuinely good and the local code they generate is often excellent, but on a large, evolving codebase I kept running into the same problem: context limits caused subtle architectural drift. The AI would write clean functions that were globally wrong, quietly breaking earlier design decisions and long‑range invariants.

What finally helped was to stop treating “AI” as a single assistant and instead treat different models as different team members with clear roles and constraints.

My current setup looks like this:

Perplexity + ChatGPT → “product / research brains” I use them for requirements, trade‑offs, and high‑level architecture sketches. They live outside the IDE and exist to clarify what I’m building and why before any code is touched.

Cursor, window 1 (GPT‑5.2) → “architect” This instance is not allowed to write production code. It is responsible for architecture and module boundaries, writing design notes and developer guides, defining interfaces and contracts, and reviewing diffs. I treat it like a senior engineer whose main output is prose: mini‑RFCs, comments, and checklists.

Cursor, window 2 (Sonnet 4.5) → “programmer” This one only implements tasks described by the architect: specific files, functions, and refactors, following explicit written instructions and style rules. It doesn’t get to redesign the system; it just writes the code.

The key rule is: architect always goes first. Every non‑trivial change starts as text (design notes, constraints, examples), then the “programmer” instance turns that into code.

This simple separation fixed a lot of the weirdness I was seeing with a single, all‑purpose assistant. There is much less logical drift, because the global structure is repeatedly restated in natural language. The programmer only ever sees local tasks framed inside that structure, so it’s harder for it to invent a new accidental architecture. The codebase, despite being tens of thousands of lines, feels more coherent than earlier, smaller iterations.

It also changed how I think about Cursor. Many of my earlier “Cursor is dumb” moments turned out to be workflow problems: I was asking one agent, under tight context limits, to remember architecture, requirements, and low‑level implementation all at once. Once I split those responsibilities across different models and forced everything through written instructions, the same tools started to look a lot more capable.

This isn’t a Cursor ad, and it’s not an anti‑Cursor rant either. It’s just one way to make these tools work on a large solo project by treating them like a small team instead of a single magical pair‑programmer.

One downside of this setup: at my current pace, Cursor is happily charging me something like $100 a day. If anyone from Cursor is reading this – is there a “solo dev building absurdly large systems” discount tier I’m missing?

8 7
keepamovin 1 day ago

Ask HN: Why is Apple's voice transcription hilariously bad?

Even 2–3 years ago, OpenAI’s Whisper models delivered better, near-instant voice transcription offline — and the model was only about 500 MB. With that context, it’s hard to understand how Apple’s transcription, which runs online on powerful servers, performs so poorly today.

Here are real examples from using the iOS native app just now:

- “BigQuery update” → “bakery update”

- “GitHub” → “get her”

- “CI build” → “CI bill”

- “GitHub support” → “get her support”

These aren’t obscure terms — they’re extremely common words in software, spoken clearly in casual contexts. The accuracy gap feels especially stark compared to what was already possible years ago, even fully offline.

Is this primarily a model-quality issue, a streaming/segmentation problem, aggressive post-processing, or something architectural in Apple’s speech stack? What are the real technical limitations, and why hasn’t it improved despite modern hardware and cloud processing?

7 4
joshcsimmons 1 day ago

Ask HN: How Are You Handling Auth in 2026?

Supabase used to be my go-to, but I'm wondering if there are any easier out-of-the-box solutions I haven't looked into. I'm investigating Clerk and have asked LLMs, but I'm curious to get the real take on what's working and what's easy from devs who actually have skin in the game.

11 14
meysamazad 1 day ago

Ask HN: Is being hungry enough to win?

Do you believe some people are just naturally lucky and everything they touch turns to gold?

Or are you a firm believer that with enough resilience and perseverance you'll finally make it?

How much would you say luck or natural privilege makes its way into someone's success?

Do you think you could build a successful business above and beyond 6-7 figures just by working hard and not giving up?

7 6
iluxu 1 day ago

I built a public skill registry and MCP server so Codex can install new skills

Hi HN,

I’ve been working on a simple idea: instead of hard-coding capabilities into Codex-like agents, let them install skills on demand, from a public registry.

The setup is intentionally minimal:

- A public skill registry (JSON index + signed artifacts)

- A CLI (npx codex-skill install <skill>) for humans

- A Model Context Protocol (MCP) server so agents can:

  - search skills
  - fetch manifests
  - verify artifacts
  - install workflows programmatically

As a concrete example, the first skill is a theming skill for frontend projects (static HTML or Next.js + shadcn/ui). An agent can install the skill, apply a theme, and produce a clean diff in under a minute.

What this enables:

- Agents that evolve without redeploying the core model

- A neutral, inspectable “app store” for agent skills

- Deterministic workflows (install → apply → diff → verify)

- Humans and agents using the same install path
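
The verify step of that deterministic loop can be sketched as a digest check against the registry manifest. The manifest fields below are assumptions for illustration, not the project's actual schema.

```python
import hashlib

def verify_artifact(artifact: bytes, expected_sha256: str) -> bool:
    """Accept an artifact only if it matches the digest published in the index."""
    return hashlib.sha256(artifact).hexdigest() == expected_sha256

# Hypothetical manifest entry as it might appear in the JSON index:
artifact = b"theming skill payload"
manifest = {
    "name": "theming",
    "version": "0.1.0",
    "sha256": hashlib.sha256(artifact).hexdigest(),  # stand-in for the published value
}

ok = verify_artifact(artifact, manifest["sha256"])      # clean install proceeds
bad = verify_artifact(b"tampered", manifest["sha256"])  # mismatch is rejected
```

Pinning the digest in the index is what makes the install step reproducible for both humans and agents: the same manifest always yields the same bytes or a hard failure.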

This is early and intentionally boring technically. The goal is to see if a shared skill ecosystem for agents actually makes sense in practice.

Happy to hear thoughts, criticism, or similar experiments people have tried.

Repo / demo links in comments if relevant.

3 0
yakattak 4 days ago

Ask HN: Does reading HN make you happy?

Times change, and as they change so do communities you interact with. I used to like coming to HN because the discussions were often far away from the stresses of the world (politics, local news tragedies, etc.)

Lately though it's article after article on LLMs, pro-LLM or anti-LLM. These discussions come closer to the stresses of the world than the typical HN post did historically. Well, at least to me.

They often quickly become "AI is bad" or "AI is amazing" discussions. I want to mention I'm not pro or anti either way. There don't need to be sides to pick.

Do these posts that dominate the top make you happy? For me it's turned HN into a place that stresses me out.

PS. I’m not asking for coping mechanisms, I’ve already cut my time here reading down a bunch :)

47 38
makemethrowaway about 10 hours ago

Ask HN: What tech job would let me get away with the least real work possible?

Same as the popular question from 2021: https://news.ycombinator.com/item?id=26721951 I'm asking again as a lot has changed in the past few years, especially w.r.t. LLMs, coding agents, etc.

Copy-pasting from the OP: "I'm an average developer looking for ways to work as little as humanely possible."

- I really don't care about the product I work on. I just want to do some task/project and check out.

- Fully remote would be ideal.

- Salary can be on the low end.

- I feel the world currently is too hyper-capitalistic and I don't think I fit in well. On top of that, my country has a billion+ people and everything seems like a battle for scraps.

Unless I hit home with some indie hacking/side project, I don't think this will be possible. I believe there must be some niche app/plugin/extension/sysdev roles for some CRM/CMS, etc., that might fit the bill.

A few points to note:

- No, I'm not that depressed. I'm just deeply unhappy with the current state of things.

- No, I'm not giving up on life.

- It may look like I'm not a good fit for tech/programming jobs. But I still like tech and solving tech problems. I just don't want my life to revolve around it.

- It could be that I'm not challenged well in my job. But I'm not sure whether I'd like to be drowned in work either.

Thanks for any advice (or hostile/dismissive comments) you provide. I appreciate it.

59 48
preciousoo 4 days ago

Ask HN: How to do a Personal Cybersecurity audit

I am acutely aware that if I were targeted by a non-sophisticated actor (like a very motivated hacker, or a phone/laptop thief with programming knowledge), I would be toast if they figured out, e.g., my Windows password, as that is the key to my Chrome keychain, which allows them into a Pandora's box of accounts.

Even more likely, if I were to get a laptop stolen while unlocked, they could get access to my primary email(s), which could lead to them getting access to accounts via password reset. There are a lot of other similar failure points I used to keep enumerated mentally, but now there are too many to count. The biggest ones are email access, however.

Is there a process or method I can use to enumerate, track, and fix those kinds of failure points in my personal cybersecurity?

24 12
kaifahmad1 2 days ago

Semantica – Open-source semantic layer and GraphRAG framework

Hi HN,

I’m sharing Semantica, an MIT-licensed open-source framework for building semantic layers and knowledge engineering systems for AI.

Many RAG and agent systems fail not due to model quality, but due to the semantic gap — unstructured, inconsistent data without explicit entities, rules, or relationships. Vector-only approaches often hallucinate or fail silently under real-world data.

Semantica focuses on transforming messy data into reasoning-ready semantic knowledge.

Core capabilities:

- Universal ingestion (PDF, DOCX, HTML, JSON, CSV, databases, APIs)

- Automated entity and relationship extraction

- Knowledge graph construction with entity resolution

- Automated ontology generation and validation

- GraphRAG (hybrid vector + graph retrieval, multi-hop reasoning)

- Persistent semantic memory for AI agents

- Conflict detection, deduplication, and provenance tracking

Project links:

Docs: https://hawksight-ai.github.io/semantica/

GitHub: https://github.com/Hawksight-AI/semantica

I’d appreciate feedback from people working on knowledge graphs, GraphRAG, agent memory, or production RAG reliability.

Happy to discuss design trade-offs or answer technical questions.

8 0
muratsu 6 days ago

Ask HN: How to go back to listening to MP3s?

I have been a paying Spotify customer for many years now. Thanks to the yearly Wrapped event, I am reminded that my usage pattern is listening to a limited number of tracks on repeat.

I'm curious if any of you have made the switch back to listening to MP3s? If you did, which apps are you using?

9 26