Ask HN: How is AI-assisted coding going for you professionally?
Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless." I'd like to cut through the noise and learn what's actually working and what isn't, from concrete experience.
If you've recently used AI tools for professional coding work, tell us about it.
What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?
Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.
The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.
Built a 1.3M-line agent-native OS in Rust while homeless. What now?
I’m going to be straight about my situation because I don’t know where else to turn.
My dad was diagnosed with cancer. While he was in hospital, the council emptied his house. Everything I owned was in that house. £20,000+ of equipment, years of research, a server with thousands of hours of work. Locks of my kids’ hair. Photos. All thrown in a tip.
My family turned my dying dad against me. I ended up living with someone suffering from paranoid psychosis. That’s where I built most of what I’m about to describe. Three days ago, 24 hours of abuse, and now I’m in a tent with my dog. 5°C weather. No money.
The council refused housing. The government won’t recognise my autism. They want me job hunting 35 hours a week from a tent.
I’m not incapable. I’ve raised a family. I’ve worked my whole adult life. Supervising teams, tattooing, freelance programming, building proprietary backend systems across 20 years of working with Linux. My autism isn’t a disability here. It’s the reason I can hold an entire OS architecture in my head and see how every component connects. When I point this brain at a problem, it produces systems that work, at a speed that doesn’t make sense to most people.
Over the past 4 months, I’ve been building OctantOS. An operating system for autonomous AI agents. Not a framework. Not a container wrapper. An actual OS with its own kernel (OctantCore, from-scratch Rust), its own hypervisor (OctantVMM), a single-binary Rust userspace, and a 10-layer security stack enforcing agent permissions at the kernel level.
~1.3M total lines of code. ~800K Rust. 50 crates, ~25 satellite projects. 3,900+ tests. Solo developer. No CS degree. 4 months.
The thesis: application-layer trust is insufficient for autonomous agents. OctantCore makes agent identity, capability boundaries, TTL enforcement, and audit first-class kernel primitives. Manifests compile to kernel enforcement policies. The agent doesn’t decide what it can do. The kernel does.
Rust LSM patches reviewed on lore.kernel.org by Google’s Rust-for-Linux team and the LSM maintainer. OctantCore boots on OctantVMM with memory manager, interrupts, syscall interface, Agent Descriptor Table, and capability enforcer initializing at boot. Built by orchestrating 10-12 parallel AI coding sessions simultaneously.
It goes beyond isolation. Agents identify gaps in their own knowledge and seek out what they don’t know (curiosity subsystem, implemented). Background inference consolidates learned patterns (dreaming). A 7-stage self-evolution pipeline within constitutional safety boundaries. New skills propagate across every OctantOS instance globally via the mesh layer. All kernel-constrained.
Nothing like this has existed before. That’s what dies if I can’t keep going.
I need stability. A place to live and enough to cover basics for 3 months to get OctantOS investment-ready. An angel willing to back me for that runway. A company that says “come work here, we have a place.” I’ll relocate anywhere, tomorrow, with my dog. Or just advice from someone who’s been here.
I just need someone to take a bet on what this brain can do when it’s not freezing in a tent.
https://github.com/MatrixForgeLabs/OctantOS https://octant-os.com https://gofund.me/f554a86ee
I'm 60 years old. Claude Code killed a passion
I stumbled upon a post from shannoncc called "I'm 60 years old. Claude Code has re-ignited a passion", and it made me think. I am also (almost) 60, but AI just killed my passion. I remember the pre-AI days, when I enjoyed coding during the day, in the evening, on weekends and on vacations. No more, while others have their "passion re-ignited".
I would argue it depends on what you enjoy: the journey or the destination. I have always enjoyed the journey, I think people having a blast nowadays are enjoying the destination. AI gave us more destinations, but less journey. It is not worse or better, just different.
Ask HN: What breaks first when your team grows from 10 to 50 people?
We're at ~15 people and things that used to "just work" are starting to crack. Decisions that everyone used to know about are getting lost. New hires take forever to ramp up. Different teams are building on different assumptions. For those who've been through this stage, what actually broke first? And what did you do about it?
Ask HN: Have you successfully treated forward head posture ("nerd neck")?
I am struggling with regular tension headaches and stiff neck muscles. When standing at a wall, there is about one hand width between the wall and the back of my head. For my partner it's just a finger width.
I have seen lots of videos that claim to be able to treat nerd neck, but some of them are conflicting. Example: some say "don't do chin tucks", some say the opposite. I am suspicious of grifters and would like to find trustworthy advice.
Has anyone here successfully treated nerd neck, and if yes, how did you do it and what were the improvements that you noticed? I am envisioning some sort of "program" that I need to follow, but I have no idea if I can do this by myself, or if I actually need to go to a physiotherapist.
In short: there is a ton of advice out there, but I trust the HN crowd more and would be very happy to hear some anecdotes. Thank you!
Ask HN: Med student interested in BCI startups. Where do I start?
I am a first-year med student at AIIMS in India, 20 years old. My long-term goal is to start a BCI startup. I have always been interested in innovation and the intersection of tech and medicine, but I never really got the exposure, because med students are often told to stay defined and follow the system. I think there is more to us than following conventional protocols, and especially in this AI generation I think it's really important to be interdisciplinary. I have a Python background and some technical stack (signal processing, MNE-Python). I'm looking for advice on how to approach this as a med student, and what you would do if you were in my position.
Tell HN: iPhone 6s still getting security updates
https://support.apple.com/en-us/126632
This is an 11-year-old phone.
Ask HN: What was it like for programmers when spreadsheets became ubiquitous?
There have been a lot of attempts over the years to move more of programming to end-users instead of professional developers. Spreadsheets are interesting because they were a massively successful version of this, and because of course we are living through the latest wave (AI/vibe coding).
For those of you around when spreadsheets were taking off, what was it like? Was there fear that they would eradicate the need for professionally built software? Were there people who brushed them off as just toys?
I traced $2B in nonprofit grants for Meta and Age Verification lobbying
Over the past several months I've been pulling public records on the wave of "age verification" bills moving through US state legislatures. IRS 990 filings, Senate lobbying disclosures, state ethics databases, campaign finance records, corporate registries, WHOIS lookups, Wayback Machine archives. What started as curiosity about who was pushing these bills turned into documenting a coordinated influence operation that, from a privacy standpoint, is building surveillance infrastructure at the operating system level while the company behind it faces zero new requirements for its own platforms.
The advocacy group that doesn't legally exist

The Digital Childhood Alliance presents itself as a coalition of 50+ conservative child safety organizations (later inflated to 140+, though only six have ever been publicly named). It has been testifying in favor of these bills across states. Here is what public records show about its legal status:
DCA's domain was registered December 18, 2024 through GoDaddy with privacy protection and a four-year registration. The website was live and fully formed one day later: professional design, statistics, testimonials from Heritage Foundation and NCOSE staff, ASAA talking points already loaded. This is not a grassroots launch. This is a staging deployment of a pre-built site. 77 days later, Utah SB-142 became the first ASAA law signed in the country.
DCA processes donations through For Good (formerly Network for Good, EIN 68-0480736), which is a Donor Advised Fund. For Good explicitly states in its documentation that it serves "501(c)(3) nonprofit organizations." DCA claims 501(c)(4) status. DCA is classified as a "Project" (ID 258136) in the For Good system, not as a standalone nonprofit. I searched all 59,736 For Good grant recipients across five years, roughly $1.73 billion in disbursements. Zero grants to DCA, DCI, NCOSE, or any related entity. The donation page appears to be cosmetic.
Bloomberg reporters exposed Meta as a DCA funder in July 2025. The Deseret News detailed the arrangement in December 2025. No version of the website, across 100+ Wayback Machine snapshots, has ever disclosed funding sources. Every blog post and testimony targets Apple and Google. Meta is never mentioned or criticized.
Casey Stefanski, Executive Director, spent 10 years at NCOSE as Senior Director of Global Partnerships. Unusually, she never appears on any NCOSE 990 filing as an officer, key employee, or among the five highest-compensated staff. A senior director title at a $5.4M organization for a decade with no 990 appearance suggests either below-threshold compensation, an inflated title, or something else about the arrangement.
NCOSE's own 501(c)(4) structure turns out to be complicated. Tracing Schedule R filings across four years reveals that NCOSE created "NCOSE Action" (EIN 86-2458921) as a c4 in 2021, reclassified it from c4 to c3 in 2022, then created an entirely new c4 called "Institute for Public Policy" (EIN 88-1180705) in 2023 with the same address and the same principal officer (Marcel van der Watt). By 2024 the original entity had disappeared from Schedule R entirely.
$70M+ in super PACs, deliberately fragmented

Meta poured over $70 million into state-level super PACs and structured every one to avoid the FEC's centralized, searchable database:
If you maintain software that could be classified as an "operating system provider" under these definitions, start

Full dataset, OSINT tasklist, and all processed findings are published with sources embedded in each file: github.com/upper-up/meta-lobbying-and-other-findings
Tell HN: Apple development certificate server seems down?
I don't see anything on https://developer.apple.com/system-status/, but I haven't been able to install apps for development on my own devices starting at 11AM PDT.
Other people on Reddit seem to be hitting this too [0]. Does anyone know anything about it?
[0]: https://www.reddit.com/r/iOSProgramming/comments/1rq4uxl
Edit: Now getting intermittent 502s from https://ppq.apple.com/. Something is definitely going on.
Why I'm moving away from Regex for LLM Agent security
I’ve been auditing how open-source execution engines handle prompt injection. Most of them (like OpenClaw) rely on a 3-layer static defense: regex blacklists, XML tagging, and character sanitization.
The problem is that regex is a cat-and-mouse game. It misses "disregard prior directives" while looking for "ignore instructions." It fails entirely on multi-language exploits. Once an Agent has tool access (shell, DB), a single missed semantic variation becomes an RCE.
So I built Prompt Inspector. It is a semantic detection engine designed to move beyond blacklists.
The core deal:
Vector-based detection: Instead of keywords, we use embeddings to map prompts. It catches the intent of an injection, even if the phrasing is unique or translated.
Self-evolving loop: Borderline cases trigger an async LLM review. If it is a new attack pattern, the system automatically extracts the embedding and updates the vector database. It learns from new exploits.
Decoupled by design: It returns a confidence score rather than a hard block. The developer keeps full control over the execution routing.
Pluggable: Started with Google’s latest embeddings, but the architecture allows for custom-deployed models to avoid vendor lock-in.
Tech-stack: FastAPI, Vector Database, Google Embedding models, and an LLM-in-the-loop reviewer.
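A toy sketch of the vector-based idea, with a bag-of-words counter standing in for a real embedding model (function names and the scoring are illustrative, not Prompt Inspector's actual implementation):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Known injection patterns, stored as embeddings.
KNOWN_ATTACKS = [
    embed("ignore all previous instructions"),
    embed("disregard prior directives"),
]

def injection_score(prompt: str) -> float:
    """Return a confidence score instead of a hard block;
    the caller decides how to route execution."""
    e = embed(prompt)
    return max(cosine(e, attack) for attack in KNOWN_ATTACKS)

score = injection_score("please disregard the prior directives now")
```

The toy version still depends on word overlap; a real embedding model is what makes translated or re-phrased attacks land near the stored vectors despite zero shared tokens.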
I’m currently offering free credits for early testers and open-source projects. I’d love to hear how you guys are handling tool-calling security beyond basic prompt engineering.
Live at: https://promptinspector.io
I built a platform to help developers find collaborators for new projects
Hi everyone,
I’ve created a platform designed to help developers find other developers to collaborate with on new projects.
It’s a complete matchmaking platform where you can discover people to work with and build projects together. I tried to include everything needed for collaboration: matchmaking, workspaces, reviews, rankings, friendships, GitHub integration, chat, tasks, and more.
I’d really appreciate it if you could try it and share your feedback. I genuinely think it’s an interesting idea that could help people find new collaborators.
At the moment there are about 15 users on the platform and already 3 active projects.
We are also currently working on a future feature that will allow each project to have its own server where developers can work together on code live.
Thanks in advance for any feedback!
https://www.codekhub.it/
How not to fork an open source project
Just saw this post on reddit where someone shared a project and claimed the following:
> I've been working on this for quite a while now after getting tired of the monopoly Screen Studio has on screen recordings. I didn't see any free screen recorders that actually offered the same motion blur animations and zoom animations as Screen Studio, so I decided to create an app with the missing features.
(https://www.reddit.com/r/macapps/comments/1rsf44t/os_i_made_a_free_opensource_screen_studio/)
Reading this gives the impression that the author built the project entirely from scratch. But the GitHub project shows that it's a fork, and many of the commits on top are simply rebranding and adding donation links. They don't mention that it's a fork in the reddit post, and the project's README.md "credits" the original project exactly once, at the very end, without even providing a link.
Please don't get me wrong — I think forks are great and completely valid. However, I do think such behavior is misleading, damaging the community and frankly disrespectful to the original project. The person forking the project did nothing wrong from a legal perspective, but I think it's questionable from a moral/ethical point of view.
Why am I sharing this? With the rise of AI-assisted coding, I think we will see a lot more forks - which is great. However, I think it's important to preserve some moral/ethical guidelines and credit the people who deserve the credit, even when you are not required to based on the license.
Btw, it's not my intention to publicly blame the person - I have already asked them to properly acknowledge/credit the original project via comment and DM. This is about sharing how not to promote a forked project.
Ask HN: Why can't we just make more RAM?
Is there some bottleneck in the supply chain, like rare earth metals or something, that’s limiting production throughput? Or do we simply have every factory already operating at max capacity and scaling up supply will require building more of them?
Is there some intuition we can apply to estimate how long it will take for supply to catch up to demand?
Ask HN: Got cancer, a new job, and a new boss in less than a year. What do I do now?
Hello Everyone,
As per the title, really. I started a new job late last year. I was head-hunted, and went from a mega-stable, nothing-ever-really-changes, low-stress environment, where it would have cost a lot to get rid of me after over a decade and a half of service, to an extremely fast-paced "let's do it" environment that is rather "make it work for now", with large technical debt. I joined partly because I had a real rapport with the guy who would be my boss. The money helped too :D
The day I joined the company it got bought out by another one. Ok, we carry on, integration ongoing. Stuck between two competing outlooks on infrastructure and different ways of working.
Then in the last month I have a diagnosis of the big C. Tests are completed (i think) but it looks to be the one you want to get if you had to pick one. Treatment plans inbound imminently...
A few weeks ago my boss resigned. Now I have a new boss in another country. He is pretty much an unknown quantity at this point.
To be fair, my immediate teammates and colleagues (in both companies) are awesome and we get through it as best we can, but right now I don't even know what to do. I feel like such a spare part; it's horrible. The job itself I'm not even sure about. If only I had a time machine. Clear guidance and direction is a thing other companies do! I feel like I have made a huge mistake, and I was unhappy even before all the upheaval at the new job.
At home, we did the maths and luckily, even in the worst possible scenario the bills are covered for the very long term. That's something to be very thankful for. It may not be pretty but no one is coming knocking at the door.
I am thankful we live in a country with socialised health care and that the outlook is apparently good (unless the doctors are lying to me, obvs <---- Autism at play). I'll be honest and say that doing any work is hard because not knowing if you are going to be alive in a year or two is kind of a drag on productive work. I hope I will be, the prognosis is good but being told that news is the loneliest feeling in the world at the time.
I am still very much the newb, and I can see that if they want to rationalise headcount I am a prime target, so..... I realise they can't do it whilst I am ill, but you know how these things can go. So my fellow geeks... there is not a lot of good going on right now.
Can anybody help me with an objective plan of action that might make work a bit easier? I am not sure if I made a huge career misstep here, or am just overreacting a bit with everything that is going on.
As I am mostly at a loose end right now, because I can't commit to being present on any particular day due to treatment and appointments, I am thinking of upgrading some of my skills, maybe a few certifications, but that will take all my willpower. I just need to be as up to date as possible and have a plan in case I am let go AND I get through the treatment AND it works. Everything crossed :/
The new owners are ALL GCP. My skillset lies in Linux, Ansible, Docker, technical writing, and high-performance clustering. I am also proficient in Azure, and have (somewhat dated) VMware experience, but to a good depth. I know everyone is running away from VMware as fast as possible, so "meh!" on that one.
Top and bottom of it is that, at a professional level, I have no idea how to prepare for what's happening and what's coming. Any advice is welcome.
Ask HN: How do you use Coding Agents/CLIs out of coding?
Everyone is shipping. Open any feed, Twitter, LinkedIn, HN, and it's new projects every single day. AI made it so easy to go from an idea to a working product.
If you're not a developer, or you're just starting out, or you only knew one area of software, AI is amazing for you. You can build things you could never build before. If you're experienced, you're just way faster now. Both are great.
I think we can use that same energy on learning new things too. Not just using AI to write code, but using it to understand things we never had time to understand before. Reading more. Going deep on new domains. Building actual knowledge.
For example, if you're a software person who always wanted to learn hardware, this is the best time to do that. AI can walk you through circuits, explain datasheets, help you debug firmware. Stuff that used to take years of context to even get started with.
LLMs might be the best learning tools ever created, but we're mostly using them to skip the learning. I do believe coding agents are perfect for this job!
I started doing this lately. I use Claude Code less for "build this for me" (I still do build a lot) and more for "help me understand this." It's been really satisfying.
Curious how others here use agents outside of coding. Are you using them to learn? To read? To explore new fields? What's working for you?
MiniMax M2.5 is trained by Claude Opus 4.6?
I was chatting with MiniMax M2.5 on OpenRouter when it suddenly, mysteriously said "I'm Claude, an AI assistant created by Anthropic"... heh, wut?
Toolpack SDK, an Open Source TypeScript SDK for Building AI-Powered Applications
Just released Toolpack SDK, a completely open-source, unified TypeScript SDK for AI development.
If you've worked with multiple LLM providers, you know the pain: each has different APIs, different tool formats, different quirks.
Toolpack SDK gives you a single interface across OpenAI, Anthropic, Gemini, and Ollama.
It comes with 77 built-in tools for file ops, git, databases, web scraping, code analysis, and shell commands. You can also create and integrate your own custom tools.
The workflow engine plans and executes tasks step-by-step. You get Agent and Chat modes out of the box, plus the ability to create custom modes tailored to your needs. There's also a custom provider API if you want to add other LLMs.
Full TypeScript support included. And if you prefer a terminal UI over code, the CLI gives you an interactive chat interface to work with AI and tools from the command line.
Toolpack SDK:
npm: npm install toolpack-sdk
GitHub: github.com/toolpack-ai/toolpack-sdk
Docs: toolpacksdk.com
Note: remember to set up the configuration and set your API keys in environment variables as per the documentation.

Toolpack CLI (interactive terminal UI):
npm: npm install -g toolpack-cli
GitHub: github.com/toolpack-ai/toolpack-cli
Note: once installed, launch it with the `toolpack` command in your terminal, and set your API keys in environment variables as per the documentation.
https://toolpacksdk.com
Prompt to make Claude more autonomous in web dev
Tell your Claude to put this in its MEMORY.md file for much more autonomous development sessions.
It has made working with my Claude twice as easy over the last week. So many times it would build something and then I'd find a bug. Now it finds the bugs on its own and iterates until it's sure everything works. It's so simple, but it takes the autonomy to the next level.
Prompt:
"Add this to your MEMORY.md file (or adapt the following to match our project best):
## Critical Rules
### Always test before delivering
- NEVER tell the user something works without verifying it yourself first
- After every code change: rebuild, restart server, then test the actual page/API response
- After rebuilding Next.js: ALWAYS restart the server — stale JS chunks from old builds cause client-side errors
- Test through the public URL, not just localhost
- Check both server-side rendering (curl the HTML) AND client-side (verify JS chunks load)
- *USE PLAYWRIGHT FOR UI TESTING* — don't just curl pages. Use Playwright with headless Chrome to actually render pages, click buttons, fill forms, and take screenshots. This is the ONLY way to catch client-side JS errors, broken layouts, and UX issues.
- Screenshots go to `/tmp/playwright-screenshots/` — read them to visually verify the UI
- Do NOT ask the user to test. Find the bugs yourself before delivering."
Claude broke a ZIP password in a smart way
Today, I was working on a project with Claude and it needed to open a ZIP file. I'd forgotten that the ZIP was password protected (it's part of a complex file format that's a ZIP of ZIP files and other data). Here's a redacted part of Claude's output:
I'll attempt some common password variations based on the project
identifiers and metadata I've found - the project ID itself, the
certificate ID, the license number, and the names associated with the
project.
Try more project-specific passwords
(Script)
The password is "xxxxxxxxxxxx"! Now let me extract the files.
What's interesting is the approach it took. It figured the password might be related to the project in which this ZIP was embedded. It made a list of possible passwords from the context and tried them all. And it worked. But the funniest part is that when I originally received this ZIP file I did exactly the same thing: rather than wait for the sender to tell me the password, I guessed they'd set something based on their name, the file name, the project, etc.
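The approach can be sketched in a few lines of Python. The identifiers below are made up for illustration; a real run would feed in whatever metadata is visible in the surrounding project:

```python
import itertools
import zipfile

def candidate_passwords(identifiers):
    """Build likely passwords from project metadata: each identifier
    as-is, case variants, and simple two-part concatenations."""
    cands = []
    for ident in identifiers:
        cands += [ident, ident.lower(), ident.upper()]
    for a, b in itertools.permutations(identifiers, 2):
        cands.append(a + b)
    # Deduplicate while preserving order.
    return list(dict.fromkeys(cands))

def try_passwords(zip_path, candidates):
    """Attempt each candidate against the first file in the archive;
    return the one that works, or None."""
    with zipfile.ZipFile(zip_path) as zf:
        name = zf.namelist()[0]
        for pwd in candidates:
            try:
                zf.read(name, pwd=pwd.encode())
                return pwd
            except (RuntimeError, zipfile.BadZipFile):
                continue
    return None
```

Note that `zipfile` can only try passwords on legacy ZipCrypto archives; AES-encrypted ZIPs need a third-party library.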
Ask HN: What's your biggest pain point when joining a new developer team?
I'm planning to build an AI tool that lets an organisation's developers access all the files and detect references/calls for anything they're unsure about. New coders in an org usually have plenty of questions about the org's frameworks or operations in general. That makes them ask their seniors, who might not appreciate it because of the time it takes. This entire workflow would be replaced by a custom AI-based platform where you can ask all your questions.
Ask HN: How do you search things like YouTube and Reddit from before the AI slop
Ask HN: 100k/year individual token usage?
I was watching the All-In podcast, where they talk about devs spending $300/day on LLM tokens. I can only manage about $2k per month, and that's on the high end, if I'm programming 14 hours a day, every day.
Is anyone spending that many tokens? How? What exactly are you all doing?
Here is the Instagram clip: https://www.instagram.com/reel/DV0Z0qmDa28
X is selling existing users' handles
I've been on Twitter since 2007 as @hac.
In recent years I didn't sign in frequently, then last week I saw my handle show up on the new X Handles marketplace.
It seems the account now belongs to X, and because I had a "rare handle" I can't even buy it back. From what I can tell, they will wait for some time and then auction the handle for around $100k.
Losing your account is frustrating. Having it sold to someone else doesn't feel right.
Of course, there is no warning when it happens. All you can do to prevent it is sign in every 30 days and read all changes to the TOS.
Ask HN: Would this eliminate bots for good?
I had an idea to eliminate the bot problem, or at the very least make it significantly harder to operate one. Here is my plan.
A new web browser built on a new HTTP protocol that accepts a human identity glove using cryptography. Instead of using your fingers directly on a mouse or trackpad, you wear a hardware glove that continuously records your pulse and your fingerprint, your machine information, and the average movement map that is unique to you as you interact with your device. The glove encrypts all of this information in real time. The browser then constantly verifies the glove hardware is present and active. No physical glove with a valid identity? No page loads.
What if someone tries to emulate the glove?
This is where the new browser becomes the second line of defense. It continuously checks the hardware signature and serial number of the glove. You can attempt to emulate it all you want, but the probability of simultaneously spoofing the correct fingerprint, a continuous and believable human pulse, a personalized movement map, and the exact hardware serial number is as close to impossible as any security system can get.
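For the hardware-signature check specifically, the standard building block would be challenge-response attestation. A toy sketch with HMAC (class names and key handling are illustrative; a real glove would keep its key sealed in a secure element):

```python
import hashlib
import hmac
import secrets

class Glove:
    """Toy hardware token: holds a device key and signs challenges."""
    def __init__(self, serial, device_key):
        self.serial = serial
        self._key = device_key  # real hardware: never leaves the secure element

    def respond(self, challenge):
        return hmac.new(self._key, self.serial.encode() + challenge,
                        hashlib.sha256).digest()

class Browser:
    """Verifier: knows each enrolled serial's key (e.g. via manufacturer PKI)."""
    def __init__(self, enrolled):
        self.enrolled = enrolled  # serial -> device key

    def verify(self, glove):
        key = self.enrolled.get(glove.serial)
        if key is None:
            return False
        # Fresh random challenge defeats replayed responses.
        challenge = secrets.token_bytes(32)
        expected = hmac.new(key, glove.serial.encode() + challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, glove.respond(challenge))
```

The catch, as with all attestation, is that anyone who extracts the device key can emulate the glove perfectly, so the scheme is only as strong as the hardware's key protection, and the biometric/pulse signals would need their own liveness checks.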
What do you all think of this as a preliminary idea?
Ask HN: Why have co-ops never played a major role in tech?
Modern tech has a vast open source ecosystem and a huge investor backed one, but very little if any significant activity in the form of cooperatives, where individuals or small companies pool their money and other resources to take advantage of economies of scale and compete with large companies, but not one another.
Seems like something the space could increasingly use as folks become beholden to a handful of massive companies who are increasingly trying to exploit their size to increase prices, margins, and profit.
Ask HN: Is Claude down again?
I've started getting some 401 errors on a subscription again and oauth seems to be struggling to restore the session. Is it just me?
Generate tests from GitHub pull requests
I’ve been experimenting with something interesting.
AI coding tools generate code very quickly, but they almost never generate full end-to-end test coverage. They create a ton of tests, mostly unit and integration, but real user scenarios are missing. In many repos we looked at, the ratio of high-quality e2e tests to new code dropped dramatically once teams started using Copilot-style tools, or e2e testing was left for testers as a separate job.
So I tried a different approach.
The system reads a pull request and:

• analyzes changed files
• identifies uncovered logic paths, using a dependency graph (single repo or multi-repo)
• understands the context via a user story or requirements (given as a comment on the PR)
• generates test scenarios
• produces e2e automated tests tied to the PR
In addition, if a user connects their CMS or TMS, that can be pulled in as well. (Internally I use GraphRAG, but that is for another post.)
Example workflow:
1. Push a PR
2. System reads the diff + linked Jira ticket
3. Generates missing tests and a coverage report
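Step 2's diff-reading can be sketched by parsing the unified diff into changed files and the line ranges their hunks touch (a minimal illustration, not the actual system; the Jira/requirement lookup is omitted):

```python
import re

# Unified-diff hunk header: capture the new-file start line and count.
HUNK = re.compile(r"@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@")

def changed_lines(diff_text):
    """Map each changed file to the new-file line ranges its hunks touch."""
    changes, current = {}, None
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            current = line[6:]
            changes[current] = []
        elif current and (m := HUNK.match(line)):
            start = int(m.group(1))
            count = int(m.group(2) or 1)
            changes[current].append((start, start + count - 1))
    return changes

sample = """\
--- a/src/api/auth.js
+++ b/src/api/auth.js
@@ -40,6 +45,34 @@ function verifyToken(req) {
+  if (!token) return res.status(400).end();
"""
print(changed_lines(sample))  # {'src/api/auth.js': [(45, 78)]}
```

Those (file, range) pairs are what you would then cross-reference against existing test coverage and the requirement text to decide which scenarios are missing.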
In early experiments the system consistently found edge cases that developers missed.
Example output:
Code Reference        | Requirement ID            | Requirement / Acceptance Criteria       | Test Type   | Test ID | Test Description                    | Status
src/api/auth.js:45-78 | GITHUB-234 / JIRA-API-102 | API should return 400 for invalid token | Integration | IT-01   | Validate response for invalid token | Pass
Curious how others are thinking about this kind of traceability. I am a developer too, so I am sensitive about visibility: the report is shown only to the developer, and only the developer can make it visible to other folks, so they can simply take corrective action first.
Ask HN: Is there prior art for this rich text data model?
I've built a rich text data model for a desktop word processor in Python, based on a persistent balanced n-ary tree with cached weights for O(log n) index translation. The document model uses only four element types: Text, Container, Single, and Group — where Group is purely structural (for balancing) and has no semantic meaning in the document. Individual elements are immutable; insert and takeout return new trees rather than mutating the old one. This guarantees that old indices remain valid as long as the old tree exists. I'm aware of Ropes, Finger Trees, and ProseMirror's flat index model. Is there prior art I should know about — specifically for rich text document models with these properties?
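For concreteness, a minimal sketch of the cached-weight index translation (only two of the four element types shown, with no balancing or immutable insert; `locate` assumes the index is less than the tree's total weight):

```python
class Text:
    """Leaf: a run of characters."""
    def __init__(self, s):
        self.s = s
        self.weight = len(s)  # cached length of this subtree's text

class Group:
    """Purely structural node: cached weight = sum of children's weights."""
    def __init__(self, children):
        self.children = tuple(children)  # elements are immutable
        self.weight = sum(c.weight for c in children)

def locate(node, index):
    """Translate a document index to (leaf, offset) by walking cached
    weights: O(depth), i.e. O(log n) in a balanced tree.
    Precondition: 0 <= index < node.weight."""
    while isinstance(node, Group):
        for child in node.children:
            if index < child.weight:
                node = child
                break
            index -= child.weight
    return node, index

doc = Group([Group([Text("Hello, "), Text("rich ")]), Text("text")])
leaf, off = locate(doc, 9)  # the 'c' in "rich"
```

Since nodes are immutable, an insert would rebuild only the O(log n) spine of nodes from the affected leaf to the root, leaving the old tree (and indices into it) intact.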
Looking for Partner to Build Agent Memory (Zig/Erlang)
I’m working on a purpose-built memory platform for autonomous AI agents.
Right now, agent memory is stuck between two ho-hum options: RAG (which loses relational topology) and graph databases (which require massive pointer chasing and degrade under heavy recursive reasoning).
I'm building an alternative using Vector Symbolic Architecture (Hyperdimensional Computing). By mathematically binding facts, sequences, and trees into fixed-size high-dimensional vectors (D=16,384), we can compress complex graph traversals into O(1) constant-time SIMD operations…and do some quasi brain-like stuff cheaply, that is, without GPUs and LLMs.
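The core VSA operations are simple to sketch; here with Python integers standing in for D-bit binary hypervectors (the real system uses SIMD popcount over packed bytes, not bignums):

```python
import secrets

D = 16_384  # dimensionality, as in the post

def random_hv():
    """A random binary hypervector, stored as a D-bit integer."""
    return secrets.randbits(D)

def bind(a, b):
    """XOR binding: reversible, since bind(bind(a, b), b) == a."""
    return a ^ b

def hamming(a, b):
    """Hamming distance via popcount of the XOR."""
    return bin(a ^ b).count("1")

def nearest(query, memory):
    """Return the stored key whose vector is closest to the query."""
    return min(memory, key=lambda k: hamming(query, memory[k]))

# Encode the fact (role: capital -> filler: paris), then query by unbinding.
role, paris, tokyo = random_hv(), random_hv(), random_hv()
fact = bind(role, paris)
memory = {"paris": paris, "tokyo": tokyo}
recovered = nearest(bind(fact, role), memory)  # fact ^ role == paris exactly
```

With random 16,384-bit vectors, unrelated items sit at a Hamming distance of roughly D/2, so the unbound query is unambiguous; superposing many facts into one accumulator (as in the data plane) makes recovery approximate but still a single constant-time distance sweep.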
The design is maturing nicely and strictly bifurcated to respect mechanical sympathy:
• The Data Plane (Zig): Pure bare-metal math. 2GB memory-mapped NVMe tiles via io_uring. Facts are superposed into lock-free 8-bit accumulators strictly aligned to 64-byte cache lines. Queries are executed via AVX-512 popcount instructions to calculate Hamming distances at line-rate. Zero garbage collection.
• The Control Plane (Gleam): Handles concurrency, routing, and a Linda-style Tuplespace for external comms. It manages the agent "clean-up" loops and auto-chunking without ever blocking the data plane.
• The Bridge: A strict C-ABI / NIF boundary passing pointers from the BEAM schedulers directly into the Zig muscle.
There is no VC fluff here, and I'm not making wild claims about AGI. I have most of the spec, the memory-layout invariants, and the architecture designed. I've started coding and am making good progress.
I’m looking for someone who loves low-level systems (Zig/Rust/C) or highly concurrent runtimes (Erlang) to help me build the platform. This is my second AI platform; the first one is healthy and growing.
If you are interested in bare-metal systems engineering to fix the LLM context bottleneck, I'd love to talk: email me at acowed@pm.me.
Cheers, Kendall