Author: tlowe11 Posted: Tuesday, March 28, 2023

Launch HN: Metal (YC W23) – Embeddings as a Service

Hey HN! We’re Taylor, James and Sergio – the founders of Metal (https://www.getmetal.io/). You can think of Metal as embeddings as a service. We help developers use embeddings without needing to build out infrastructure, storage, or tooling. Here’s a 2-minute overview: https://www.loom.com/share/39fb6df7fd73469eaf20b37248ceed0f

If you’re unfamiliar with embeddings, they are representations of real-world data expressed as vectors, positioned so that similar data ends up close together. Comparing those vectors lets you derive meaning from the data, and they can be used to build things like semantic search, recommender systems, clustering analysis, classification, and more.
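
If that feels abstract, here is a toy example (not Metal-specific) of comparing embeddings with cosine similarity in Python; the vectors are made up and far smaller than what real models produce:

    import numpy as np

    def cosine_similarity(a, b):
        # 1.0 means the vectors point the same way; values near 0 mean unrelated.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy 3-dimensional embeddings; real models output hundreds or thousands of dims.
    dog = np.array([0.9, 0.1, 0.0])
    puppy = np.array([0.8, 0.2, 0.1])
    invoice = np.array([0.0, 0.1, 0.95])

    print(cosine_similarity(dog, puppy))    # high: semantically close
    print(cosine_similarity(dog, invoice))  # low: unrelated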

Working at companies like Datadog, Meta, and Spotify, we found it frustrating to build ML apps. Lack of tooling, infrastructure, and proper abstraction made working with ML tedious and slow. To get features out the door we’ve had to build data ingestion pipelines from scratch, manually maintain live customer datasets, build observability to measure drift, manage no-downtime deployments, and the list goes on. It took months to get simple features in front of users and the developer experience was terrible.

OpenAI, Hugging Face and others have brought models to the masses, but the developer experience still needs to improve. To actually use embeddings, hitting APIs like OpenAI's is just one piece of the puzzle. You also need to figure out storage, create indexes, maintain data quality through fine-tuning, manage versions, build operations on top of your data, and create APIs to consume it. All of this friction makes it a pain to ship live applications.

Metal solves these problems by providing an end-to-end platform for embeddings. Here’s how it works:

Data In: You send data to our system via our SDK or API. Data can be text, images, PDFs, or raw embeddings. When data hits our pipeline, we preprocess it by extracting text from documents and chunking when necessary. We then generate embeddings using the selected model. If the index has a fine-tuning transformation, we transform each embedding into the new vector space so it matches the target data. We then store the embeddings in cold storage for any needed async jobs.
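
To make the ingestion step concrete, here is a minimal sketch of the kind of chunk-and-embed work that happens once data hits a pipeline like this. It assumes OpenAI's text-embedding-ada-002 as the selected model and the pre-1.0 openai Python client; the chunk sizes and file name are illustrative, not our production values:

    import openai

    openai.api_key = "sk-..."  # your OpenAI key

    def chunk(text, size=500, overlap=50):
        # Naive fixed-size chunking with overlap; real chunkers are usually
        # sentence- or token-aware.
        step = size - overlap
        return [text[i:i + size] for i in range(0, len(text), step)]

    def embed_chunks(chunks, model="text-embedding-ada-002"):
        # One API call embeds the whole batch of chunks.
        resp = openai.Embedding.create(input=chunks, model=model)
        return [item["embedding"] for item in resp["data"]]

    document = open("product_description.txt").read()
    vectors = embed_chunks(chunk(document))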

From there we index the embeddings for querying. We use HNSW right now, but are planning to support FLAT indexes as well. We currently index in Redis, but plan to make this configurable and provide more options for datastores.
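
For the curious, creating an HNSW vector index in Redis (via RediSearch) looks roughly like the sketch below; the index name, key prefix, and dimension are illustrative, not our actual schema:

    import numpy as np
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # HNSW vector index over hashes with the "doc:" prefix.
    # DIM must match the embedding model's output size (1536 for ada-002).
    r.execute_command(
        "FT.CREATE", "embeddings_idx",
        "ON", "HASH", "PREFIX", "1", "doc:",
        "SCHEMA",
        "embedding", "VECTOR", "HNSW", "6",
        "TYPE", "FLOAT32", "DIM", "1536", "DISTANCE_METRIC", "COSINE",
    )

    # Store one embedding as raw float32 bytes alongside its text.
    vec = np.random.rand(1536).astype(np.float32)
    r.hset("doc:1", mapping={"embedding": vec.tobytes(), "text": "hello world"})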

Data Out: We provide querying endpoints that hit the indexes and return the approximate nearest neighbors (ANN). For fine-tuned indexes, we generate the query embedding with the base model and then transform it into the fine-tuned vector space during the pre-query phase.
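
Querying that kind of index is a KNN search. Here is an illustrative version against the hypothetical Redis index from the previous sketch, not our exact implementation:

    import numpy as np
    import redis
    from redis.commands.search.query import Query

    r = redis.Redis(host="localhost", port=6379)

    # Find the 5 approximate nearest neighbors of a query embedding.
    query_vec = np.random.rand(1536).astype(np.float32)
    q = (
        Query("*=>[KNN 5 @embedding $vec AS score]")
        .sort_by("score")
        .return_fields("text", "score")
        .dialect(2)
    )
    results = r.ft("embeddings_idx").search(q, query_params={"vec": query_vec.tobytes()})
    for doc in results.docs:
        print(doc.text, doc.score)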

Additionally, we provide methods to run clustering jobs on the stored embeddings, with visualizations in the UI. We are experimenting with zero-shot classification, embedding the classes and matching each embedding to its closest class, which lets us provide a “classify” method in our SDK. We would love feedback on what other async job types would be useful!
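
The zero-shot classification experiment boils down to something like the following sketch, where the class labels and random vectors are placeholders for label embeddings produced by the same model as the data:

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    # Hypothetical class labels, each embedded with the same model as the data.
    class_names = ["billing question", "bug report", "feature request"]
    class_embeddings = np.stack([normalize(np.random.rand(1536)) for _ in class_names])

    def classify(embedding):
        # Assign the embedding to the class whose embedding is closest
        # (highest cosine similarity, since everything is normalized).
        sims = class_embeddings @ normalize(embedding)
        return class_names[int(np.argmax(sims))]

    print(classify(np.random.rand(1536)))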

Examples of what users have built so far include embedding product catalogs for improved similarity search, personalized in-app messaging with user behavior clusters, and similarity search on images for content creators.

Metal has a free tier that anyone can use, a developer tier for $20/month, and an enterprise tier with custom pricing. We’re currently building an open source product that will be released soon.

Most importantly, we’re sharing Metal with the HN community because we want to build the best developer experience possible, and the only metric we care about is live apps on prod. We’d love to hear your feedback, experiences with embeddings, and your ideas for how we can improve the product. Looking forward to your comments, thank you!
