Star neuroscientist may have manipulated data to support a major stroke trial
science.org 135 100
Seamless: Meta's New Speech Models
ai.meta.com 28 0
HTML hacks that shaped the Internet
tedium.co 165 109
Code is run more than read
olano.dev 468 226
James-Lange Theory
en.wikipedia.org 29 7
Exactly How Much Life Is on Earth?
nytimes.com 27 22
Marker: Convert PDF to Markdown quickly with high accuracy
github.com 463 74
"Useless Ruby sugar": Endless (one-line) methods
zverok.space 6 0
Booking.com hackers increase attacks on customers
bbc.co.uk 73 47
RavynOS: Finesse of macOS. Freedom of FreeBSD
ravynos.com 125 106
Is the Turing Test Dead?
spectrum.ieee.org 20 18
Advent of Code 2023 is nigh
adventofcode.com 157 80
Buggy animation in Atlassian Bitbucket is wasting half a CPU core at all times
thehftguy.com 191 79
Segment Anything Model (Sam) Visualized
flowforward.simple.ink 44 10
Return to office is 'dead,' Stanford economist says
cnbc.com 141 146
Chip machine maker ASML names Christophe Fouquet as new CEO
nltimes.nl 41 18
The Persistent Myth That Most Americans Are Miserable at Work
theatlantic.com 15 19
Shane MacGowan has died
bbc.com 191 57
The Nineteenth-Century Banjo
daily.jstor.org 8 5
The Intel 386 processor die: the clock circuit
righto.com 108 9
Visual Anagrams: Generating optical illusions with diffusion models
dangeng.github.io 721 62
Sandra Day O'Connor, First Woman on the Supreme Court, Is Dead at 93
nytimes.com 8 0
Are Open-Source Large Language Models Catching Up?
arxiv.org 292 178
Anduril Builds a Tiny, Reusable Fighter Jet That Blows Up Drones
bloomberg.com 16 3
Beam Me Out of This Death Trap, Scotty (1980)
iasa-intl.com 79 46
Turbo Pascal Turns 40
blog.marcocantu.com 377 221

Show HN: Bi-directional sync between Postgres and SQLite
Hi HN,
Today we’re launching PowerSync, a Postgres<>SQLite bi-directional sync engine that enables an offline-first app architecture. It currently supports Flutter, React Native and web (JavaScript) using Wasm SQLite in the browser, with more client SDKs on the way.
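To make the offline-first idea concrete, here is a minimal client-side sketch, assuming the better-sqlite3 package and a made-up sync endpoint; it is illustrative only and not the PowerSync SDK API. The app always reads and writes its local SQLite database, and every local mutation is also queued so a background task can replay it to the backend when connectivity returns.

    // Minimal offline-first write path (hypothetical sketch, not the PowerSync SDK API).
    import Database from 'better-sqlite3';

    const db = new Database('local.db');
    db.exec(`
      CREATE TABLE IF NOT EXISTS todos (id TEXT PRIMARY KEY, title TEXT, done INTEGER);
      CREATE TABLE IF NOT EXISTS upload_queue (seq INTEGER PRIMARY KEY AUTOINCREMENT, op TEXT);
    `);

    // Local write: committed to SQLite immediately, so the UI keeps working with no network.
    export function addTodo(id: string, title: string): void {
      const write = db.transaction(() => {
        db.prepare('INSERT INTO todos (id, title, done) VALUES (?, ?, 0)').run(id, title);
        db.prepare('INSERT INTO upload_queue (op) VALUES (?)')
          .run(JSON.stringify({ type: 'put', table: 'todos', row: { id, title, done: 0 } }));
      });
      write();
    }

    // Background sync: drain queued mutations to the backend when connectivity allows.
    // The endpoint is made up for illustration.
    export async function drainUploadQueue(endpoint: string): Promise<void> {
      const rows = db
        .prepare('SELECT seq, op FROM upload_queue ORDER BY seq')
        .all() as { seq: number; op: string }[];
      for (const row of rows) {
        await fetch(endpoint, { method: 'POST', body: row.op }); // retried on the next run if it throws
        db.prepare('DELETE FROM upload_queue WHERE seq = ?').run(row.seq);
      }
    }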
Conrad and I (Ralf) have been working on our sync engine since 2009, originally as part of a full-stack app platform. That version of the system is still used in production worldwide, and we’ve learnt a lot from its use cases and scaling. About a year ago we started spinning off PowerSync as a standalone product designed to be stack-agnostic.
If you’d like to see a simple demo, check out the pebbles widget on the landing page here: https://www.powersync.com/
We wrote about our architecture and design philosophy here: https://www.powersync.com/blog/introducing-powersync-v1-0-po...
This covers, amongst other things, how we designed the system for scalable dynamic partial replication, why we use a server-authority architecture based on an event log instead of CRDTs for merging changes, and our approach to consistency.
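As a rough illustration of the server-authority / event-log idea (my own sketch of the design described above, not PowerSync's actual implementation): the server assigns every committed change a position in a single ordered log, and a client syncs by sending the last position it has applied and replaying everything after it, so all replicas converge on the server's ordering without per-field CRDT merge rules.

    // Conceptual sketch of server-authority sync over an ordered event log
    // (my reading of the design above, not PowerSync's actual implementation).
    type Op = { table: string; id: string; data: Record<string, unknown> | null };
    type LogEntry = { seq: number; op: Op };

    class EventLog {
      private entries: LogEntry[] = [];
      private nextSeq = 1;

      // Authoritative write path: the change is validated and applied to the primary
      // database first (elided here), then appended to the log with a global position.
      commit(op: Op): number {
        const seq = this.nextSeq++;
        this.entries.push({ seq, op });
        return seq;
      }

      // A client checkpoint is simply "the last seq I have applied".
      since(checkpoint: number): LogEntry[] {
        return this.entries.filter((e) => e.seq > checkpoint);
      }
    }

    // Client side: apply entries strictly in log order and advance the checkpoint.
    // Because every replica applies the same operations in the same order, they all
    // converge on the same state.
    function applyLog(entries: LogEntry[], apply: (op: Op) => void): number {
      let checkpoint = 0;
      for (const entry of entries) {
        apply(entry.op);
        checkpoint = entry.seq;
      }
      return checkpoint;
    }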
Our docs can be found here: https://docs.powersync.com/
We would love to hear your feedback! - Ralf, Conrad, Kobie, Phillip and team
Show HN: Australian Acoustic Observatory Search
The Australian Acoustic Observatory (https://acousticobservatory.org/) has 360 microphones across the continent and over 2 million hours of audio, but none of it is labeled. We want to make this enormous repository useful to researchers. We have found that researchers are often looking for 'hard' signals: specific call types, birds with very little available training data, and so on. So we built an acoustic-similarity search tool that lets researchers provide an example of what they're looking for, which we then match against embeddings from the A2O dataset.
Here are some fun examples!
Laughing Kookaburra: <https://search.acousticobservatory.org/search/index.html?q=h...>
Pacific Koel: <https://search.acousticobservatory.org/search/index.html?q=h...>
Chiming Wedgebill: <https://search.acousticobservatory.org/search/index.html?q=h...>
How it works, in a nutshell: We use audio source separation (<https://blog.research.google/2022/01/separating-birdsong-in-...>) to pull apart the A2O data, and then run an embedding model (<https://arxiv.org/abs/2307.06292>) on each channel of the separated audio to produce a 'fingerprint' of the sound. All of this is put in a vector database with a link back to the original audio. When someone performs a search, we embed their audio, and then match against all of the embeddings in the vector database.
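For anyone curious what the matching step looks like, here is a toy sketch of the search side; the real embedding model and vector database are stood in for by an in-memory index and plain cosine similarity, and all names here are hypothetical. Each separated channel is stored as one embedding with a link back to the source recording, and a query embedding is compared against all of them.

    // Toy sketch of the similarity search step: hypothetical names, in-memory index,
    // plain cosine similarity standing in for the real vector database.
    type IndexedClip = { recordingUrl: string; offsetSec: number; embedding: Float32Array };

    function cosineSimilarity(a: Float32Array, b: Float32Array): number {
      let dot = 0;
      let normA = 0;
      let normB = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
      }
      return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Rank every indexed clip against the query embedding and return the top k,
    // each carrying a link back to the original A2O recording.
    function search(query: Float32Array, index: IndexedClip[], k = 10): IndexedClip[] {
      return index
        .map((clip) => ({ clip, score: cosineSimilarity(query, clip.embedding) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, k)
        .map((scored) => scored.clip);
    }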
Right now, about 1% of the A2O data is indexed (the first minute of every recording, evenly sampled across the day). We're looking to get initial feedback and will then continue to iterate and expand coverage.
(Oh, and here's a bit of further reading: https://blog.google/intl/en-au/company-news/technology/ai-ec... )