- Star neuroscientist may have manipulated data to support a major stroke trial (science.org)
- Seamless: Meta's New Speech Models (ai.meta.com)
- HTML hacks that shaped the Internet (tedium.co)
- Code is run more than read (olano.dev)
- James-Lange Theory (en.wikipedia.org)
- Exactly How Much Life Is on Earth? (nytimes.com)
- Marker: Convert PDF to Markdown quickly with high accuracy (github.com)
- "Useless Ruby sugar": Endless (one-line) methods (zverok.space)
- Booking.com hackers increase attacks on customers (bbc.co.uk)
- RavynOS: Finesse of macOS. Freedom of FreeBSD (ravynos.com)
- Is the Turing Test Dead? (spectrum.ieee.org)
- Advent of Code 2023 is nigh (adventofcode.com)
- Buggy animation in Atlassian Bitbucket is wasting half a CPU core at all times (thehftguy.com)
- Segment Anything Model (SAM) Visualized (flowforward.simple.ink)
- Return to office is 'dead,' Stanford economist says (cnbc.com)
- Chip machine maker ASML names Christophe Fouquet as new CEO (nltimes.nl)
- The Persistent Myth That Most Americans Are Miserable at Work (theatlantic.com)
- Shane MacGowan has died (bbc.com)
- The Nineteenth-Century Banjo (daily.jstor.org)
- The Intel 386 processor die: the clock circuit (righto.com)
- Visual Anagrams: Generating optical illusions with diffusion models (dangeng.github.io)
- Sandra Day O'Connor, First Woman on the Supreme Court, Is Dead at 93 (nytimes.com)
- Are Open-Source Large Language Models Catching Up? (arxiv.org)
- Anduril Builds a Tiny, Reusable Fighter Jet That Blows Up Drones (bloomberg.com)
- Beam Me Out of This Death Trap, Scotty (1980) (iasa-intl.com)
- Turbo Pascal Turns 40 (blog.marcocantu.com)
Show HN: Bi-directional sync between Postgres and SQLite
Conrad and I (Ralf) have been working on our sync engine since 2009, originally as part of a full-stack app platform. That version of the system is still used in production worldwide, and we've learnt a lot from its use cases and scaling. About a year ago we started spinning off PowerSync as a standalone product designed to be stack-agnostic.
If you’d like to see a simple demo, check out the pebbles widget on the landing page here: https://www.powersync.com/
We wrote about our architecture and design philosophy here: https://www.powersync.com/blog/introducing-powersync-v1-0-po...
This covers, among other things, how we designed the system for scalable dynamic partial replication, why we use a server-authority architecture based on an event log instead of CRDTs for merging changes, and our approach to consistency.
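To make the contrast with CRDTs concrete, here is a toy sketch of the event-log model (illustrative only; these names are hypothetical, not our actual API). The server assigns a total order to all writes, and clients converge by replaying the log past their last checkpoint, so no commutative merge logic is needed:

    # Toy model of server-authoritative sync over an ordered event log.
    # Hypothetical names for illustration; not the PowerSync API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Event:
        seq: int              # position in the server's total order
        table: str
        row_id: str
        data: Optional[dict]  # full row state after the change; None = delete

    class ServerLog:
        """The server is the single authority: it orders every write."""
        def __init__(self) -> None:
            self.events: list[Event] = []

        def append(self, table: str, row_id: str, data: Optional[dict]) -> Event:
            ev = Event(len(self.events) + 1, table, row_id, data)
            self.events.append(ev)
            return ev

        def since(self, checkpoint: int) -> list[Event]:
            # Clients pull only events newer than their checkpoint.
            return [e for e in self.events if e.seq > checkpoint]

    class Client:
        """A local replica (think: the SQLite database) plus a checkpoint."""
        def __init__(self) -> None:
            self.rows: dict[tuple[str, str], dict] = {}
            self.checkpoint = 0

        def pull(self, server: ServerLog) -> None:
            for ev in server.since(self.checkpoint):
                key = (ev.table, ev.row_id)
                if ev.data is None:
                    self.rows.pop(key, None)
                else:
                    self.rows[key] = ev.data  # server order wins; no CRDT merge
                self.checkpoint = ev.seq

    # Any two clients converge because they replay the same ordered log.
    server = ServerLog()
    a, b = Client(), Client()
    server.append("todos", "1", {"text": "buy milk", "done": False})
    a.pull(server)
    server.append("todos", "1", {"text": "buy milk", "done": True})
    b.pull(server)
    a.pull(server)
    assert a.rows == b.rows

The real system also handles uploading client-side writes, partial replication, and consistency checkpoints, but the core idea is the same: one authoritative ordered log rather than merging concurrent replicas.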
Our docs can be found here: https://docs.powersync.com/
We would love to hear your feedback! - Ralf, Conrad, Kobie, Phillip and team
Show HN: Australian Acoustic Observatory Search
The Australian Acoustic Observatory (https://acousticobservatory.org/) has 360 microphones across the continent and over 2 million hours of audio. However, none of it is labeled. We want to make this enormous repository useful to researchers. We have found that researchers are often looking for 'hard' signals: specific call types, birds with very little available training data, and so on. So we built an acoustic-similarity search tool that lets researchers provide an example of what they're looking for, which we then match against embeddings from the A2O dataset.
Here are some fun examples:
Laughing Kookaburra: <https://search.acousticobservatory.org/search/index.html?q=h...>
Pacific Koel: <https://search.acousticobservatory.org/search/index.html?q=h...>
Chiming Wedgebill: <https://search.acousticobservatory.org/search/index.html?q=h...>
How it works, in a nutshell: We use audio source separation (<https://blog.research.google/2022/01/separating-birdsong-in-...>) to pull apart the A2O data, and then run an embedding model (<https://arxiv.org/abs/2307.06292>) on each channel of the separated audio to produce a 'fingerprint' of the sound. All of this is put in a vector database with a link back to the original audio. When someone performs a search, we embed their audio, and then match against all of the embeddings in the vector database.
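For the curious, here's a toy sketch of that query path (FAISS stands in for the vector database; embed_audio is a random placeholder for the real separation + embedding pipeline, and the example.org links are hypothetical):

    # Toy sketch: embed a query clip, then nearest-neighbour match against
    # precomputed corpus embeddings. FAISS stands in for the vector DB.
    import numpy as np
    import faiss

    DIM = 1280  # embedding size (placeholder value)

    def embed_audio(waveform: np.ndarray) -> np.ndarray:
        # Placeholder: the real system runs source separation, then an
        # audio embedding model on each separated channel.
        rng = np.random.default_rng(abs(hash(waveform.tobytes())) % (2**32))
        v = rng.standard_normal(DIM).astype("float32")
        return v / np.linalg.norm(v)  # unit norm: inner product == cosine

    # Index the corpus once, keeping a parallel list that links each
    # embedding back to its original recording.
    recordings = [np.random.rand(16000).astype("float32") for _ in range(1000)]
    links = [f"https://example.org/recording/{i}" for i in range(len(recordings))]
    index = faiss.IndexFlatIP(DIM)  # exact inner-product (cosine) search
    index.add(np.stack([embed_audio(w) for w in recordings]))

    # At query time: embed the researcher's example and return the closest matches.
    query = np.random.rand(16000).astype("float32")
    scores, ids = index.search(embed_audio(query)[None, :], 5)
    for score, i in zip(scores[0], ids[0]):
        print(f"{score:.3f}  {links[i]}")

In production we use the real embedding model and a scalable vector index, but the search itself is just this: one embedding per separated channel, and a nearest-neighbour lookup with links back to the source audio.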
Right now, about 1% of the A2O data is indexed (the first minute of every recording, evenly sampled across the day). We're looking to get initial feedback and will then continue to iterate and expand coverage.
(Oh, and here's a bit of further reading: https://blog.google/intl/en-au/company-news/technology/ai-ec... )