
Comments

PaulHoule 5d
I was working for somebody once who seemed to think LeCun was an uninspired grind and I'm like no, LeCun won a contest to make a handwritten digit recognizer for the post office. LeCun wrote a review paper on text classification that got me started building successful text classifiers and still influences my practice. LeCun is one of the few academics who I feel almost personally taught me how to do something challenging.

But the A.I. hype is out of hand. "A.I. Safety" research is the worst of it, as it suggests this technology is so powerful that it's actually dangerous. The other day I almost wrote a comment on HN about a post from LessWrong where the author apologized at the beginning of an article critical of the intelligence explosion hypothesis, because short of Scientology or the LaRouche Youth Movement it is hard to find a place where independent thought is so unwelcome.

Let's hope "longtermism" and other A.I. hype goes the way of "Web3".

dekhn 5d
I replied to LeCun's claims about their latest protein structure predictor and he immediately got defensive. The problem is that I'm an expert in that realm and he is not. My statements were factual (pointing out real limitations in their system along with the lack of improvement over AlphaFold) and he responded by regurgitating the same misleading claims everybody in ML who doesn't understand biology makes. I've seen this pattern repeatedly.

It's too bad because you really do want leaders who listen to criticism carefully and don't immediately get defensive.

gtmitchell 5d
As a bright-eyed science undergraduate, I went to my first conference thinking how amazing it would be to have all these accomplished and intelligent people in my field all coming together to share their knowledge and make the world a better place.

And my expectations were exceeded by the first speaker. I couldn't wait for 3 full days of this! Then the second speaker got up and spent his entire presentation explaining why the first speaker was an idiot, totally wrong, and his research garbage, because his own was better. That's how I found out my field of study was broken into two warring factions, who spent the rest of the conference arguing with each other.

I left the conference somewhat disillusioned, having learned the important life lesson that just because you're a scientist doesn't mean you aren't also a human, with all the lovely human characteristics that entails. And compared to this fellow, the amount of money and fame at stake in my tiny field was minuscule. I can only imagine the kinds of egos you see at play among the scientists in this article.

MikeTheGreat 5d
I came here just to ask: Could someone rewrite the HN title of this post? Currently it feels really clickbait-y.

One of the things I really like about HN is the _lack_ of clickbait titles. Some titles are more informative, some less, but overall I feel like the titles are clear, to the point, and not carefully crafted/engineered to poke the lizard part of my brain in the way that clickbait titles are.

Disclaimer: I haven't read the article so I can't propose a title myself. And with a title like this I'm not going to.

logicalmonster 5d
I'm not super familiar with the state-of-the-art technology in this space or how these demos were presented, but all of these conflicts seem like they should be resolvable if companies just put gigantic honking disclaimers on the work these AI tools produce.

If you wrote a flashing big red warning, something like the following, couldn't everybody be satisfied? "CAUTION. This technology is still very early and may produce completely incorrect or even dangerous results. Any output by this tool should be considered false and is only suitable for entertainment purposes until expert human judgement verifies the results."

robg 5d
From the title I knew it was about LeCun. Why does it seem like Meta’s culture is riven by sycophants who are unable or unwilling to make work better through criticism?

seydor 5d
This is academic TMZ

LeCun implied on Twitter that they'll get it back. I really hope so.

megaman821 5d
I kinda agree with LeCun here. Why can't companies and people just put out cool things that have faults? Now we have a tool that got pulled, not because of any concrete harm, but because of outrage over theoretical harm. It is not the tool, nor the people finding faults, but people's reactions that seem to have gone too far.

LLMscientist 5d
This is a good title because it succinctly captures the issue: LeCun hyped this work by making wildly inaccurate claims and cherry picking model outputs. Go read his original tweets about the model's capabilities. Read Facebook's own characterization of what this model could achieve.

Not only did they exaggerate and hype, but they also didn't even try to solve some of the most glaring issues. The efforts on toxicity mentioned in their paper aren't even mid. They barely put effort into measuring the issue, and they definitely didn't make any attempt to mitigate or correct it.

Toxicity isn't really the point. Here's the point. If you can't prevent a model from being overtly toxic, then why should I believe you can give any guarantee at all about the model's output? I shouldn't, because you can't.

Galactica is just another language model. It can be a useful tool. Facebook and LeCun oversold its capabilities and downplayed its issues. If they had just been honest and humble, things would've probably gone very differently.

In some sense, this is good news. The deep learning community -- and generative model work in particular -- is getting a much-needed helping of humble pie.

Hopefully we can continue publishing and hosting models without succumbing to moral panic. But the first step toward that goal is for scientists to be honest about the capabilities and limitations of their models.

----

My account is new so I am rate limited and unable to reply to replies. My response to the general vibes of replies is therefore added to the above post as an edit. Sorry.

Response about toxicity:

It's a proxy that they say they care about. I can stop there, but I'll also point out: it's not just "being nice", it's also stuff like overt defense of genocide, instructions for making bombs, etc. These are lines that no company wants their model to cross, and reasonably so. If you can't even protect Meta enough to keep the model online for more than a day or two, then why should I believe you can give any guarantee at all about the model's output in my use case? (And, again, they can't. It's a huge problem with LLMs)

Response about taking the model down:

I'm not at FB/Meta, but I think I know what happened here.

In the best case, Meta was spending a lot of valuable zero-sum resources (top-of-the-line GPUs) hosting the model. In the worst case they were setting a small fortune on fire at a cloud provider. Even at the largest companies with the most compute, there is internal competition and rationing for the types of GPUs you would need to host a Galactica-sized model. Especially in the prototype phase.

An executive decided they would rather pull the plug on model hosting than spend zero-sum resources on a public relations snafu with no clear path to revenue. It was a business decision. The criticism of Galactica, and especially of the messaging around it, was totally fair. The business decision was rational. Welcome to private sector R&D; it works a little differently from your academic lab, for better and for worse.

tinyhouse 5d
The criticism that Galactica is bad is legit. The criticism from the Twitter police that it's dangerous is not. Some people are professional complainers and somehow got to high ranks in academia. That says a lot about academia in the US.

jonstokes 5d
I don't know LeCun personally, but there's a lot of backstory here that this polemical clickbait is leaving out.

- LeCun has a history of getting mobbed by "AI ethics" types on Twitter, and in the past he was very deferential to these folks, and even left Twitter for a while. I wrote about some of that here: https://www.jonstokes.com/p/googles-colosseum

- The MIT Tech Review, which is the author's main source here apart from Twitter, is a techlash rag, and they went through a long phase where they only published anti-AI stuff from the "AI ethics" people. Most of those writers I used to follow there on this topic have since moved on to other pubs, and the EIC responsible for this mess has moved on to run WIRED. But it seems they're still publishing the same kind of stuff even with new staff and management. They have exactly one and only one editorial line on AI in general and LeCun in particular, and that is "lol AI so racist and overhyped!" It's boring and predictable.

- LeCun has a longstanding beef with Marcus, and the two treat each other pretty poorly in public. Marcus seems to have a personal axe to grind with LeCun. Given that Marcus has been leading the mob on this, it's not shocking that LeCun got crappy with him.

- Emily Bender, Grady Booch, and the other folks cited in the MIT Tech Review piece all, to a person, have exactly one line on AI, everywhere at all times and in all circumstances, and it's the same one I mentioned above. You could code a bot with a lookup table to write their tweets about literally anything AI-related.

- Yeah, LeCun is a prickly nerd who gets his back up when certain people with a history of attacking him come after him yet again. He should probably stay chill.

- "AI so overhyped" is a pose, not an argument, an investment thesis, or a career plan. But hey, you do you.

Anyway, I hate to be defending anything Meta-related, but this article is slanted trash, its sources are haters who have only one, incredibly repetitive thing to say about AI, and the author is a hater.

tylerneylon 5d
I studied machine learning at NYU, and from interacting with Yann LeCun, I can say he’s actually a nice guy. Yes, his tweet is grumpy. I still feel that the implication that Galactica should have been taken down was the worst thing happening here.

I read the MIT Technology Review article, and I was asking myself “what is an example of Galactica making a mistake?” The article could easily have quoted a specific prompt, but doesn’t. It says the model makes mistakes in terms of understanding what’s real/correct or not, but the only concrete example I see in the article is that the model will write about the history of bears in space with the implication that it’s making things up (and I believe the model does make such mistakes). I don’t think it’s a good article because it’s heavy on quoting people who don’t like the work and light on concrete details.

Does the imperfection of a language model really mean the model should not exist? This seems to be what some critics are aiming for.

belval 5d
> LeCun also approvingly links to someone else who writes, in response to AI critic Gary Marcus

The article really fails to explain that LeCun and Marcus have been trading insults for the last few years, it's hardly LeCun snapping at some random person.

stephc_int13 5d
Quite a few modern technologies have followed Moore-like exponential-shaped curves with rapid improvements and commercialisation.

But this relatively common pattern does not seem to work all the time.

Quantum Computing, Nuclear Fusion, and AI are good counterexamples.

Progress in those areas seems to follow a less than linear curve, much closer to a logarithm than an exponential.

We can only speculate about the reasons.

Are we hitting some kind of ceiling?
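
To make the contrast concrete, here is a minimal sketch (purely illustrative, made-up parameters, not measurements of any field) of how differently a Moore-like exponential curve and a logarithmic, diminishing-returns curve behave over time:

    # Illustrative only: compare a Moore-like exponential improvement curve
    # with a logarithmic, diminishing-returns one (parameters are made up).
    import math

    def exponential_progress(t, doubling_period=2.0):
        # Capability doubles every `doubling_period` years.
        return 2 ** (t / doubling_period)

    def logarithmic_progress(t, scale=10.0):
        # Each additional unit of capability takes ever longer to reach.
        return scale * math.log1p(t)

    for years in (1, 5, 10, 20, 40):
        print(years,
              round(exponential_progress(years), 1),
              round(logarithmic_progress(years), 1))

With these toy numbers the exponential curve grows by roughly six orders of magnitude over 40 years, while the logarithmic one grows only about fivefold over the same span; that gap is the kind of divergence being described here.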

ak_111 5d
I think their approach and model are interesting; the problem is that they overhyped it to the point that it would be unacceptable academically.

Their abstract says "In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge... these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community."

If I were a reviewer of this paper, I would ask them to add (if they haven't already) a significant section to the body of the paper highlighting the limitations of the model and the ways it can be misused, including examples of wrong output.

I would then ask them to rewrite the abstract to include something along the lines of: "We also highlight the limitations of the model, including its inability to distinguish fact from fiction in several instances and the ways it can be misused, and outline some ideas on how these limitations could be mitigated or overcome in the future."

dang 5d
Most of the OP is quotes from the article, which was discussed here a few days ago:

Why Meta’s latest large language model survived only three days online - https://news.ycombinator.com/item?id=33670124 - Nov 2022 (119 comments)

motohagiography 5d
What this project created was something sophisticated and powerful, but not something people wanted, and they got (rightfully) pilloried for it. Instead of shaking one's fist at the world for rejecting your brilliance, maybe the really smart ones are making the things that others actually desire, and not merely developing tech that gives them leverage over others and expecting the world to defer to this demonstration of intellectual prowess.

This whole incident was a case study for product management and startup school 101. I've made this exact same category of error in developing products, where I said, "hey, look at this thing I built that may mean you don't have to do what you do anymore!" and then was surprised when people picked it apart for "dumb" reasons that ignored the elegance of having automated some problem away.

If this model were really good, they would have used it to advance a bunch of new ideas in different disciplines before exposing it to the internet. The reality is that working at Meta/Facebook means they are too disconnected from the world they have influenced so heavily to be able to interpret real desire from the people who live in it anymore. When you are making products in response to data and no actual physical customer muse, you're pushing on a rope. I'd suggest the company has reached a stage of being post-product, where all that is left are "solutions" for institutional customers who want some kind of leverage over their userbase, but no true source of human desire.

chubot 5d
> A fundamental problem with Galactica is that it is not able to distinguish truth from falsehood, a basic requirement for a language model designed to generate scientific text

Isn't this the same problem that Github Copilot has?

Fundamentally it has no idea whether code works. It doesn't even know what the problem is.

It just spits out things that are similar to things it's seen before, including buggy code from Github repositories.

Not sure why it's so popular. I guess it helps you write status quo code faster (the status quo being buggy and slow) -- I would rather it help us write better code.

adamsmith143 5d
Yann LeCun has lots of faults, certainly with how he treats AI safety in general, but a lot of the criticism he got was saying "He's not qualified to be in the position he is in", which is actually absurd if you know anything about him. Even if you knew nothing other than the fact that he won a Turing Award, he would be qualified for basically any computing/AI/ML job on the planet.

Also the title of this post is deliberately inflammatory. Should be more like "Head of team that spent months building complex ML system annoyed when people spend undue amounts of time criticizing it."