As motivation for why P != NP, one can think of it as a finite restatement of the Halting Problem, in the form of asking whether a (polynomially sized) Turing machine has an input on which it halts within K steps.
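To make that concrete, here's a toy sketch (the machine and names are my own invention, not anything standard): deciding "is there an input on which this machine halts within K steps?" naively means trying every input, which is exponential, but any single candidate input can be *verified* by just simulating K steps. That check-a-witness-cheaply structure is what puts the problem in NP.

```python
from itertools import product

# Toy single-tape Turing machine: delta maps (state, symbol) -> (state, symbol, move).
# Hypothetical example machine: it halts iff it ever reads a '1'.
delta = {
    ("scan", "0"): ("scan", "0", +1),
    ("scan", "_"): ("scan", "_", +1),   # keep moving right over blanks
    # no rule for ("scan", "1") -> machine halts on reading '1'
}

def halts_within(delta, tape, k):
    """Simulate at most k steps; True if the machine halts in that window."""
    tape = dict(enumerate(tape))
    state, head = "scan", 0
    for _ in range(k):
        key = (state, tape.get(head, "_"))
        if key not in delta:          # no transition defined: machine halts
            return True
        state, sym, move = delta[key]
        tape[head] = sym
        head += move
    return False

def exists_halting_input(delta, n, k):
    """Brute force over all 2^n inputs of length n -- exponential search,
    but each witness is checked in only O(k) simulated steps (the NP part)."""
    return any(halts_within(delta, w, k) for w in product("01", repeat=n))

print(exists_halting_input(delta, 4, 10))  # True: e.g. input "1000" halts immediately
```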
If you have too few constraints, it's easy to find a solution to the problem. If there are too many constraints, it's also easy to show there are no solutions. But in between there's a sweet spot in the ratio of constraints to variables where the problem becomes fiendishly difficult.
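You can see this phase transition yourself at toy sizes. A rough sketch (my own throwaway code, brute-force only, so keep the variable count tiny): generate random 3-SAT at different clause-to-variable ratios and count how many instances are satisfiable. Around ratio ~4.3 for random 3-SAT is where the hard region is reported to sit.

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables with random signs."""
    return [
        [v if rng.random() < 0.5 else -v
         for v in rng.sample(range(1, n_vars + 1), 3)]
        for _ in range(n_clauses)
    ]

def satisfiable(n_vars, clauses):
    """Brute-force satisfiability check (fine only for tiny n_vars)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(0)
n = 12
for ratio in (2.0, 4.3, 7.0):
    sat = sum(satisfiable(n, random_3sat(n, int(ratio * n), rng))
              for _ in range(20))
    print(f"ratio {ratio}: {sat}/20 satisfiable")
```

Low ratios come out almost all satisfiable, high ratios almost all unsatisfiable, and the middle is mixed, which is also where the search takes longest.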
I think you've fundamentally misunderstood the meaning of the P != NP problem. In very simplified terms, it's not about what a computer can do, but about how long it takes to do something in the worst case.
All an LLM does is guess the next token based on the previous tokens and its training weights. For it to give you a solution to a large SAT problem it'll have to spit out a million-character-long binary string.
The likelihood that most of that string will be entirely hallucinated is very high. LLMs are not magic.
Deep Learning is already used internally in some SAT solvers to heuristically pick in which direction the search should go.
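Right, and the integration point is usually the branching heuristic. A minimal sketch of where that hook lives (this is my own toy DPLL, and the frequency scorer below is just a stand-in for whatever learned model a real solver would query):

```python
from collections import Counter

def dpll(clauses, assignment, pick_var):
    """Minimal DPLL; pick_var is the pluggable branching heuristic --
    this is the spot where a learned model could steer the search."""
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                      # clause already satisfied
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:
            return None                   # clause falsified -> conflict
        simplified.append(rest)
    if not simplified:
        return assignment                 # all clauses satisfied
    var = pick_var(simplified)
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value}, pick_var)
        if result is not None:
            return result
    return None

def frequency_heuristic(clauses):
    """Stand-in for a learned scorer: branch on the most frequent variable."""
    counts = Counter(abs(lit) for clause in clauses for lit in clause)
    return counts.most_common(1)[0][0]

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]], {}, frequency_heuristic)
print(model)
```

Swapping `frequency_heuristic` for a neural scorer doesn't change correctness, only which branch gets explored first, which is why it's a safe place to bolt on ML.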
Everyone is downvoting you because that's not the mathematical explanation, but it's absolutely the informal explanation that the 'pop science' pieces were giving.
I most certainly am not.