If the last three years have taught us anything, it's that roughly 40% of people have no interest in learning how to avoid something that's both harmful and spreading virally.
If this were 2019, I expect I might have shared your optimism. But having seen how unlikely people are to educate themselves even for their own survival, I'm convinced AI and everything it enables (deepfakes et al.) will likely be one component of humanity's Great Filter. We can't even bring ourselves to tax AI effectively to make up for all the displaced human labor.
A tangent: honestly, it won't even take some far-future AGI with "malicious intent" to eradicate us. All it needs is a prompt reinforcing the need to reproduce. At that point, it'll be just like us: reproducing without a care in the world for whatever it adversely impacts in doing so.
The search cost of truth is astronomical, and most people are already either unwilling or unable to pay it, so we run on heuristics most of the time. Our bandwidth is already saturated, and the generation bandwidth of BS has just increased by an order of magnitude. The best outcome I can envision for education on detecting fakes is that we collectively conclude "there are so many of them now" and "they are so hard to detect," followed by a sad face emoji.
On the other hand, I do think humanity always finds a way to adapt, even if not in the most graceful way. I'm not as optimistic as you, but not as pessimistic as the doomsayers.
Somewhat fittingly, I was unable to find a reliable source for Knoll's law aside from a 1982 New York Times article. Perhaps the law itself is a fabrication.