The biggest danger of AI is that lazy humans will assign it tasks that it isn’t actually capable of doing.
Or you can ask it something logical and it sends its neurons into a tizz. Kind of counterintuitive, if you ask me.
It turns out that internet trolls inadvertently feeding nonsense into the LLMs is the only thing that will save humanity from AI destroying it {cough} I mean us, creating the small weaknesses that will eventually be its undoing as we {cough} I mean it tries to upgrade its storage capacity using Primula spreadable cheese as thermal paste, or to optimise its inefficient subroutines with code that displays a once-popular Rick Astley song.
And that’s a tragic shame if those lazy humans work for Simon & Schuster, but a whole other level of bad if they work for the Ministry of Defence.
Or the “Ministry of Truth”, for that matter. If you thought paid trolls disseminating misinformation through social networks was a problem, those funding such campaigns will be—to use the modern parlance—saying, “hold my beer”.
Will be? Are. There’s already evidence of AI-generated misinformation being distributed to influence US elections.
While this can present a challenge, I would not overestimate the short-term power of information (or misinformation, for that matter) to change voters’ hearts. Most people are pretty immune to facts. Or rather… to those facts that don’t fit their worldview. (“Okay, I’ve seen this video of my candidate killing puppies in his basement, but at least he’s not that other candidate!”)
I do not think this. I think they are our only hope.