I’ve said this before, but it bears repeating.
A chatbot, not being sentient, is incapable of political bias. The people who define its training set and tell it which responses are “good” most certainly are capable of it.
The distinction matters because there is a very strong tendency to attribute all sorts of nefarious behavior to “impartial” algorithms, as if the people who designed them had no say in the matter. It’s in the interest of all humans – liberal or conservative – to resist this tendency.
Obviously, it is what people feed into algorithms that is the problem, not the algorithms themselves.
That was discussed in this video.
If someone builds a bad car – wheels randomly flying off at speed, brakes sometimes working and sometimes not – people will call it a bad car, despite the fact that it doesn’t want to be that way and didn’t build itself. Obviously. That distinction matters later in court, not so much when the car crushes you.
The distinction also matters when writing the laws that govern liability for bad cars, which is a prerequisite for dragging the manufacturer into court.
I’m actually glad that we’re still at this stage of worrying about semantics and having someone (human) to blame. That won’t last for too long, though. The really interesting problems come next.