Control over the story is always the fundamental political battle. What changes is the technology: how do people learn the stories through which they make sense of their world? The struggle over the story is nothing new. Sometimes, though, the story gets trapped. Some group manages to take control of the story, and then uses that control to maintain its position. Whatever struggle remains possible has to go underground. Centuries can go by in which any deviation from the approved doctrine is subject to the most severe penalties. It's a form of societal suspended animation, and a risky one, because such a rigid structure cannot respond effectively to external threats.
Is our present revolution one where rigidity is melting away, or where rigidity is setting in? Probably we have not quite reached that fork in the road. We seem to be more in a metastable situation, where we might fall one way or another, but we have not yet fallen. How might we work to maintain social vitality, to keep multiple stories in play, to cultivate sensitivity and responsiveness?
One approach is some kind of central regulation. The various social media platforms already embody centralized power, so platforms could be required, for example, to balance their coverage of opposing views.
I suspect that central regulation can't work. Power corrupts! We need a distributed approach!
Fight bots with bots! That's my idea! Yeah, we are being inundated with a tsunami of misinformation. We need a counter-tsunami!
I don't know enough about the dynamics of social media, viral memes, and so on to elaborate this idea in any real detail, but the basic elements are simple enough. The basic pattern we seem to be falling into is roughly: 1) hate mesmerizes people; 2) hate divides people; 3) divided, mesmerized people are disempowered; 4) disempowered people can easily be exploited. To counter this pattern, we need bots that dissolve the bond of mesmerization, bots that bring people together, bots that empower people.
Bots are simply social media agents. They can post messages, reply to messages, and react to messages. Bots can work in teams, and can also cooperate with human teams. Bots are inevitably a bit stupid, but their great virtue is their low cost. The scale of the problem is huge: consider how many bots are already active on social media, spreading all sorts of hate-filled misinformation. They're cheap, and the folks playing these games have deep pockets. The only way to fight this battle effectively is to counter with a force of similar magnitude.
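To make this concrete, here is a rough sketch of what such an agent might look like in code. The Platform, Message, and Bot names are placeholders I've made up, and the Platform here is just an in-memory stand-in; a real bot would of course talk to an actual platform's API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    author: str
    text: str
    reactions: List[str] = field(default_factory=list)
    replies: List["Message"] = field(default_factory=list)

class Platform:
    """Toy stand-in for a social media service: just a shared feed."""
    def __init__(self):
        self.feed: List[Message] = []

    def post(self, message: Message) -> None:
        self.feed.append(message)

class Bot:
    """An agent that can post, reply to, and react to messages."""
    def __init__(self, name: str, platform: Platform):
        self.name = name
        self.platform = platform

    def post(self, text: str) -> Message:
        msg = Message(author=self.name, text=text)
        self.platform.post(msg)
        return msg

    def reply(self, target: Message, text: str) -> Message:
        msg = Message(author=self.name, text=text)
        target.replies.append(msg)
        return msg

    def react(self, target: Message, reaction: str = "like") -> None:
        target.reactions.append(reaction)

# Bots are cheap: a "team" is just a list of them acting in coordination.
platform = Platform()
team = [Bot(f"helper-{i}", platform) for i in range(100)]
story = team[0].post("Here is a more accurate account of what happened...")
for bot in team[1:]:
    bot.react(story)  # coordinated activity the platform may then amplify
```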
One fundamental component of the battle is to understand the terrain. One function of a bot army is scouting. Stories are introduced and spread on very many different internet platforms. How exactly users discover these platforms... there must be many ways. I am an old guy and so I am very out of touch with what's really happening. I know a lot of on-line games also function as social media platforms. Surely bots play on-line games too, and as artificial intelligence technology advances, they will get harder and harder to distinguish from human players. And anyway the bots don't have to be too good at fooling people. If they are just acting as spies, they can pick up a bit of information here and a bit there. The objective is to discover trends early.
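What would "discovering trends early" look like mechanically? Here is a toy sketch of a scout bot that watches whatever stream of messages it can sample and flags phrases whose frequency has suddenly spiked. The window size and spike ratio are arbitrary assumptions, and real detection would surely need to be far more sophisticated than phrase matching.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class TrendAlert:
    phrase: str
    recent_count: int
    prior_count: int

class ScoutBot:
    """Flags phrases that appear much more often now than a window ago."""
    def __init__(self, window: int = 1000, spike_ratio: float = 3.0):
        self.spike_ratio = spike_ratio
        self.recent = deque(maxlen=window)  # newest messages seen
        self.prior = deque(maxlen=window)   # the window before that

    def observe(self, text: str) -> None:
        # When the recent window is full, its oldest message rolls into prior.
        if len(self.recent) == self.recent.maxlen:
            self.prior.append(self.recent[0])
        self.recent.append(text.lower())

    def trends(self, phrases):
        alerts = []
        for phrase in phrases:
            now = sum(phrase in t for t in self.recent)
            before = sum(phrase in t for t in self.prior)
            if now >= self.spike_ratio * max(before, 1):
                alerts.append(TrendAlert(phrase, now, before))
        return alerts

# Example with a tiny made-up stream and one watched phrase.
scout = ScoutBot(window=4, spike_ratio=2.0)
for text in ["hello", "THEY are behind it", "they are behind it!", "They are behind it"]:
    scout.observe(text)
print(scout.trends(["they are behind it"]))
```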
Once any sort of hate-inducing misinformation meme is discovered to be spreading, a variety of counteractions can be triggered. Alternate stories can be introduced, the more accurate the better. Existing stories that counter the misinformation can be given positive reactions, or reshared. The social media platforms will amplify whatever is getting a lot of activity, so simply creating a lot of activity related to alternate stories will tend to quash the misinformation.
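Continuing the same toy sketch (and reusing the Bot and Message classes from above), here is roughly how the scout's alerts could trigger those counteractions. The counter_stories table, which maps a flagged meme to a prepared, more accurate story and to existing posts that already counter it, is purely hypothetical; the point is only the shape of the dispatch.

```python
def counteract(alerts, team, counter_stories):
    """Dispatch the bot team in response to the scout's alerts.

    counter_stories maps a flagged phrase to a pair (new_story_text,
    existing_posts), where existing_posts are Messages that already
    counter the misinformation.
    """
    for alert in alerts:
        if alert.phrase not in counter_stories:
            continue  # nothing prepared for this meme yet
        new_text, existing_posts = counter_stories[alert.phrase]
        # 1) Introduce an alternate, more accurate story.
        story = team[0].post(new_text)
        # 2) Pile positive reactions onto the new story and onto posts that
        #    already counter the misinformation, so the platform's own
        #    ranking machinery amplifies them.
        for bot in team[1:]:
            bot.react(story)
            for msg in existing_posts:
                bot.react(msg)
```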
New media are almost always destabilizing. We just need to learn to manage what we have unleashed!