Anybody who’s spent any amount of time logged on knows that the Internet can be a tremendously awful place. The same series of tubes that brings us movie trailers and dog pics is also used to deliver hatred and vitriol from complete strangers. And if you think that things are getting worse online, you’re right.
The Internet Health Report, a new project from Mozilla, aims to build a picture of the sanity and stability of the Internet and the three billion online souls who make it what it is. It’s a dense, difficult report, full of information, and although there are certainly some positives, it also offers some chilling predictions. This is the second year Mozilla has produced such a report, and it makes for a fascinating read.
Scientists have observed something called the “online disinhibition effect,” in which the distance the Internet puts between us allows people to behave in ways they never would offline. A few things feed into it – the anonymity of the Internet, the pace of communication (asynchronous as opposed to real-time) and potential deficits of empathy. If you’ve ever read something online that you know the writer would never say to your face, you’ve seen it in action.
The Internet has long been sold as the great leveler, the technology that allows commoners and royalty to exist in the same digital world with the same rights and privileges. But that’s obviously not the case.
I’m not innocent in this culture of toxicity. Few days go by when I don’t tell a conservative pundit or hypocritical politician to eat my whole ass on Twitter. For nearly a decade, I ran a website called Portal of Evil that cast a baleful shadow on some of the Internet’s weirdest denizens. But that’s kind of the point of all this – none of us are free from sin. And if we’re going to reclaim the online landscape for good, we’re going to all need to make some changes.
So why not talk solutions? Here are some ideas from thinkers all over the world.
Right now, determining what content is “toxic” is a difficult, contextual task that usually only a human can perform. And by the time a victim of harassment has reported it, explained it, proven who they are and received a response, the damage may already be done. One potential solution is harnessing the ever-growing power of machine learning to screen this content before it’s posted to the public at large.
Facebook already has some of these initiatives in place. In 2017, they rolled out a “proactive detection” system designed to flag posts that come from people threatening suicide or self-harm and contact local first responders as well as provide resources to close contacts. It’s not a stretch to imagine the company doing the same for posts that are aggressive towards others.
Of course, this approach has some serious holes. We’ve already seen how algorithms can radicalize users, as in the case of YouTube videos funneling people from video game streamers to white nationalist rantfests in three clicks or fewer. Any software is biased by the people who create it, and making anti-harassment screening work algorithmically will require a ton of heavy lifting.
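Mechanically, the screening pipeline described above is a routing decision made before a post goes public. Here is a minimal sketch in which a simple keyword score stands in for a real learned model (production systems, such as Jigsaw’s Perspective API, use large trained classifiers); every term, weight and threshold below is invented for illustration:

```python
# Toy pre-publication screening hook. A trained toxicity classifier would
# replace toxicity_score() in a real system; the routing logic is the point.

TOXIC_TERMS = {"idiot": 0.6, "kill yourself": 0.95, "loser": 0.4}  # illustrative
REVIEW_THRESHOLD = 0.5  # illustrative cutoff, not a tuned value

def toxicity_score(text: str) -> float:
    """Return the highest score of any flagged phrase found in the text."""
    lowered = text.lower()
    return max((weight for phrase, weight in TOXIC_TERMS.items()
                if phrase in lowered), default=0.0)

def screen_post(text: str) -> str:
    """Decide whether a post publishes immediately or is held for human review."""
    return "hold_for_review" if toxicity_score(text) >= REVIEW_THRESHOLD else "publish"
```

The value of holding a post for review rather than deleting it outright is that the contextual judgment call stays with a human – the model only triages.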
Compared to the rest of the world, the United States has fairly loose standards for what constitutes harmful speech – and that’s probably for the best! But when a communication does cross the line – when it incites violence, impersonates someone to damage their livelihood or otherwise materially harms the victim – there aren’t many ways to stop it. Section 230 of the Communications Decency Act holds platforms harmless for the content distributed on them. But what if it didn’t?
Germany has already taken the lead here, passing a law – the Network Enforcement Act, or NetzDG – that can hold platforms like Facebook liable for fines of up to €50 million if they fail to remove defamatory or threatening content within 24 hours of notification.
These companies obviously have the ability to screen content – Facebook employs a small army of moderators constantly scrubbing posts for beheadings, nipples and the like, and deploys an ever-evolving arsenal of algorithmic content filters alongside them. It would be trivial for these companies to moderate toxic content more actively – and, as private platforms, they are entitled to filter and moderate anything they choose.
One of the main responses to people who are being harassed online is “just log off.” That’s increasingly difficult in an era when building a successful personal brand requires the use of social media. As a writer, I need an active Twitter and Facebook presence to drive traffic and attention to my work. Deleting my account on either of those services would be a massive setback, both personally and professionally, because the networks I’ve built there are essential to my life.
But what if you could take those networks with you? Data portability is a popular buzzword in many Internet circles right now. Essentially, it means that if you do choose to bail on a service, you can easily take all of your content with you and put it anywhere you want – from your own website to a competing network. Facebook and others have built a business model around selling access to your personal data, and there’s no reason you shouldn’t have the same access.
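In practice, portability means an export in a documented format that any other host can ingest. A minimal sketch of that normalization step, assuming a hypothetical JSON export schema (real services each define their own, like Facebook’s “Download Your Information” archive):

```python
import json

# Hypothetical export archive shape, invented for this example.
SAMPLE_EXPORT = json.dumps({
    "posts": [
        {"created": "2018-04-02T10:00:00Z", "text": "Hello, world"},
        {"created": "2018-04-03T12:30:00Z", "text": "Moving my data"},
    ]
})

def to_portable(export_json: str) -> list:
    """Normalize an export archive into a minimal schema a new host could ingest."""
    archive = json.loads(export_json)
    return [
        {"timestamp": post["created"], "body": post["text"]}
        for post in archive.get("posts", [])
    ]
```

The hard part isn’t the code – it’s getting platforms to agree on (and actually honor) a common schema, so the normalization step doesn’t have to be rewritten for every service.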
As the big networks become more and more compromised, it’s time to really think about what they’re getting from us and what we deserve to get back.
Disconnect The Internet Of Things
One of the most disquieting buzzwords around the modern Web is the “Internet of Things” – the idea that connectivity isn’t limited to your phone and your computer, but can be extended to all sorts of other things in your daily life, like your car and your home appliances. This has been sold to us as a great convenience, a way to manage tasks from a single device and a way to save money. But more and more, it seems like the Internet of Things is a lot more trouble than it’s worth.
According to Mozilla’s report, the number of Internet-connected devices is on pace to double by 2020, to a staggering 30 billion worldwide. And, as we’ve seen time and time again, security protocols on these IoT gadgets are often remarkably shoddy. In 2016, the Mirai malware hijacked hundreds of thousands of webcams, baby monitors and other poorly secured devices and used them as DDoS platforms to take down other websites and, in some cases, extort them for money.
We’re not saying that connectivity should be revoked – that ship has already sailed. Instead, a new focus on security, and on literacy about what these Internet of Things devices are actually capable of, could help mitigate some of the damage they can cause.
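That kind of security literacy starts with the basic weakness Mirai exploited: devices shipped with factory-default logins that nobody ever changes. A purely illustrative sketch of auditing a home-device inventory for default credentials (the device list and credential pairs are invented for the example, not drawn from any real product):

```python
# Factory-default username/password pairs of the kind Mirai-style botnets
# try when scanning for vulnerable devices. Illustrative examples only.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("root", "12345"), ("admin", "password")}

def flag_default_logins(devices: list) -> list:
    """Return the names of devices still using a factory-default login."""
    return [device["name"] for device in devices
            if (device["user"], device["password"]) in DEFAULT_CREDENTIALS]

# A hypothetical home inventory: one device left on its shipped login,
# one reconfigured with a unique passphrase.
inventory = [
    {"name": "baby-monitor", "user": "admin", "password": "admin"},
    {"name": "webcam", "user": "alice", "password": "long-unique-passphrase"},
]
```

Mirai didn’t need a sophisticated exploit – a short list of defaults like this was enough to conscript hundreds of thousands of devices, which is why changing the shipped password is the single highest-value habit to teach.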
Digital Literacy Education
Kids already do a lot of learning on computers, but so much of that is designed to train them to engage along the STEM axis – programming, mathematics and engineering. And that stuff is really important! But possibly even more important is understanding the Internet as a civic ecosystem filled with actors who might not have your best interest in mind.
The massive explosion of “fake news” – deliberately misleading stories published for clicks, often by Macedonian teenagers – was one of the biggest stories of the 2016 election. One of the most unsettling things about this stuff wasn’t the size of the whoppers told, but rather the speed with which they spread and the financial rewards they paid their peddlers. Multiple studies show that the more emotionally charged a story is, the faster we share it on social media, and the click-driven ad economy the Internet operates under rewards exactly that. It’s a self-sustaining system that threatens to take global democracy down with it.
There are already groups working to tame this fraudulent ecosystem – the Credibility Coalition is working to develop metrics to help people understand the trustworthiness of the media they consume, and some teachers are working on in-class curricula to prepare today’s youth for a future of conscious consumption.