We all know that misinformation and disinformation are going to be big issues on tomorrow’s Internet.

Many of us have different thoughts about how to fix this! Lots of ink has been spilled describing the impact of regulations like the GDPR, for example, and in the U.S., California leads the way with its own privacy rules.

But there are other ways to approach these issues, too: check out this presentation by David Karger, and think about how some of these tools and processes might help us tackle the problem of verifying information on the Internet.

Karger starts with the premise that our web applications have failed to incorporate a representation of trust:

“If we don’t fix that, we are going to be completely drowned in the deluge of misinformation that is headed our way thanks to the evolving AI platforms,” he warns.

Karger gives the usual big-picture examples of disinformation – misleading items about things like vaccines, elections, wars and climate change – but he also mentions more personal harms, where people spread false rumors, post fake reviews of someone’s book or artwork, and generally inject chaos into the lives of their targets.

“A lot of people say that this is something that the platforms ought to fix,” he says. “We hear lots of calls for this from government and NGOs. The problem is that (the platforms) can’t (fix it): the scale of misinformation is too large. There’s far more misinformation on the platforms than any platform could possibly look at and fix. … Even if the platform somehow could address this problem, I don’t think that they should, because there’s really no consensus on what is misinformation. Do we want Republicans to decide what it is, or Democrats? Probably neither. And certainly, we don’t want the platforms to decide for themselves. Because none of these is trusted by everybody. And that’s where I want to focus: on trust.”

Too much of the current work on trust, Karger suggests, focuses on facts, arguments and truth – presumably, he means, as rhetorical concepts – rather than on other ways of regarding statements and information.

When he talks about MIT professors actually testing environmental data, it’s a good point – MIT is at the forefront of many kinds of research and analysis – so maybe, in a world of next-generation verification, MIT will be one of those central places that people can turn to and promote through positive assessments!

But there’s more to it than that…

Karger then introduces a tool called Trustnet, which he says can be added to the browser as an extension.

“(The Trustnet program) allows anyone to mark any page on the web as accurate or inaccurate,” he says. “And it also lets you build a trust network: you say who it is that you trust. And anybody who trusts a person who has made marks is going to see those marks on the pages that they visit, and also on any pages that link to the pages that they visit.”

He demonstrates a system where links to untrusted pages become faded, and where you select a menu option called ‘your assessment’ to record your own ratings.
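To make those mechanics concrete, here is a rough Python sketch of how marks might propagate – purely illustrative, not the actual Trustnet extension code, with all the names and data structures invented for the example:

```python
# Illustrative toy model of the mark-and-propagate behavior described above.
# Not the real Trustnet code; structures and names are invented.
from collections import defaultdict

assessments = defaultdict(dict)  # user -> {url: True (accurate) / False (inaccurate)}
trusts = defaultdict(set)        # viewer -> set of users they trust

def mark(user, url, accurate):
    """Anyone can mark any page on the web as accurate or inaccurate."""
    assessments[user][url] = accurate

def trust(viewer, source):
    """The viewer adds someone to their personal trust network."""
    trusts[viewer].add(source)

def visible_marks(viewer, url):
    """Marks the viewer sees on a page: assessments made by people they trust."""
    return {u: assessments[u][url] for u in trusts[viewer] if url in assessments[u]}

def should_fade(viewer, link_url):
    """Fade a link when a trusted user has marked its target inaccurate."""
    return any(value is False for value in visible_marks(viewer, link_url).values())

# Bob marks a page inaccurate; Alice trusts Bob, so links to that page fade for her.
mark("bob", "https://example.com/claim", accurate=False)
trust("alice", "bob")
print(should_fade("alice", "https://example.com/claim"))  # True
```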

He also offers an interesting argument: you don’t necessarily want to see false information, because it’s distracting, but you do want to preserve people’s autonomy, so they should still be able to click through to the untrusted information and view it! Look at this part of the presentation and think about your stance on this.

Thinking about this duality, he also mentions a project called ‘Reheadline’ that is being researched now, in which misleading headlines could be replaced.

Headlines, he points out, tend to be less accurate than the articles, and some shady operators know that people generally don’t read the articles, just the headlines.
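Here is a hypothetical sketch of how the Reheadline idea might work – again, an illustration only, not the project’s implementation, with a made-up list of suggested headlines and trusted contributors:

```python
# Hypothetical sketch of crowdsourced headline replacement, filtered through
# the reader's trust network. All data below is invented for the example.

suggested_headlines = {
    # url -> list of (contributor, proposed replacement headline)
    "https://example.com/story": [
        ("carol", "Study finds a modest effect; authors urge caution"),
    ],
}

trusted_contributors = {"carol", "bob"}  # who this particular reader trusts

def reheadline(url, original):
    """Swap in a replacement headline proposed by someone the reader trusts."""
    for contributor, proposal in suggested_headlines.get(url, []):
        if contributor in trusted_contributors:
            return proposal
    return original  # no trusted suggestion: keep the original headline

print(reheadline("https://example.com/story", "Miracle cure discovered!"))
```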

Crowdsourcing corrections to these problems may be part of what we see on tomorrow’s Internet. Karger talks about the need to “propagate assessment through a trust network.”

“It’s a great research problem,” he says.
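One flavor of that research problem: what happens when nobody you trust directly has assessed a page? A speculative sketch, assuming a simple hop limit on how far trust extends (the limit and the data are invented for illustration):

```python
# Speculative sketch of propagating an assessment through a trust network:
# if no one you trust directly has assessed a page, fall back to people
# trusted by the people you trust, up to a hop limit.
from collections import deque

trust_edges = {            # who trusts whom
    "alice": ["bob"],
    "bob":   ["carol"],
    "carol": [],
}
page_marks = {"carol": {"https://example.com/claim": False}}  # carol: inaccurate

def propagated_mark(viewer, url, max_hops=2):
    """Breadth-first search of the trust network for the nearest assessment."""
    queue, seen = deque([(viewer, 0)]), {viewer}
    while queue:
        user, hops = queue.popleft()
        if user != viewer and url in page_marks.get(user, {}):
            return page_marks[user][url]
        if hops < max_hops:
            for trusted in trust_edges.get(user, []):
                if trusted not in seen:
                    seen.add(trusted)
                    queue.append((trusted, hops + 1))
    return None  # no one within reach has assessed this page

print(propagated_mark("alice", "https://example.com/claim"))  # False (via bob -> carol)
```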

And yes, there is demand: Karger cites surveys showing that people want these kinds of tools (watch this part).

Trustnet, he explains, captures fact-checking work for general use.

It also helps deal with ‘filter bubbles’:

“There’s this worry that people sort of form cults and only talk to each other, and never find out the truth. Well, there are two responses to this: one is an ethical one, which is, people should have autonomy, right? If people choose to only talk to certain groups of people, it’s questionable whether we should prevent that. … There’s also a practical problem, which is that no matter how (many) facts and arguments we put to somebody, if they don’t trust us, they’re not going to believe us. … So we can use Trustnet in a positive way here. If we know which people somebody trusts, we can actually try to deliver information to them from trusted sources, where they have a chance of believing it and changing their minds. We can also use the Trustnet to identify failed trust and say, this is somebody you should not be trusting because they assess things wrong too often.”
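That last idea – spotting “failed trust” – could be operationalized in many ways. One hedged sketch, assuming you simply compare a source’s marks against your own and apply an arbitrary threshold (both assumptions are mine, not Karger’s):

```python
# Hedged sketch of flagging "failed trust": a trusted source whose assessments
# disagree too often with your own on pages you have both marked.

def disagreement_rate(mine, theirs):
    """Fraction of commonly assessed URLs on which the two assessments differ."""
    common = mine.keys() & theirs.keys()
    if not common:
        return 0.0
    return sum(mine[url] != theirs[url] for url in common) / len(common)

def flag_failed_trust(mine, sources, threshold=0.5):
    """Return trusted sources whose disagreement rate exceeds the threshold."""
    return [name for name, theirs in sources.items()
            if disagreement_rate(mine, theirs) > threshold]

my_marks = {"u1": True, "u2": False, "u3": True}
sources = {
    "bob":     {"u1": True,  "u2": False},                 # agrees with me
    "mallory": {"u1": False, "u2": True, "u3": False},     # disagrees on everything
}
print(flag_failed_trust(my_marks, sources))  # ['mallory']
```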

Karger reviews some of the traditional ways that people assess information, and related concepts like heuristics, that have kept us sane in the digital world. With AI, he predicts, all of this is doomed:

“AI is going to exacerbate all the problems that we have with misinformation,” Karger says. “We believe things that we see in videos, we trust the majority opinions on the web to decide what content is worth seeing, and we believe product reviews and things like that. With AI, all of this is doomed. Large language models are great at writing believable text… Deepfakes can create fake videos. Now, there will be unbounded numbers of AIs that will overwhelm any kind of majority voting.”

Think about it.

He summarizes his point this way:

“We’re currently asking for the platforms to clean up misinformation. They can’t, and they shouldn’t. We need to bring in trust. We need to empower individuals to actually say who they trust, and have that recorded into the tools that we use, so that that trust network and people’s assessments of information can be used to distinguish fact from fiction.”

So, who can you trust?

Maybe the question is: HOW can you trust?

Video: In AI we trust?