
October 17, 2024

Wikipedia – why we should all use it more often


Wikipedia is leading the way forward for AI, which can become a trustworthy tool only when it leverages the transparency, evidentiary standards, and human intelligence that have made Wikipedia such a success.

Wikipedia was founded in 2001 and quickly became both a convenient resource and a lightning rod for fears about online information. Printed encyclopedias were curated volumes whose physical heft seemed to convey their intellectual authority. But suddenly here was a crowdsourced compendium of information that required only an IP address, not a PhD, to contribute to.

In some cases the fears were justified. In the site’s early days, members of Congress manipulated their own entries for political reasons. Foreign governments have used the site to spread disinformation, as when Russian government agents edited a Wikipedia entry in 2014 to falsely claim that Malaysia Airlines Flight MH17 had been shot down by Ukrainian soldiers. Some factual errors have circulated on the site for years.

The guardians of knowledge condemned the site in the strongest terms. The head of the American Library Association compared professors who unleashed students on Wikipedia to nutritionists who recommend a diet of Big Macs. In 2006, Britannica’s president warned that a lack of editorial control would turn Wikipedia into “a vast, mediocre mass of uneven, unreliable and often unreadable articles.”

Six years after that proclamation, Britannica ceased publication of its print edition after 244 years, and Wikipedia emerged as the clear winner. But the early impression of Wikipedia as an online zoo run by volunteer vandals lingers like a bad middle school nickname.

Wikipedia has grown up. As a study by our research group showed, fact-checkers at the country’s leading news outlets now consult the site. Chances are, your doctor does too. That’s because Wikipedia has built up safeguards over the years – from bots that flag edits made from anonymous IP addresses to administrators who are notified whenever controversial pages are changed.

The claim that “anyone can edit” Wikipedia is not quite true. Try manipulating the entries for “Partition of India,” “Donald Trump,” “Gamergate,” or “Coat of Arms of Lithuania” and you’ll immediately hit a lock icon indicating that the page is “protected.” Only contributors with a proven track record of reliable, accurate editing can touch such pages.

This does not mean the site is error-free. Far from it. Edit wars can spiral out of control. Despite the site’s “neutral point of view” policy, bias can creep in. Yet Wikipedia remains a testament to collective human intelligence – not because it is perfect, but because in so many cases it is “pretty good,” as Harvard professor Yochai Benkler wrote in 2006, a claim that would have been considered absurd just half a decade earlier.

Perhaps most important are two principles that have been ingrained in Wikipedia from the start: transparency and sourced information. In both areas, Wikipedia outperforms AI chatbots.

Since the site’s launch, every Wikipedia article has offered contributors a place to discuss and debate everything from titles to evidence. Go to the “Talk” page of the “Israel-Hamas War” article, for example, and you will see dozens of comments arguing over whether it should be called “Israel-Gaza War,” as well as arguments about the sources for the statistics cited in the entry. Every change made to a Wikipedia page is archived, along with the username of the person who made it, and is visible to everyone.

Wikipedia also requires that claims be tied to external sources; no original research is allowed. The entry on the bombings of Hiroshima and Nagasaki, for example, contains over 300 footnotes and a wealth of references to academic literature. For this reason, Wikipedia is a great asset for fact-checkers: it offers a ready-made compilation of sources for further research.

All of this looks very different from most AI chatbots. On Wikipedia, editors reveal where information comes from and weigh the strengths and weaknesses of different sources. Large language models like ChatGPT offer answers of unknown origin. You have no idea how the chatbot arrived at the answer it provides.

AI companies may hire real people to rate answers as good or bad. But each AI user remains isolated: what you see when you ask ChatGPT about the origins of, say, the modern Middle East is different from what someone else sees, in part because even slight differences in the wording of a prompt can produce chatbot responses that differ in tone or content.

In other words, AI blends together sources of information that should remain traceable, while atomizing the people who should be collectively engaged in correcting knowledge. A future in which claims are detached from their sources and human intelligence is replaced by an algorithm is frightening indeed.

Wikipedia offers us two lessons. First, in the midst of a flood of misinformation, we must abandon our reluctance to consult the site. Use it, and use it often. “Where does this information come from?” is one of the most important questions we can ask to avoid being deceived online. On Wikipedia the answer is almost always clear.

Second, the future of AI should be measured by whether it can pass the “Wikipedia test.” The Turing test measured a computer’s ability to imitate human conversation. The Wikipedia test measures whether AI exhibits the qualities that make the online encyclopedia trustworthy: transparency, evidentiary standards, and collective wisdom. While some newer AI products can cite sources, AI as a whole would fail the test if it took it today.

We are under no illusions that AI will disappear. To believe that would be to repeat the mistake of Britannica’s president in 2006. Wikipedia harbors no such illusions either. The site has developed a plugin that allows ChatGPT to search the encyclopedia for current information and provide links to the articles it draws on. It’s a small step toward AI one day passing the Wikipedia test.

Wikipedia’s secret was to embrace a new innovation, the internet, while preserving the old-fashioned role of people and community. As we move forward with artificial intelligence, we should remember that we already have pretty good human intelligence. Let’s not waste it.

Sam Wineburg is co-founder of the Digital Inquiry Group (DIG) and professor emeritus of education at Stanford University. His latest book is “Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online.” Nadav Ziv is a research associate at DIG and a writer.