How do you decide whether Abraham Lincoln really said, “Never trust the Internet”? Well, Lincoln died long before the Internet existed, so the quote must be bogus. Common sense. Simple, right? But teaching a computer such common sense turns out to be not so simple at all.
ChatGPT has become notorious for providing users with garbage information: fictitious court cases, fake quotations, made-up news articles. Did the program simply invent the stuff out of thin air; did it find the “fake news” somewhere on the web; or did it combine facts, conjecture, and conspiracy theories from multiple sources to create false information?
Perhaps we can use AI to filter out the most egregious fabrications. ChatGPT itself could vet its input – that is, the information it takes in from the Internet. And independent AI front ends could analyze ChatGPT’s output.
“Facts” could be checked by gauging the quality of the source, checking multiple sources for verification, and ensuring that the claims don’t violate basic rules. Consider, for example, the common click-bait claims that this or that celebrity has just died. What are the chances that the only site posting the news is a sponsored webpage that sells herbal hair-growth products? Wouldn’t the news flash be popping up on sites from AP to Reuters?
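The source-quality and multiple-source checks described above can be sketched in a few lines of code. Everything here is hypothetical for illustration: the site names, the reputation scores, and the thresholds are invented; a real filter would need curated reputation data and live search results.

```python
# Hypothetical sketch of the multi-source "fact filter" described above.
# Reputation scores (0.0 to 1.0) and site names are invented placeholders.
REPUTATION = {
    "apnews.com": 0.9,
    "reuters.com": 0.9,
    "herbal-hair-growth.example": 0.1,  # the sponsored click-bait page
}

def plausible(claim_sources, min_sources=2, min_reputation=0.5):
    """Flag a claim as dubious unless enough independent,
    reputable sources report it."""
    reputable = [site for site in claim_sources
                 if REPUTATION.get(site, 0.0) >= min_reputation]
    return len(reputable) >= min_sources

# The celebrity-death example: one low-reputation sponsored page fails,
# while corroboration from AP and Reuters passes.
print(plausible(["herbal-hair-growth.example"]))   # False
print(plausible(["apnews.com", "reuters.com"]))    # True
```

The thresholds are the interesting design choice: requiring two reputable sources trades some speed on breaking news for protection against a single fabricated page.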
Or what about that great quotation from an obscure 19th-century U.S. senator? Can a search engine find it, and find it on a reputable site?
Doesn’t that perpetual motion machine, so convincingly described on a “science” website, violate the First Law of Thermodynamics?
The claim that women are paid 75 percent of what men earn for doing the same jobs requires that millions of employers ignore their own self-interest. Why would they leave so much money on the table? Why not hire an entirely female workforce, pay them (say) 80 cents on the dollar, and wipe out the competition?
Does the alleged fact require that countless people are colluding? Consider, for example, the claim that inflation is caused by corporate greed. Really? Hundreds of thousands of firms simultaneously raise their prices and not one of them sees an opportunity to grab market share by underselling the competition?
Is the information logically consistent, or is it self-contradictory? Consider, for instance, the statement, “all property is theft.” Theft implies the existence of property – that is, something must first be owned before it can be stolen, and its ownership must be legitimate; otherwise, taking it is not theft. The statement therefore implies that property can be legitimately owned. But if that is true, then ownership isn’t necessarily theft. Yet the phrase explicitly states that all property is theft, which means that no ownership can be legitimate – and then there can be no theft. (On the other hand, the statement, “some property was obtained through theft,” while perfectly true and non-self-contradictory, is not nearly as pithy and is far less likely to get pitchfork-armed mobs into the streets.)
Competing ChatGPT front ends, all using their own proprietary sets of tests and rules, could offer their filtering services to users. A user would enter his question to the front end of his choice, which would then submit it to ChatGPT. The front end would then analyze ChatGPT’s response according to its ruleset. If it found discrepancies, it could optionally end the session or confront ChatGPT with those discrepancies in hopes that the ensuing dialog would result in a more reasonable answer. The conversation could be made transparent to the user to better enable him to gauge the reliability of the final response.
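The front-end loop described above can be sketched as follows. This is a minimal sketch under heavy assumptions: `ask_model` stands in for a real chatbot call, the rule set is a toy, and no actual ChatGPT API is used.

```python
# Minimal sketch of the vetting front end described above.
# `ask_model`, the rules, and the retry prompt are all placeholders.

def ask_model(prompt):
    # Stand-in for a call to the underlying chatbot.
    return f"Answer to: {prompt}"

def find_discrepancies(answer, rules):
    # Apply each named rule; collect the ones the answer violates.
    return [name for name, check in rules.items() if not check(answer)]

def vetted_query(question, rules, max_rounds=3):
    """Submit the question, check the answer against the rule set,
    and confront the model with any discrepancies. The transcript is
    returned so the user can gauge the reliability of the result."""
    transcript = []
    prompt = question
    for _ in range(max_rounds):
        answer = ask_model(prompt)
        transcript.append((prompt, answer))
        problems = find_discrepancies(answer, rules)
        if not problems:
            return answer, transcript
        prompt = (f"Your answer failed these checks: {problems}. "
                  f"Please reconsider the question: {question}")
    return None, transcript  # give up and end the session

# Toy rule: the answer must not endorse perpetual motion.
rules = {"no_perpetual_motion":
         lambda a: "perpetual motion" not in a.lower()}
answer, log = vetted_query("Is free energy real?", rules)
```

Returning the full transcript, not just the final answer, is what makes the conversation transparent to the user.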
In a world in which knowledge is power, people might be willing to pay for the service that best separates the wheat from the AI chaff.
Richard Fulmer worked as a mechanical engineer and a systems analyst in industry. He is now retired and does freelance writing. He has published some fifty articles and book reviews in free market magazines and blogs. With Robert L. Bradley Jr., Richard wrote the book Energy: The Master Resource.