Google has a problem: it can't discern fact from fiction. As organic Google search works today, for example, a page's worth is determined largely by its links: if a page receives many links from external domains, the thinking goes, then that page must be valuable. (Yes, Google analyzes a large number of variables, called "ranking factors," but linking is a big one.) In the dawn of Google, this idea of applying academic citation to the Web to crowdsource the most helpful results made its search results better than Yahoo's directory-style approach and AltaVista's paid-curation model.
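The link-based ranking idea described above is the heart of PageRank. As a rough sketch (the link graph and damping factor here are invented for illustration, not Google's actual data), each page repeatedly passes a share of its score to the pages it links to:

```python
# Minimal PageRank power iteration over a tiny, made-up link graph.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
    "d.com": ["c.com"],
}
pages = list(links)
damping = 0.85  # standard damping factor from the PageRank paper
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)  # split this page's vote
        for target in outlinks:
            new[target] += damping * share
    rank = new

# "c.com" receives the most inbound links, so it ends up ranked highest,
# regardless of whether its content is actually true.
best = max(rank, key=rank.get)
```

The last comment is the whole problem this article is about: the algorithm measures popularity among linkers, not accuracy.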
Clearly, using citations as a ranking signal paid off, as it's a largely reliable way to help users find what they want. But the problem it poses is obvious: popularity doesn't equal credibility. Just because lots of people believe something doesn't make it so, and just because many Internet users are sharing an article doesn't make it true.
Nothing illustrated the popularity-truth paradox like the fake-news epidemic that plagued the 2016 presidential campaign. (Fake news, as the New York Times defines it, is "invented from whole cloth, designed to draw in social shares and web traffic by flattering the prejudices of the intended audience.") Some fake-news stories were created by Macedonian teenagers and recent American college grads just to make a quick buck, but the propaganda's insidious creep into our social media feeds and search results nevertheless shook the public's (already deteriorating) trust in media, and may even have swung the election.
The damage wrought by fake news sparked action from Google and Facebook, two of the biggest middlemen between editorial content and audiences. Both pledged to ban fake-news sites from their paid ad networks, removing the main financial incentive behind fake news. But both companies have the capability, and the responsibility, to do more: removing propaganda from their organic indices and warning users when they're navigating near falsehoods. This isn't to suggest that Google should censor the Web, but that it is in a unique position to elevate facts and journalism above lies and sensationalism. Google can't and shouldn't scrub the Web of all mentions of a fake-news story or conspiracy theory. It's important for the fact-based community to know about Holocaust deniers and Sandy Hook truthers, partly to counter their arguments and evidence. As Edward Snowden put it, "The answer to bad speech is more speech." But Google can take measures to demote "bad speech" and promote truth.
What might those defensive measures look like? The SEO team at Huge got together to brainstorm ways to address this whole sordid business and to track what tech companies are already doing to bring more nuance to search results. Below are our favorite ideas, some of which Google is already committed to:
· Google has identified problematic domains pushing fake news, such as "NBC.com.co." It could use this list of deceptive domains, culled from the paid-ads portion of Google's index, to levy penalties in the organic index.
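Domains like "NBC.com.co" follow a recognizable pattern: a trusted domain with extra labels bolted on. A hypothetical detector (the trusted-domain list is invented for illustration) might strip trailing labels and check whether what remains impersonates a known publisher:

```python
# Hypothetical lookalike-domain check: flags names like "nbc.com.co"
# that embed a trusted news domain. The TRUSTED set is illustrative.
TRUSTED = {"nbc.com", "abcnews.com", "washingtonpost.com"}

def is_lookalike(domain: str) -> bool:
    parts = domain.lower().split(".")
    # Drop trailing labels one at a time; if the remaining prefix is a
    # trusted domain, the full name likely impersonates that publisher.
    for i in range(len(parts) - 1, 1, -1):
        candidate = ".".join(parts[:i])
        if candidate in TRUSTED and domain.lower() != candidate:
            return True
    return False
```

Real-world systems would need far more than suffix matching (typo-squats like "nbcnevvs.com" evade this check entirely), but it shows how cheap the first pass can be.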
· Studies show users are receptive to "inoculations" against fake news. Basically, warnings affixed to a social media post with questionable content can help users discern fact from fiction. Google could warn users about a destination page either with an interstitial page between the search engine results page (SERP) and the content (as it already does for phishing and malware sites), or by planting some type of red flag next to the SERP link for users to see before they click.
· If Google were to warn users, who would decide what's fact and what's "alternative fact"? Google could borrow a tactic announced by Facebook and collaborate with trustworthy, nonpartisan sources. We'd nominate the BBC, Snopes, ProPublica, and PolitiFact as initial partners. This is similar to a human-editor or "Search Engine Ethicist" idea I've kicked around before. If Google is worried about appearing partisan, the warning can adopt a neutral tone: "This page is controversial," or "Opinion, not fact," instead of "This page is fake news." This isn't out of the realm of possibility; Google did something similar when it partnered with the Mayo Clinic and Harvard Medical School to surface reliable medical information in response to users' symptom searches.
· Expand the "feedback" functionality used for regular results and the "fact check" tag used in Google News results. Currently, "feedback" is designed to recommend edits to Answer Card and Knowledge Graph results.
Meanwhile, "Fact Check" is a label, akin to "Blog" or "In-Depth," shown next to related headlines when previewing a Google News story.
Both of these could be improved to help Google users flag fake-news content. Alongside the "feedback" and "fact check" tags could be an "other sources for this topic" link to help users navigate different points of view.
· Collaborate with Facebook to share a master list of offending domains. Sure, the two compete in many areas, but quashing fake news is something they could unite behind for the public good.
If an article's veracity is going to be included in Google's algorithm as a ranking factor, what on-page signals might Google examine? SEO analysts look at similar signals when auditing a site, but how might a "trust authority" score work?
· Domain age: Fake-news sites will likely have newer domains than legitimate news sites.
· Ads-to-content ratio: Fake news makes money by using sensational headlines to drive traffic, then selling readers' eyeballs to advertisers. Given this heavy reliance on ad revenue, propaganda sites tend to have a higher ads-to-content ratio than legitimate publications do.
Illustration by Ana Vasquez
· Neighborhoods: Google has already mapped "neighborhoods" of the Internet based on, among other factors, interlinking. E-commerce sites, for example, reside in one neighborhood, news sites in another. Counterfeiters, spammers, pornographers, and other undesirables have neighborhoods, too. Google could expand this same model to publishers of fake news, grouping them into their own neighborhood and demoting them accordingly.
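To make the first two signals concrete, here is a toy "trust authority" score. Everything here is invented for illustration: the weights, the ten-year age cap, and the sample numbers are assumptions, not anything Google has published.

```python
from datetime import date

def trust_score(registered: date, ad_bytes: int, content_bytes: int,
                today: date = date(2017, 2, 1)) -> float:
    """Toy trust score from domain age and ads-to-content ratio.

    Weights and thresholds are illustrative assumptions only.
    """
    # Older domains earn more trust; the benefit caps out at ten years.
    age_years = (today - registered).days / 365.25
    age_score = min(age_years / 10.0, 1.0)

    # A page that is mostly ads earns less trust.
    ad_ratio = ad_bytes / (ad_bytes + content_bytes)
    content_score = 1.0 - ad_ratio

    # Equal weighting of the two signals, scaled to 0-100.
    return round(50 * age_score + 50 * content_score, 1)

# A long-registered publisher with a modest ad load...
legit = trust_score(date(1996, 11, 12), ad_bytes=20_000, content_bytes=80_000)
# ...versus a months-old domain whose pages are mostly ads.
fake = trust_score(date(2016, 8, 1), ad_bytes=70_000, content_bytes=30_000)
```

Even this crude combination separates the two profiles cleanly, which is the point of composing several weak signals rather than trusting any one of them.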