Acknowledging the role that misleadingly edited or out-of-context images play in seeding the internet with misinformation, Google is introducing fact-checking labels for some Google image searches. The feature, available starting today, provides a few lines of context with select searches, drawing on services provided by third-party fact-checkers. The tool is powered by publishers themselves, who can now opt to tag fact-checked images using ClaimReview, a structured-data markup that lets publishers signal to search engines that a claim or image has been fact-checked.
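ClaimReview is an open schema.org vocabulary that fact-checkers embed in their pages, typically as a JSON-LD block. A minimal sketch of what such markup might look like for the shark example discussed below (the URL, dates, publisher name, and rating values here are illustrative, not taken from any real fact check):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example.com/fact-checks/houston-shark-photo",
  "claimReviewed": "Photo shows a shark swimming on a flooded Houston street",
  "itemReviewed": {
    "@type": "Claim",
    "author": { "@type": "Organization", "name": "Viral social media posts" },
    "datePublished": "2017-08-28"
  },
  "author": { "@type": "Organization", "name": "Example Fact Checker" },
  "datePublished": "2017-08-29",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "1",
    "bestRating": "5",
    "worstRating": "1",
    "alternateName": "False"
  }
}
</script>
```

Search engines that crawl a page carrying this markup can then attach the verdict (`alternateName`) and the fact-checker's name to the relevant result, which is roughly the mechanism behind the new image labels.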
“Photos and videos are an incredible way to help people understand what’s going on in the world,” Google Product Manager Harris Cohen wrote in a blog post announcing the feature. “But the power of visual media has its pitfalls—especially when there are questions surrounding the origin, authenticity or context of an image.”
Google is definitely right about that. Recirculated images tend to pop up in most major online conspiracy and viral misinformation cycles, and many credulous internet users are content to believe what they can see—even if what they're seeing was edited or otherwise removed from its context.
In its announcement, Google provided the example of “sharks swimming in street Houston,” a query that pulls up a perennial viral image offender. In the example search, a fact-check from PolitiFact appears below the original image of the same shark silhouette swimming in the ocean. The addition is just a few lines of text rather than anything flashier, like a colorful label that might indicate more clearly that the content has a special status.
According to Google’s announcement, the labels will pop up on “results that come from independent, authoritative sources” that meet its standards. The company notes that the inclusion of fact-checking tags won’t elevate those search results. While more fact-checking and additional context are always a good thing, the new tool only surfaces work that third-party fact-checkers are already doing; it doesn’t apply fact-checking labels to the low-quality websites spreading misinformation in the first place.
Google appears content to lean on third-parties for much of this kind of work rather than bringing it in-house, but the company did trial a more aggressive, hands-on misinformation strategy for COVID-19. In March, Google began scrubbing false claims from search results and pointing users to verified public health information in searches and on YouTube.
Despite running one of the most popular social websites in the world, Google has largely steered clear of the most fractious ongoing debates around content moderation and misinformation, exemplified by the standoff between President Trump and his allies and social networks like Twitter and Facebook.
Those companies have signaled opposite strategies toward moderation in recent weeks, with Twitter making increasingly hands-on decisions about what violates its rules while Facebook intervenes only in the most egregious cases. But even if Google mostly succeeds in staying above the fray, the company faces the same existential threat from political figures who seek to punish social media companies by revoking the legal protections that make their businesses possible.
Still, Google did dip its toes into that ongoing conflict last week, when the company confirmed it had removed the right-wing website ZeroHedge from its ad platform for violating its rules against hate and discriminatory content. The company also issued a warning to The Federalist, another far-right site, for similar violations.