Voters around the world turn to Google and YouTube to find authoritative, reliable information about elections. We take that responsibility seriously and are committed to maintaining users' trust. As AI evolves rapidly, we are continuing this work with an increased focus on the challenges and opportunities that AI creates.
Providing more context and increasing content provenance within our AI products and across our platforms.
To help people identify content that may seem realistic but is actually AI-generated, we’ve introduced several new tools, including SynthID, which directly embeds a digital watermark into AI-generated text, images, audio and video. On Gemini, we direct the user to Google Search for the latest information on election-related queries, and we aim to provide more information about AI-generated content found across our platforms. For example, Google Search offers easy ways for people to get more context on the information they see online. People can use features such as “About this result” and “About this image” to make a more informed decision about the sites they may want to visit and to assess the credibility and context of images they see online.
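SynthID's actual watermarking scheme is not described here, but the general idea behind statistical text watermarking can be sketched: bias generation toward a pseudo-random "green" subset of the vocabulary keyed on preceding tokens, then detect the watermark by measuring how often tokens land in that subset. The sketch below is a simplified illustration under those assumptions; the function names, toy vocabulary, and 50% green fraction are hypothetical and are not SynthID's implementation.

```python
import hashlib
import random

def greenlist(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a PRNG from the previous token so the "green" subset is
    # reproducible at detection time without storing any extra state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Fraction of tokens that fall in the greenlist keyed on their
    # predecessor; watermarked text pushes this well above `fraction`,
    # while unwatermarked text stays near it.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in greenlist(prev, vocab))
    return hits / max(len(tokens) - 1, 1)
```

A detector would compare `green_fraction` against a statistical threshold: text generated while always sampling from the greenlist scores near 1.0, while ordinary text scores near the baseline fraction, making the watermark detectable without altering the visible text.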
On YouTube, we require creators to disclose when they’ve created altered or synthetic content that’s realistic. This includes election content, and we may take action against creators who consistently fail to disclose this information. We’ll also take the additional step of labeling altered or synthetic election content that doesn’t violate our policies, to clearly indicate to viewers that some of the content was altered or generated digitally. This label will be displayed in both the video player and the video description, and will surface regardless of the creator, their political viewpoint, or the language of the content.
Protecting our platforms from abuse to help prevent bad actors from using AI to mislead voters on our products.
This work includes leveraging AI as a tool to enhance our abuse-fighting efforts, enforce our policies at scale, and adapt quickly to new or emerging threats. We were the first tech company to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events, including ads created with the use of AI. This same commitment extends to YouTube, which informs viewers when they’re engaging with content made with generative AI.
Partnering with others across the tech industry, governments, and civil society.
Alongside other leading tech companies, we have pledged to help prevent deceptive AI-generated imagery, audio, or video content from interfering with this year’s global elections. For example, the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. We’re committed to addressing existing and emerging AI challenges, sharing research, intelligence and learnings, and collaboratively countering abuse.