We’re committed to supporting democratic processes around the world

For over two decades, Google and YouTube have committed to providing timely and authoritative information to help voters understand, navigate and participate in democratic processes.

Surfacing high-quality information to voters

During elections, people search for information on candidates, voter registration deadlines, polling locations, election results and more. Google Search and YouTube help connect voters to authoritative and reliable information. We work with non-partisan, third-party data partners and directly with governments to aggregate authoritative data from local election administrators.

On Search, this means displaying authoritative election information directly on the Search results page so that voters can quickly access it. For example, when people search for topics like “how to vote,” they will find information about voting requirements, voting abroad and more, all linking to authoritative sources such as government election entities. On YouTube, we surface high-quality election news and information from authoritative sources in search results and recommendations.


Partnering with election entities to provide best-in-class resources

We recognize the heightened cybersecurity risks associated with elections, and we have forged partnerships and developed tools to expand our offerings to safeguard against potential threats. Our Advanced Protection Program – our strongest set of cyber protections – is recommended for elected officials, candidates, campaign workers, journalists, election workers and other high-risk individuals. Our longstanding partnerships with Defending Digital Campaigns (DDC) in the US and the International Foundation for Electoral Systems (IFES) globally provide campaigns with the security tools they need to stay safe online, including tools to rapidly configure Google Workspace's security features. In 2023, through partners like DDC, we also distributed 100,000 free Titan Security Keys to high-risk users, and next year we’ve committed to providing an additional 100,000 of our new Titan Security Keys.

Our Google Threat Intelligence team helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. For example, on any given day, the team is tracking more than 270 targeted or government-backed attacker groups from more than 50 countries. We publish these findings regularly to keep the public and private sectors vigilant and well informed. The team also helps organizations build holistic election security programs and harden their defenses with comprehensive tools, ranging from proactive compromise assessment services to threat intelligence tracking of information operations.

The Google News Initiative also supports and partners with fact-checking organizations and news publishers globally. Our support helps provide essential resources, training, and collaborative platforms which help journalists in their work to create high-quality, impactful fact-checking content.


Safeguarding our platforms from abuse

To safeguard our platforms, we have long-standing policies across our surfaces that prohibit hate and harassment, manipulated media, incitement to violence, and demonstrably false claims that could undermine democratic processes. When we identify violations of these policies, we take appropriate action, up to and including removal.

We remove content that violates our Community Guidelines across Google and YouTube. For over a decade, we’ve leveraged machine learning classifiers and AI to identify and remove content that violates these policies. And now, with recent advances in our large language models (LLMs), we’re experimenting with building faster and more adaptable enforcement systems. Early results indicate that this will enable us to remain nimble and take action even more quickly when new threats emerge.
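To picture what classifier-based enforcement looks like in the abstract, here is a toy sketch. Everything in it is hypothetical: the policy categories, phrase lists, thresholds and routing are illustrative stand-ins, not Google's actual systems, which are not public.

```python
# Hypothetical sketch of a tiered enforcement pipeline: a cheap classifier
# scores content per policy category, and borderline cases are escalated
# for closer review. Categories and thresholds are illustrative only.

POLICY_TERMS = {
    "incitement": {"storm the counting center"},
    "false_process_claims": {"voting machines flip votes"},
}

def score(text: str) -> dict:
    """Toy classifier: 1.0 if any phrase in a category matches, else 0.0."""
    lowered = text.lower()
    return {
        category: float(any(phrase in lowered for phrase in phrases))
        for category, phrases in POLICY_TERMS.items()
    }

def route(text: str, remove_threshold: float = 0.9,
          review_threshold: float = 0.5) -> str:
    """Map the highest category score to an action; real systems also
    involve human reviewers and appeals."""
    top = max(score(text).values())
    if top >= remove_threshold:
        return "remove"
    if top >= review_threshold:
        return "human_review"
    return "allow"
```

A real pipeline would replace the keyword matcher with learned classifiers (or, as the text notes, LLM-based ones), but the score-then-route shape is the same.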

We’re also focused on taking a principled and responsible approach to introducing generative AI products – including AI Overviews and Gemini – where we’ve prioritized testing for safety risks ranging from cybersecurity vulnerabilities to misinformation and fairness. Out of an abundance of caution on a topic as important as elections, we’re restricting the types of election-related queries for which Gemini and AI Overviews will return responses.


Our approach to AI

Democracies rely on free and fair elections. Elections rely on trustworthy information. And nothing builds trust like transparency.

Kent Walker

Voters around the world turn to Google and YouTube to find authoritative, reliable information about elections. We take that responsibility seriously, and are committed to maintaining users' trust. As AI evolves rapidly, we are continuing this work with an increased focus on the challenges and opportunities that AI creates.

Providing more context and increasing content provenance within our AI products and across our platforms. 
To help people identify content that may seem realistic but is actually AI-generated, we’ve introduced several new tools, including SynthID, which embeds a digital watermark directly into AI-generated text, images, audio and video. On Gemini, we direct users to Google Search for the latest information on election-related queries, and we aim to provide more information about AI-generated content found across our platforms. For example, on Google Search, people can use features such as “About this result” and “About this image” to get more context on the information they see online, make more informed decisions about the sites they may want to visit, and assess the credibility and context of images.
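SynthID's actual watermarking scheme is not public, but the general idea behind statistical text watermarks can be sketched. In published token-watermark designs, a keyed function biases generation toward a "green" subset of tokens, and a detector measures how green a text is. The key, threshold and whitespace tokenizer below are illustrative assumptions, not SynthID's implementation.

```python
import hashlib

# Generic sketch of statistical text watermark detection (NOT SynthID's
# actual, proprietary algorithm): a keyed hash assigns each token to a
# "green" set; a watermarking sampler favors green tokens, so a high
# green fraction in a long text suggests machine generation.

KEY = b"demo-watermark-key"  # hypothetical shared key

def is_green(token: str, key: bytes = KEY) -> bool:
    digest = hashlib.sha256(key + token.encode("utf-8")).digest()
    return digest[0] % 2 == 0  # roughly half of all tokens are green

def green_fraction(text: str) -> float:
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(is_green(t) for t in tokens) / len(tokens)

def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    # Unwatermarked text hovers near 0.5; a watermarking sampler pushes
    # the fraction well above it. A sound threshold depends on length.
    return green_fraction(text) > threshold
```

The key design point is that detection needs only the key and a statistical test, not access to the model that generated the text.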

On YouTube, we require creators to disclose when they’ve created altered or synthetic content that’s realistic. This will include election content, and we may take action against creators who consistently do not disclose this information. We’ll also take the additional step of labeling altered or synthetic election content that doesn’t violate our policies to clearly indicate for viewers that some of the content was altered or generated digitally. This label will be displayed in both the video player and the video description, and will surface regardless of the creator, political viewpoints or language.
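The labeling policy above amounts to a simple decision rule, sketched here with hypothetical field names (YouTube's internal representation is of course not public): violating content is removed, while realistic altered or synthetic election content that stays within policy gets a disclosure label in both the player and the description.

```python
from dataclasses import dataclass

# Hypothetical sketch of the disclosure-labeling rule described above.
# Field names and the returned action strings are illustrative only.

@dataclass
class Video:
    is_altered_or_synthetic: bool
    is_realistic: bool
    is_election_related: bool
    violates_policy: bool

def enforcement(video: Video) -> list:
    if video.violates_policy:
        return ["remove"]
    if (video.is_altered_or_synthetic
            and video.is_realistic
            and video.is_election_related):
        # The label surfaces regardless of creator, viewpoint or language.
        return ["label_in_player", "label_in_description"]
    return []
```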

Protecting our platforms from abuse to help prevent bad actors from using AI to mislead voters on our products.
This work includes leveraging AI as a tool to enhance our abuse-fighting efforts, enforce our policies at scale, and adapt quickly to new or emerging threats. We were the first tech company to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. This includes ads that were created with the use of AI. This same commitment extends to YouTube, which informs viewers when they’re engaging with content made with generative AI.

Partnering with others across the tech industry, governments, and civil society.
Alongside other leading tech companies, we have pledged to help prevent deceptive AI-generated imagery, audio, or video content from interfering with this year’s global elections. For example, the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. We’re committed to addressing existing and emerging AI challenges, sharing research, intelligence and learnings, and collaboratively countering abuse.

