Security Blog
The latest news and insights from Google on security and safety on the Internet
The results are in: Hardcode, the secure coding contest for App Engine
May 10, 2013
Posted by Eduardo Vela Nava, Security Team
This January, Google and SyScan announced a secure coding competition open to students from all over the world. While Google’s Summer of Code and Code-in encourage students to contribute to open source projects, Hardcode was a call for students who wanted to showcase their skills both in software development and security. Given the scope of today’s online threats, we think it’s incredibly important to practice secure coding habits early on. Hundreds of students from 25 countries and across five continents signed up to receive updates and information about the competition, and over 100 teams participated.
During the preliminary online round, teams built applications on Google App Engine that were judged for both functionality and security. Five teams were then selected to participate in the final round at the SyScan 2013 security conference in Singapore, where they had to do the following: fix security bugs from the preliminary round, collaborate to develop an API standard to allow their applications to interoperate, implement the API, and finally, try to hack each other’s applications. To add to the challenge, many of the students balanced the competition with all of their school commitments.
We’re extremely impressed with the caliber of the contestants’ work. Everyone had a lot of fun, and we think these students have a bright future ahead of them. We are pleased to announce the final results of the 2013 Hardcode Competition:
1st Place:
Team 0xC0DEBA5E
Vienna University of Technology, Austria (SGD $20,000)
Daniel Marth (http://proggen.org/)
Lukas Pfeifhofer (https://www.devlabs.pro/)
Benedikt Wedenik
2nd Place:
Team Gridlock
Loyola School, Jamshedpur, India (SGD $15,000)
Aviral Dasgupta (http://www.aviraldg.com/)
3rd Place:
Team CeciliaSec
University of California, Santa Barbara, California, USA (SGD $10,000)
Nathan Crandall
Dane Pitkin
Justin Rushing
Runner-up:
Team AppDaptor
The Hong Kong Polytechnic University, Hong Kong (SGD $5,000)
Lau Chun Wai (http://www.cwlau.com/)
Runner-up:
Team DesiCoders
Birla Institute of Technology & Science, Pilani, India (SGD $5,000)
Yash Agarwal
Vishesh Singhal (http://visheshsinghal.blogspot.com)
Honorable Mention:
Team Saviors of Middle Earth
Walt Whitman High School, Maryland, USA
(withdrew due to school commitments)
Wes Kendrick
Marc Rosen (https://github.com/maz)
William Zhang
A big congratulations to this very talented group of students!
New warnings about potentially malicious binaries
April 17, 2013
Posted by Moheeb Abu Rajab, Security Team
If you use Chrome, you shouldn’t have to work hard to know what Chrome extensions you have installed and enabled. That’s why last December we announced that Chrome (version 25 and beyond) would disable silent extension installation by default. In addition to protecting users from unauthorized installations, these measures resulted in noticeable performance improvements in Chrome and improved user experience.
To further safeguard you while browsing the web, we recently added new measures that identify software violating Chrome’s standard mechanisms for deploying extensions and flag such binaries as malware. Within a week, you will start seeing Safe Browsing malicious download warnings when attempting to download malware identified by these criteria.
This kind of malware commonly tries to get around silent installation blockers by misusing Chrome’s central management settings, which are intended to be used to configure instances of Chrome internally within an organization. In doing so, the installed extensions are enabled by default and cannot be uninstalled or disabled by the user from within Chrome. Other variants include binaries that directly manipulate Chrome preferences in order to silently install and enable extensions bundled with these binaries. Our recent measures expand our capabilities to detect and block these types of malware.
Application developers should adhere to Chrome’s standard mechanisms for extension installation, which include the Chrome Web Store, inline installation, and the other deployment options described in the extensions development documentation.
Google Public DNS Now Supports DNSSEC Validation
March 19, 2013
Posted by Yunhong Gu, Team Lead, Google Public DNS
We launched Google Public DNS three years ago to help make the Internet faster and more secure. Today, we are taking a major step towards this security goal: we now fully support DNSSEC (Domain Name System Security Extensions) validation on our Google Public DNS resolvers. Previously, we accepted and forwarded DNSSEC-formatted messages but did not perform validation. With this new security feature, we can better protect people from DNS-based attacks and make DNS more secure overall by identifying and rejecting invalid responses from DNSSEC-protected domains.
DNS translates human-readable domain names into IP addresses so that they are accessible by computers. Despite its critical role in Internet applications, the lack of security protection for DNS up to this point has meant that a significant portion of today’s Internet attacks target the name resolution process, attempting to return the IP addresses of malicious websites to DNS queries. Probably the most common DNS attack is DNS cache poisoning, which tries to “pollute” the cache of DNS resolvers (such as Google Public DNS or those provided by most ISPs) by injecting spoofed responses to upstream DNS queries.
To counter cache poisoning attacks, resolvers must be able to verify the authenticity of the response. DNSSEC solves the problem by authenticating DNS responses using digital signatures and public key cryptography. Each DNS zone maintains a set of private/public key pairs, and for each DNS record, a unique digital signature is generated using the zone’s private key. The corresponding public key is then authenticated via a chain of trust by the keys of upper-level zones. DNSSEC effectively prevents response tampering because, in practice, signatures are almost impossible to forge without access to the private keys, and resolvers reject responses without correct signatures.
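To see what DNSSEC-aware resolution looks like from the client side, here is a minimal sketch, assuming Python 3 and the third-party dnspython package (neither is part of this post): it sends an A-record query for a placeholder domain to Google Public DNS with the DNSSEC-OK bit set and reports whether the resolver marked the answer as authenticated.

```python
# Minimal sketch: query Google Public DNS (8.8.8.8) with the DNSSEC-OK (DO) bit
# set and check the AD (Authenticated Data) flag in the response. Requires the
# third-party dnspython package (pip install dnspython); the domain is a placeholder.
import dns.flags
import dns.message
import dns.query
import dns.rcode
import dns.rdatatype

def check_dnssec(domain: str, resolver: str = "8.8.8.8") -> None:
    # want_dnssec=True sets the DO bit and asks for RRSIG records.
    query = dns.message.make_query(domain, dns.rdatatype.A, want_dnssec=True)
    # Large DNSSEC responses may be truncated over UDP; dns.query.tcp is the fallback.
    response = dns.query.udp(query, resolver, timeout=5)
    validated = bool(response.flags & dns.flags.AD)
    print(f"{domain}: rcode={dns.rcode.to_text(response.rcode())}, validated={validated}")

if __name__ == "__main__":
    check_dnssec("example.com")  # replace with any domain you want to check
```

A validating resolver returns SERVFAIL for a DNSSEC-protected domain whose signatures do not check out, so a client never sees the spoofed answer.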
DNSSEC is a critical step towards securing the Internet. By validating data origin and data integrity, DNSSEC complements other Internet security mechanisms, such as SSL. It is worth noting that although we have used web access in the examples above, DNS infrastructure is widely used in many other Internet applications, including email.
Currently Google Public DNS is serving more than 130 billion DNS queries on average (peaking at 150 billion) from more than 70 million unique IP addresses each day. However, only 7% of queries from the client side are DNSSEC-enabled (about 3% requesting validation and 4% requesting DNSSEC data but no validation) and about 1% of DNS responses from the name server side are signed. Overall, DNSSEC is still at an early stage and we hope that our support will help expedite its deployment.
Effective deployment of DNSSEC requires action from both DNS resolvers and authoritative name servers. Resolvers, especially those of ISPs and other public resolvers, need to start validating DNS responses. Meanwhile, domain owners have to sign their domains. Today, about 1/3 of top-level domains have been signed, but most second-level domains remain unsigned. We encourage all involved parties to push DNSSEC deployment and further protect Internet users from DNS-based network intrusions.
For more information about Google Public DNS, please visit https://developers.google.com/speed/public-dns. In particular, more details about our DNSSEC support can be found in the FAQ and Security pages. Additionally, general specifications of the DNSSEC standard can be found in RFCs 4033, 4034, 4035, and 5155.
Update, March 21: We've been listening to your questions and would like to clarify that validation is not yet enabled for non-DNSSEC aware clients. As a first step, we launched DNSSEC validation as an opt-in feature and will only perform validation if clients explicitly request it. We're going to work to minimize the impact of any DNSSEC misconfigurations that could cause connection breakages before we enable validation by default for all clients that have not explicitly opted out.
Update, May 6: We've enabled DNSSEC validation by default. That means all clients are now protected and responses to all queries will be validated unless clients explicitly opt out.
Videos and articles for hacked site recovery
March 12, 2013
Posted by Maile Ohye, Developer Programs Tech Lead
We created a new Help for hacked sites informational series to help all levels of site owners understand how they can recover their hacked site. The series includes over a dozen articles and 80+ minutes of informational videos—from the basics of what it means for a site to be hacked to diagnosing specific malware infection types.
“Help for hacked sites” overview: How and why a site is hacked
Over 25% of sites that are hacked may remain compromised
In StopBadware and Commtouch’s 2012 survey of more than 600 webmasters of hacked sites, 26% of site owners reported that their site was still compromised while 2% completely abandoned their site. We hope that by adding our educational resources to the great tools and information already available from the security community, more hacked sites can restore their unique content and make it safely available to users. The fact remains, however, that the recovery process requires fairly advanced system administrator skills and knowledge of source code. Without help from others—perhaps their hoster or a trusted expert—many site owners may still struggle to recover.
StopBadware and Commtouch’s 2012 survey results for “What action did you take/are you taking to fix the compromised site?”
Hackers’ tactics are difficult for site owners to detect
Cybercriminals employ various tricks to avoid detection, making recovery difficult for the average site owner. One technique is adding “hidden text” to the site’s pages: users don’t see the damage, but search engines still process the content. This is often the case for sites hacked with spam, where hackers abuse a reputable site to help their own sites (commonly pharmaceutical or poker sites) rank in search results.
Both pages are the same, but the page on the right highlights the “hidden text”—in this case, white text on a white background. As explained in Step 5: Assess the damage (hacked with spam), hackers employ these types of tricks to avoid human detection.
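As a rough illustration of how a site owner might hunt for this kind of hidden text, here is a small sketch using only the Python standard library; the style patterns and spam keywords are examples chosen for illustration, and any real cleanup should follow the steps in the series.

```python
# Rough heuristic sketch (standard library only): fetch a page and flag inline
# styles and keywords often associated with hidden spam injections. Matches are
# leads for manual review, not proof of a hack.
import re
import urllib.request

HIDING_PATTERNS = [
    r"display\s*:\s*none",
    r"visibility\s*:\s*hidden",
    r"text-indent\s*:\s*-\d{3,}",        # text pushed far off-screen
    r"color\s*:\s*#f{3}(?:f{3})?\b",     # white text, suspicious on a white page
]
SPAM_KEYWORDS = ["viagra", "cialis", "casino", "poker", "payday loan"]

def scan_page(url: str) -> None:
    request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(request, timeout=10).read().decode("utf-8", "replace")
    for pattern in HIDING_PATTERNS:
        for match in re.finditer(pattern, html, re.IGNORECASE):
            print(f"possible hidden styling: {match.group(0)!r}")
    for keyword in SPAM_KEYWORDS:
        if keyword in html.lower():
            print(f"suspicious keyword: {keyword!r}")

if __name__ == "__main__":
    scan_page("https://www.example.com/")  # replace with a page from your own site
```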
In cases of sites hacked to distribute malware, Google provides verified site owners with a sample of infected URLs, often with their malware infection type, such as Server configuration (using the server’s configuration file to redirect users to malicious content). In Help for hacked sites, Lucas Ballard, a software engineer on our Safe Browsing team, explains how to locate and clean this malware infection type.
Lucas Ballard covers the malware infection type Server configuration.
Reminder to keep your site secure
I realize that reminding you to keep your site secure is a bit like my mother yelling “don’t forget to bring a coat!” as I leave her sunny California residence. Like my mother, I can’t help myself. Please remember to:
Be vigilant about keeping software updated
Understand the security practices of all applications, plugins, third-party software, etc., before you install them on your server
Remove unnecessary or unused software
Enforce creation of strong passwords
Keep all devices used to log in to your web server secure (updated operating system and browser)
Make regular, automated backups
An update on our war against account hijackers
February 19, 2013
Posted by Mike Hearn, Google Security Engineer
Have you ever gotten a plea to wire money to a friend stranded at an international airport? An oddly written message from someone you haven’t heard from in ages? Compared to five years ago, more scams, illegal, fraudulent or spammy messages today come from someone you know. Although spam filters have become very powerful—in Gmail, less than 1 percent of spam emails make it into an inbox—these unwanted messages are much more likely to make it through if they come from someone you’ve been in contact with before. As a result, in 2010 spammers started changing their tactics—and we saw a large increase in fraudulent mail sent from Google Accounts. In turn, our security team has developed new ways to keep you safe, and dramatically reduced the amount of these messages.
Spammers’ new trick—hijacking accounts
To improve their chances of beating a spam filter by sending you spam from your contact’s account, the spammer first has to break into that account. This means many spammers are turning into account thieves. Every day, cyber criminals break into websites to steal databases of usernames and passwords—the online “keys” to accounts. They put the databases up for sale on the black market, or use them for their own nefarious purposes. Because many people re-use the same password across different accounts, stolen passwords from one site are often valid on others.
With stolen passwords in hand, attackers attempt to break into accounts across the web and across many different services. We’ve seen a single attacker using stolen passwords to attempt to break into a million different Google accounts every single day, for weeks at a time. A different gang attempted sign-ins at a rate of more than 100 accounts per second. Other services are often more vulnerable to this type of attack, but when someone tries to log into your Google Account, our security system does more than just check that a password is correct.
Legitimate accounts blocked for sending spam: our security systems have dramatically reduced the number of Google Accounts used to send spam over the past few years.
How Google Security helps protect your account
Every time you sign in to Google, whether via your web browser once a month or an email program that checks for new mail every five minutes, our system performs a complex risk analysis to determine how likely it is that the sign-in really comes from you. In fact, there are more than 120 variables that can factor into how a decision is made.
If a sign-in is deemed suspicious or risky for some reason—maybe it’s coming from a country oceans away from your last sign-in—we ask some simple questions about your account. For example, we may ask for the phone number associated with your account, or for the answer to your security question. These questions are normally hard for a hijacker to solve, but are easy for the real owner. Using security measures like these, we've dramatically reduced the number of compromised accounts by 99.7 percent since the peak of these hijacking attempts in 2011.
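As a purely illustrative toy, not a description of Google’s actual system, the sketch below shows the general shape of risk-based sign-in decisions: score a handful of made-up signals and require an extra verification step when the score crosses a threshold.

```python
# Illustrative toy only: the signals, weights, and threshold are invented for
# this sketch and are not Google's real risk analysis.
from dataclasses import dataclass

@dataclass
class SignInAttempt:
    country_matches_history: bool
    known_device: bool
    recent_failed_attempts: int
    ip_on_blocklist: bool

def risk_score(attempt: SignInAttempt) -> float:
    score = 0.0
    if not attempt.country_matches_history:
        score += 0.4
    if not attempt.known_device:
        score += 0.3
    score += min(attempt.recent_failed_attempts * 0.05, 0.2)
    if attempt.ip_on_blocklist:
        score += 0.5
    return score

def handle_sign_in(attempt: SignInAttempt) -> str:
    # Above a threshold, ask an extra question (e.g. the recovery phone number)
    # that is easy for the real owner but hard for a hijacker.
    return "challenge" if risk_score(attempt) >= 0.5 else "allow"

if __name__ == "__main__":
    print(handle_sign_in(SignInAttempt(False, False, 0, False)))  # -> "challenge"
```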
Help protect your account
While we do our best to keep spammers at bay, you can help protect your account by making sure you’re using a strong, unique password for your Google Account, upgrading your account to use 2-step verification, and updating the recovery options on your account, such as your secondary email address and your phone number. Following these three steps can help prevent your account from being hijacked—this means less spam for your friends and contacts, and improved security and privacy for you.
(Cross-posted from the Official Google Blog)
Calling student coders: Hardcode, the secure coding contest for App Engine
January 10, 2013
Posted by Parisa Tabriz, Security Team
Protecting user security and privacy is a huge responsibility, and software security is a big part of it. Learning about new ways to “break” applications is important, but learning preventative skills to use when “building” software, like secure design and coding practices, is just as critical. To help promote secure development habits, Google is once again partnering with the organizers of SyScan to host Hardcode, a secure coding contest on the Google App Engine platform.
Participation will be open to teams of up to 5 full-time students (undergraduate or high school, additional restrictions may apply). Contestants will be asked to develop open source applications that meet a set of functional and security requirements. The contest will consist of two rounds: a qualifying round over the Internet, with broad participation from any team of students, and a final round, to be held during SyScan on April 23-25 in Singapore.
During the qualifying round, teams will be tasked with building an application and describing its security design. A panel of judges will assess all submitted applications and select the top five to compete in the final round.
At SyScan, the five finalist teams will be asked to develop a set of additional features and fix any security flaws identified in their qualifying submission. After two more days of hacking, a panel of judges will rank the projects and select a grand prize winning team that will receive $20,000 Singapore dollars. The 2nd-5th place finalist teams will receive $15,000, $10,000, $5,000, and $5,000 Singapore dollars, respectively.
Hardcode begins on Friday, January 18th. Full contest details will be announced via our mailing list, so subscribe there for more information!
Enhancing digital certificate security
January 3, 2013
Posted by Adam Langley, Software Engineer
Late on December 24, Chrome detected and blocked an unauthorized digital certificate for the "*.google.com" domain. We investigated immediately and found the certificate was issued by an intermediate certificate authority (CA) linking back to TURKTRUST, a Turkish certificate authority. Intermediate CA certificates carry the full authority of the CA, so anyone who has one can use it to create a certificate for any website they wish to impersonate.
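One way a client can notice an unexpected certificate for a domain it knows well is to compare the server’s public key against a pinned set. The sketch below is a minimal illustration of that general idea, assuming Python 3 and the third-party cryptography package; the pinned hash is a placeholder, not a real Google pin, and real pins must be obtained out of band.

```python
# Minimal public-key pinning sketch, assuming the third-party `cryptography`
# package (pip install cryptography). PINNED_SPKI_HASHES is a placeholder set.
import base64
import hashlib
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

PINNED_SPKI_HASHES = {
    "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",  # placeholder value
}

def spki_sha256(host: str, port: int = 443) -> str:
    # Fetch the server's leaf certificate and hash its SubjectPublicKeyInfo.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

def check_pin(host: str) -> bool:
    return spki_sha256(host) in PINNED_SPKI_HASHES

if __name__ == "__main__":
    print(check_pin("www.google.com"))  # False until a real pin replaces the placeholder
```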
In response, we updated Chrome’s certificate revocation metadata on December 25 to block that intermediate CA, and then alerted TURKTRUST and other browser vendors. TURKTRUST told us that based on our information, they discovered that, in August 2011, they had mistakenly issued two intermediate CA certificates to organizations that should have instead received regular SSL certificates. On December 26, we pushed another Chrome metadata update to block the second mistaken CA certificate and informed the other browser vendors.
Our actions addressed the immediate problem for our users. Given the severity of the situation, we will update Chrome again in January to no longer indicate Extended Validation status for certificates issued by TURKTRUST, though connections to TURKTRUST-validated HTTPS servers may continue to be allowed.
Since our priority is the security and privacy of our users, we may also decide to take additional action after further discussion and careful consideration.
Helping webmasters with hacked sites
December 12, 2012
Posted by Oliver Barrett, Search Quality Team
(Cross-posted from the Webmaster Central Blog)
Having your website hacked can be a frustrating experience and we want to do everything we can to help webmasters get their sites cleaned up and prevent compromises from happening again. With this post we wanted to outline two common types of attacks as well as provide clean-up steps and additional resources that webmasters may find helpful.
To best serve our users, it’s important that the pages we link to in our search results are safe to visit. Unfortunately, malicious third parties may take advantage of legitimate webmasters by hacking their sites to manipulate search engine results or distribute malicious content and spam. We alert users and webmasters alike by labeling sites we’ve detected as hacked with a “This site may be compromised” warning in our search results:
We want to give webmasters the necessary information to help them clean up their sites as quickly as possible. If you’ve verified your site in Webmaster Tools, we’ll also send you a message when we’ve identified that your site has been hacked and, when possible, give you example URLs.
Occasionally, your site may become compromised to facilitate the distribution of malware. When we recognize that, we’ll identify the site in our search results with a label of “This site may harm your computer” and browsers such as Chrome may display a warning when users attempt to visit. In some cases, we may share more specific information in the Malware section of Webmaster Tools. We also have specific tips for preventing and removing malware from your site in our Help Center.
Two common ways malicious third parties may compromise your site are the following:
Injected Content
Hackers may attempt to influence search engines by injecting links leading to sites they own. These links are often hidden to make it difficult for a webmaster to detect this has occurred. The site may also be compromised in such a way that the content is only displayed when the site is visited by search engine crawlers.
Example of injected pharmaceutical content
If we’re able to detect this, we’ll send a message to your Webmaster Tools account with useful details. If you suspect your site has been compromised in this way, you can check the content your site returns to Google by using the Fetch as Google tool. A few good places to look for the source of such a compromise are .php files, template files, and CMS plugins.
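A quick complementary check is to compare what your site serves to a regular browser with what it serves to a crawler. The sketch below, using only the Python standard library, fetches the same URL with two different User-Agent headers and reports whether the responses differ; a difference is only a lead to investigate, since many sites legitimately vary content by user agent.

```python
# Minimal sketch (standard library only): fetch the same URL as a browser and as
# Googlebot and report a size difference. Cloaked spam often appears only in the
# crawler's copy; this is a rough signal, not a substitute for Fetch as Google.
import urllib.request

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def fetch(url: str, user_agent: str) -> bytes:
    request = urllib.request.Request(url, headers={"User-Agent": user_agent})
    return urllib.request.urlopen(request, timeout=10).read()

def compare(url: str) -> None:
    browser_body = fetch(url, BROWSER_UA)
    crawler_body = fetch(url, GOOGLEBOT_UA)
    if browser_body != crawler_body:
        print(f"responses differ: browser={len(browser_body)} bytes, "
              f"crawler={len(crawler_body)} bytes -- inspect the crawler copy")
    else:
        print("identical responses for both user agents")

if __name__ == "__main__":
    compare("https://www.example.com/")  # replace with a page from your own site
```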
Redirecting Users
Hackers might also try to redirect users to spammy or malicious sites. They may do it to all users or target specific users, such as those coming from search engines or those on mobile devices. If you’re able to access your site when visiting it directly but you experience unexpected redirects when coming from a search engine, it’s very likely your site has been compromised in this manner.
One of the ways hackers accomplish this is by modifying server configuration files (such as Apache’s .htaccess) to serve different content to different users, so it’s a good idea to check your server configuration files for any such modifications.
This malicious behavior can also be accomplished by injecting JavaScript into the source code of your site. The JavaScript may be designed to hide its purpose so it may help to look for terms like “eval”, “decode”, and “escape”.
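A simple way to start that search is to scan a local copy of your site’s files for those terms and similar obfuscation helpers. Below is a small sketch using only the Python standard library; the file extensions, patterns, and path are examples, and any match is only a lead to review by hand.

```python
# Minimal sketch (standard library only): walk a local copy of the site's source
# and flag lines containing strings often seen in injected, obfuscated JavaScript
# or PHP. Matches are leads to review manually, not proof of compromise.
import os
import re

SUSPICIOUS = re.compile(
    r"\beval\s*\(|\bunescape\s*\(|\bdecodeURIComponent\s*\(|base64_decode\s*\(",
    re.IGNORECASE,
)
EXTENSIONS = (".js", ".php", ".html", ".htm", ".htaccess")

def scan_tree(root: str) -> None:
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            try:
                text = open(path, encoding="utf-8", errors="replace").read()
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                if SUSPICIOUS.search(line):
                    print(f"{path}:{lineno}: {line.strip()[:120]}")

if __name__ == "__main__":
    scan_tree("./public_html")  # replace with the path to your site's files
```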
Cleanup and Prevention
If your site has been compromised, it’s important not only to clean up the changes made to your site but also to address the vulnerability that allowed the compromise to occur. We have instructions for cleaning your site and preventing compromises, while your hosting provider and our Malware and Hacked sites forum are great resources if you need more specific advice.
Once you’ve cleaned up your site, you should submit a reconsideration request that, if successful, will remove the warning label in our search results.
As always, if you have any questions or feedback, please tell us in the Webmaster Help Forum.
Adding OAuth 2.0 support for IMAP/SMTP and XMPP to enhance auth security
September 17, 2012
Posted by Ryan Troll, Application Security Team
(Cross-posted from the Google Developers Blog)
Our users and developers take password security seriously, and so do we. Passwords alone have weaknesses we all know about, so we’re working over the long term to support additional mechanisms to help protect user information. Over a year ago, we announced a recommendation that OAuth 2.0 become the standard authentication mechanism for our APIs so you can make the safest apps using Google platforms. You can use OAuth 2.0 to build clients and websites that securely access account data and work with our advanced security features, such as 2-step verification. But our commitment to OAuth 2.0 is not limited to web APIs. Today we’re going a step further by adding OAuth 2.0 support for IMAP/SMTP and XMPP. Developers using these protocols can now move to OAuth 2.0, and users will experience the benefits of more secure OAuth 2.0 clients.
When clients use OAuth 2.0, they never ask users for passwords. Users have tighter control over what data clients have access to, and clients never see a user's password, making it much harder for a password to be stolen. If a user has their laptop stolen, or has any reason to believe that a client has been compromised, they can revoke the client’s access without impacting anything else that has access to their data.
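For developers, here is a minimal sketch of what an OAuth 2.0 IMAP login looks like with Python’s standard imaplib and the SASL XOAUTH2 mechanism. It assumes you have already obtained a valid access token through an OAuth 2.0 flow (token acquisition is outside the scope of this sketch), and the address and token shown are placeholders.

```python
# Minimal sketch: authenticate to Gmail IMAP with SASL XOAUTH2 using only the
# standard library. Assumes a valid OAuth 2.0 access token is already available.
import imaplib

def connect_imap_oauth2(user: str, access_token: str) -> imaplib.IMAP4_SSL:
    # XOAUTH2 initial client response: "user=<email>\x01auth=Bearer <token>\x01\x01".
    # imaplib base64-encodes whatever the authobject returns.
    auth_string = f"user={user}\x01auth=Bearer {access_token}\x01\x01"
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.authenticate("XOAUTH2", lambda _challenge: auth_string.encode())
    return imap

if __name__ == "__main__":
    # Placeholders: substitute a real address and a freshly issued access token.
    conn = connect_imap_oauth2("user@example.com", "ya29.placeholder-token")
    print(conn.select("INBOX"))
    conn.logout()
```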
We are also announcing the deprecation of older authentication mechanisms. If you’re using these, you should move to the new OAuth 2.0 APIs.
We are deprecating XOAUTH for IMAP/SMTP, as it uses OAuth 1.0a, which was previously deprecated. Gmail will continue to support XOAUTH until OAuth 1.0a is shut down, at which time support will be discontinued.
We are also deprecating X-GOOGLE-TOKEN and SASL PLAIN for XMPP, as they either accept passwords or rely on the previously deprecated ClientLogin. These mechanisms will continue to be supported until ClientLogin is shut down, at which time support for both will be discontinued.
Our team has been working hard since we announced our support of OAuth in 2008 to make it easy for you to create applications that use more secure mechanisms than passwords to protect user information. Check out the Google Developers Blog for examples, including the OAuth 2.0 Playground and Service Accounts, or see Using OAuth 2.0 to Access Google APIs.
Content hosting for the modern web
August 29, 2012
Posted by Michal Zalewski, Security Team
Our applications host a variety of web content on behalf of our users, and over the years we learned that even something as simple as serving a profile image can be surprisingly fraught with pitfalls. Today, we wanted to share some of our findings about content hosting, along with the approaches we developed to mitigate the risks.
Historically, all browsers and browser plugins were designed simply to excel at displaying several common types of web content, and to be tolerant of any mistakes made by website owners. In the days of static HTML and simple web applications, giving the owner of the domain authoritative control over how the content is displayed wasn’t of any importance.
It wasn’t until the mid-2000s that we started to notice a problem: a clever attacker could manipulate the browser into interpreting seemingly harmless images or text documents as HTML, Java, or Flash—thus gaining the ability to execute malicious scripts in the security context of the application displaying these documents (essentially, a cross-site scripting flaw). For increasingly sensitive web applications, this was very bad news.
During the past few years, modern browsers began to improve. For example, browser vendors limited the amount of second-guessing performed on text documents, certain types of images, and unknown MIME types. However, there are many standards-enshrined design decisions—such as ignoring MIME information on any content loaded through <object>, <embed>, or <applet>—that are much more difficult to fix; these practices may lead to vulnerabilities similar to the GIFAR bug.
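As a concrete example of the kind of defense this discussion implies, the sketch below serves user-supplied bytes with an explicit Content-Type, X-Content-Type-Options: nosniff, and a Content-Disposition: attachment header so the browser downloads rather than renders the file. It is a minimal standard-library illustration with placeholder content and port, and, as the rest of this post explains, headers alone are not a complete answer.

```python
# Minimal sketch (standard library only): serve untrusted bytes with headers that
# discourage content sniffing and in-browser rendering. Content and port are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

class UntrustedContentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"user-supplied bytes would be loaded here"  # placeholder content
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("X-Content-Type-Options", "nosniff")
        self.send_header("Content-Disposition", 'attachment; filename="file.bin"')
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), UntrustedContentHandler).serve_forever()
```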
Google’s security team played an active role in investigating and remediating many content sniffing vulnerabilities during this period. In fact, many of the enforcement proposals were first prototyped in Chrome. Even so, overall progress is slow; for every resolved problem, researchers discover a previously unknown flaw in another browser mechanism. Two recent examples are the Byte Order Mark (BOM) vulnerability reported to us by Masato Kinugawa and the MHTML attacks that we have seen happening in the wild.
For a while, we focused on content sanitization as a possible workaround, but in many cases we found it to be insufficient. For example, Aleksandr Dobkin managed to construct a purely alphanumeric Flash applet, and in our internal work the Google security team created images that can be forced to include a particular plaintext string in their body after being scrubbed and recoded in a deterministic way.
In the end, we reacted to this raft of content hosting problems by placing some of the high-risk content in separate, isolated web origins—most commonly *.googleusercontent.com. There, the “sandboxed” files pose virtually no threat to the applications themselves, or to google.com authentication cookies. For public content, that’s all we need: we may use random or user-specific subdomains, depending on the degree of isolation required between unrelated documents, but otherwise the solution just works.
The situation gets more interesting for non-public documents, however. Copying users’ normal authentication cookies to the “sandbox” domain would defeat the purpose. The natural alternative is to move the secret token used to confer access rights from the Cookie header to a value embedded in the URL, and to make the token unique to every document instead of keeping it global.
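A minimal sketch of such per-document, URL-embedded tokens is shown below, using Python’s standard library: each URL carries an expiring, HMAC-signed token instead of relying on a global cookie. The key, domain, and URL layout are placeholders rather than Google’s actual scheme.

```python
# Minimal sketch of per-document capability URLs: an expiring, HMAC-signed token
# in the URL replaces a global cookie. Key, domain, and layout are placeholders.
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-a-randomly-generated-server-side-secret"

def make_token(document_id: str, ttl_seconds: int = 3600) -> str:
    expires = int(time.time()) + ttl_seconds
    payload = f"{document_id}:{expires}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(document_id: str, token: str) -> bool:
    try:
        doc, expires_text, sig = token.rsplit(":", 2)
        expires = int(expires_text)
    except ValueError:
        return False
    payload = f"{doc}:{expires_text}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return (doc == document_id
            and expires > time.time()
            and hmac.compare_digest(sig, expected))

if __name__ == "__main__":
    doc_id = "doc-12345"  # hypothetical document identifier
    url = f"https://sandbox.example-usercontent.com/{doc_id}?token={make_token(doc_id)}"
    print(url)
    print(verify_token(doc_id, url.split("token=")[1]))  # -> True
```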
While this solution eliminates many of the significant design flaws associated with HTTP cookies, it trades one imperfect authentication mechanism for another. In particular, it’s important to note that there are more ways to accidentally leak a capability-bearing URL than there are to accidentally leak cookies; the most notable risk is disclosure through the Referer header for any document format capable of including external subresources or of linking to external sites.
In our applications, we take a risk-based approach. Generally speaking, we tend to use three strategies:
In higher-risk situations (e.g., documents with an elevated risk of URL disclosure), we may couple the URL token scheme with short-lived, document-specific cookies issued for specific subdomains of googleusercontent.com. This mechanism, known within Google as FileComp, relies on a range of attack mitigation strategies that are too disruptive for Google applications at large, but work well in this highly constrained use case.
In cases where the risk of leaks is limited but responsive access controls are preferable (e.g., embedded images), we may issue URLs bound to a specific user, or ones that expire quickly.
In low-risk scenarios, where usability requirements necessitate a more balanced approach, we may opt for globally valid, longer-lived URLs.
Of course, the research into the security of web browsers continues, and the landscape of web applications is evolving rapidly. We are constantly tweaking our solutions to protect Google users even better, and even the solutions described here may change. Our commitment to making the Internet a safer place, however, will never waver.