Security Blog
The latest news and insights from Google on security and safety on the Internet
Beyond annoyance: security risks of unwanted ad injectors
April 16, 2015
Posted by Eric Severance, Software Engineer, Safe Browsing
Last month, we posted about unwanted ad injectors, a common side-effect of installing unwanted software. Ad injectors are often annoying, but in some cases, they can jeopardize users’ security as well. Today, we want to shed more light on how ad injector software can hijack even encrypted SSL browser communications.
How ad injectors jeopardize security
In the example below, the ad injector software tampers with the security trust store that your browser uses to establish a secure connection with your Gmail. This can give the injector access to your personal data and make your computer vulnerable to a 'man in the middle' attack.
SSL hijacking is completely invisible to users because hijacked browser sessions appear like any other secure browser session. The screenshot on the left shows a normal connection to Gmail; the one on the right shows the difference when an SSL hijacker is installed.
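To make the tampering concrete, here is a minimal sketch (not from the original post) of how one might inspect the certificate chain a site actually presents, using Node’s tls module. The hostname is illustrative; an unfamiliar issuer at the top of the chain is the kind of trace an SSL-hijacking injector leaves behind.

```typescript
// Illustrative sketch: print the issuer chain a TLS connection actually
// presents, so an unexpected locally installed root (the kind an
// SSL-hijacking injector adds) stands out. Inspection only:
// rejectUnauthorized is disabled so the chain prints even when it does not
// terminate in a trusted root. Never disable verification for real traffic.
import * as tls from "node:tls";

const host = "mail.google.com"; // illustrative target

const socket = tls.connect(
  { host, port: 443, servername: host, rejectUnauthorized: false },
  () => {
    let cert = socket.getPeerCertificate(true);
    while (cert && Object.keys(cert).length > 0) {
      console.log(`subject: ${cert.subject.CN}  issuer: ${cert.issuer.CN}`);
      // The self-signed root's issuerCertificate points back at itself.
      if (cert.issuerCertificate === cert) break;
      cert = cert.issuerCertificate;
    }
    socket.end();
  }
);

socket.on("error", (err) => console.error("connection failed:", err.message));
```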
You may recall the recent SuperFish/Komodia incident.
As has been reported, the Komodia SSL hijacker did not properly verify secure connections and it was not using keys in a secure way. This type of software puts users at additional risk by making it possible for remote attackers to impersonate web sites and expose users’ private data.
How to stay safe
Safe Browsing protects users from several classes of unwanted software that expose them to such risk. However, it never hurts to remain cautious when downloading software or browsing the web. When you are visiting a secure site, like your email or online banking site, pay extra attention to any unusual changes to the site’s content. If you notice unusual changes, like extra ads, coupons, or surveys, this may be an indication that your computer is infected with this type of unwanted software. Please also check out these tips to learn how you can stay safe on the web.
For software developers, if your software makes changes to the content of web sites, the safest way to make those changes is through a browser extension. This keeps users’ communications secure by relying on the browser’s security guarantees. Software that attempts to change browser behavior or content by any other means may be flagged as unwanted software.
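A minimal sketch of what that recommendation looks like in practice, assuming a hypothetical extension named ExampleExtension: a content script, declared in the extension’s manifest, modifies page content through the DOM the browser has already fetched over its own, validated TLS connection, so the browser’s security guarantees stay intact.

```typescript
// content-script.ts for a hypothetical extension ("ExampleExtension").
// Declared in the extension manifest, e.g.:
//   "content_scripts": [{ "matches": ["https://example.com/*"],
//                         "js": ["content-script.js"] }]
// The script only touches the DOM the browser has already fetched and
// validated; it never intercepts or re-terminates TLS connections.

function annotateArticles(): void {
  document.querySelectorAll<HTMLElement>("article").forEach((el) => {
    const badge = document.createElement("span");
    badge.textContent = " [annotated by ExampleExtension]"; // clearly disclosed change
    badge.style.fontStyle = "italic";
    el.appendChild(badge);
  });
}

annotateArticles();
```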
Android Security State of the Union 2014
April 2, 2015
Posted by Adrian Ludwig, Lead Engineer for Android Security
We’re committed to making Android a safe ecosystem for users and developers. That’s why we built Android the way we did—with multiple layers of security in the platform itself and in the services Google provides. In addition to traditional protections like encryption and application sandboxes, these layers use both automated and manual review systems to keep the ecosystem safe from malware, phishing scams, fraud, and spam every day.
Android offers an application-focused platform security model rooted in a strong application sandbox. We also use data to improve security in near real time through a combination of reliable products and trusted services, like Google Play and Verify Apps. And, because we are an open platform, third-party research and reports help make us stronger and users safer.
But, every now and then we like to check in to see how we’re doing. So, we’ve been working hard on a report that analyzes billions (!) of data points gathered every day during 2014 and provides comprehensive and in-depth insight into the security of the Android ecosystem. We hope this will help us share our approaches and data-driven decisions with the security community in order to keep users safer and avoid risk.
It’s lengthy, so if you’ve only got a minute, we pulled out a few of the key findings here:
Over 1 billion devices are protected with Google Play, which conducts 200 million security scans of devices per day.
Fewer than 1% of Android devices had a Potentially Harmful App (PHA) installed in 2014. Fewer than 0.15% of devices that only install from Google Play had a PHA installed.
The overall worldwide rate of PHA installs decreased by nearly 50% between Q1 and Q4 2014.
SafetyNet checks over 400 million connections per day for potential SSL issues.
Android and Android partners responded to 79 externally reported security issues, and over 25,000 applications in Google Play were updated following security notifications from Google Play.
We want to ensure that Android is a safe place, and this report has helped us take a look at how we did in the past year, and what we can still improve on. In 2015, we have already announced that we are being even more proactive in reviewing applications for all types of policy violations within Google Play. Outside of Google Play, we have also increased our efforts to enhance protections for specific higher-risk devices and regions.
As always, we appreciate feedback on our report and suggestions for how we can improve Android. Contact us at security@android.com.
Out with unwanted ad injectors
March 31, 2015
Posted by Nav Jagpal, Software Engineer, Safe Browsing
It’s tough to read the New York Times under these circumstances:
And it’s pretty unpleasant to shop for a Nexus 6 on a search results page that looks like this:
The browsers in the screenshots above have been infected with ‘ad injectors’. Ad injectors are programs that insert new ads, or replace existing ones, into the pages you visit while browsing the web. We’ve received more than 100,000 complaints from Chrome users about ad injection since the beginning of 2015—more than network errors, performance problems, or any other issue.
Injectors are yet another symptom of “unwanted software”—programs that are deceptive, difficult to remove, secretly bundled with other downloads, and have other bad qualities. We’ve made several recent announcements about our work to fight unwanted software via Safe Browsing, and now we’re sharing some updates on our efforts to protect you from injectors as well.
Unwanted ad injectors: disliked by users, advertisers, and publishers
Unwanted ad injectors aren’t part of a healthy ads ecosystem. They’re part of an environment where bad practices hurt users, advertisers, and publishers alike.
People don’t like ad injectors for several reasons: not only are they intrusive, but people are often tricked into installing them in the first place, via deceptive advertising or software “bundles.” Ad injection can also be a security risk, as the recent “Superfish” incident showed.
But ad injectors are problematic for advertisers and publishers as well. Advertisers often don’t know their ads are being injected, which means they don’t have any idea where their ads are running. Publishers, meanwhile, aren’t being compensated for these ads, and more importantly, they may unknowingly be putting their visitors in harm’s way, via spam or malware in the injected ads.
How Google fights unwanted ad injectors
We have a variety of policies that either limit, or entirely prohibit, ad injectors.
In Chrome, any extension hosted in the Chrome Web Store must comply with the Developer Program Policies. These require that extensions have a narrow and easy-to-understand purpose. We don’t ban injectors altogether—if they want to, people can still choose to install injectors that clearly disclose what they do—but injectors that sneak ads into a user’s browser would certainly violate our policies. We show people familiar red warnings when they are about to download software that is deceptive, or doesn’t use the right APIs to interact with browsers.
On the ads side, AdWords advertisers with software downloads hosted on their site, or linked to from their site, must comply with our Unwanted Software Policy. Additionally, both the Google Platforms program policies and the DoubleClick Ad Exchange (AdX) Seller Program Guidelines prohibit programs that overlay ad space on a given site without permission of the site owner.
To increase awareness about ad injectors and the scale of this issue, we’ll be releasing new research on May 1 that examines the ad injector ecosystem in depth. The study, conducted with researchers at the University of California, Berkeley, drew conclusions from more than 100 million pageviews of Google sites across Chrome, Firefox, and Internet Explorer on various operating systems, globally. It’s not a pretty picture. Here’s a sample of the findings:
Ad injectors were detected on all operating systems (Mac and Windows) and web browsers (Chrome, Firefox, and IE) included in our test.
More than 5% of people visiting Google sites have at least one ad injector installed. Within that group, half have at least two injectors installed and nearly one-third have at least four installed.
Thirty-four percent of Chrome extensions injecting ads were classified as outright malware.
Researchers found 192 deceptive Chrome extensions that affected 14 million users; these have since been disabled. Google now incorporates the techniques the researchers used to catch these extensions into its scanning of all new and updated extensions.
We’re constantly working to improve our product policies to protect people online. We encourage others to do the same. We’re committed to continuing to improve this experience for Google and the web as a whole.
Even more unwanted software protection via the Safe Browsing API
March 24, 2015
Posted by Emily Schechter, Safe Browsing Program Manager
Deceptive software disguised as a useful download harms your web experience by making undesired changes to your computer. Safe Browsing offers protection from such unwanted software by showing a warning in Chrome before you download these programs. In February, we started showing additional warnings in Chrome before you visit a site that encourages downloads of unwanted software.
Today, we’re adding information about unwanted software to our Safe Browsing API.
In addition to our constantly-updated malware and phishing data, our unwanted software data is now publicly available for developers to integrate into their own security measures. For example, any app that wants to save its users from winding up on sites that lead to deceptive software could use our API to do precisely that.
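As an illustration of the kind of check described above, here is a hedged sketch written against the Safe Browsing Lookup API in its later v4 form (the threatMatches:find endpoint, which exposes UNWANTED_SOFTWARE as a threat type and superseded the API version available when this was posted). The API key, client identifier, and URL are placeholders.

```typescript
// Sketch of a lookup against the Safe Browsing Lookup API v4.
// The API key and the checked URL below are placeholders.
const API_KEY = "YOUR_API_KEY";
const ENDPOINT = `https://safebrowsing.googleapis.com/v4/threatMatches:find?key=${API_KEY}`;

async function checkUrl(url: string): Promise<boolean> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      client: { clientId: "example-client", clientVersion: "1.0" },
      threatInfo: {
        // UNWANTED_SOFTWARE is the threat type this post is about.
        threatTypes: ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
        platformTypes: ["ANY_PLATFORM"],
        threatEntryTypes: ["URL"],
        threatEntries: [{ url }],
      },
    }),
  });
  const body = await res.json();
  // An empty response means no match; a "matches" array means the URL is flagged.
  return Array.isArray(body.matches) && body.matches.length > 0;
}

// Usage: checkUrl("http://example.com/").then((flagged) => console.log(flagged));
```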
We continue to integrate Safe Browsing technology across Google—in Chrome, Google Analytics, and more—to protect users. Our Safe Browsing API helps extend our malware, phishing, and unwanted software protection to keep more than 1.1 billion users safe online.
Check out our updated API documentation here.
Maintaining digital certificate security
March 23, 2015
Posted by Adam Langley, Security Engineer
On Friday, March 20th, we became aware of unauthorized digital certificates for several Google domains. The certificates were issued by an intermediate certificate authority apparently held by a company called MCS Holdings. This intermediate certificate was issued by CNNIC.
CNNIC is included in all major root stores and so the misissued certificates would be trusted by almost all browsers and operating systems. Chrome on Windows, OS X, and Linux, ChromeOS, and Firefox 33 and greater would have rejected these certificates because of public-key pinning, although misissued certificates for other sites likely exist.
We promptly alerted CNNIC and other major browsers about the incident, and we blocked the MCS Holdings certificate in Chrome with a CRLSet push. CNNIC responded on the 22nd to explain that they had contracted with MCS Holdings on the basis that MCS would only issue certificates for domains that they had registered. However, rather than keep the private key in a suitable HSM, MCS installed it in a man-in-the-middle proxy. These devices intercept secure connections by masquerading as the intended destination and are sometimes used by companies to intercept their employees’ secure traffic for monitoring or legal reasons. The employees’ computers normally have to be configured to trust a proxy for it to be able to do this. However, in this case, the presumed proxy was given the full authority of a public CA, which is a serious breach of the CA system. This situation is similar to a failure by ANSSI in 2013.
This explanation is congruent with the facts. However, CNNIC still delegated their substantial authority to an organization that was not fit to hold it.
Chrome users do not need to take any action to be protected by the CRLSet updates. We have no indication of abuse and we are not suggesting that people change passwords or take other action. At this time we are considering what further actions are appropriate.
This event also highlights, again, that the Certificate Transparency effort is critical for protecting the security of certificates in the future.
(Details of the certificate chain for software vendors can be found here.)
Update - April 1: As a result of a joint investigation of the events surrounding this incident by Google and CNNIC, we have decided that the CNNIC Root and EV CAs will no longer be recognized in Google products. This will take effect in a future Chrome update. To assist customers affected by this decision, for a limited time we will allow CNNIC’s existing certificates to continue to be marked as trusted in Chrome, through the use of a publicly disclosed whitelist. While neither we nor CNNIC believe any further unauthorized digital certificates have been issued, nor do we believe the misissued certificates were used outside the limited scope of MCS Holdings’ test network, CNNIC will be working to prevent any future incidents. CNNIC will implement Certificate Transparency for all of their certificates prior to any request for reinclusion. We applaud CNNIC on their proactive steps, and welcome them to reapply once suitable technical and procedural controls are in place.
Safe Browsing and Google Analytics: Keeping More Users Safe, Together
February 26, 2015
Posted by Stephan Somogyi, Product Manager, Security and Privacy
[Cross-posted on the Google Analytics Blog]
If you run a web site, you may already be familiar with Google Webmaster Tools and how it lets you know if Safe Browsing finds something problematic on your site. For example, we’ll notify you if your site is delivering malware, which is usually a sign that it’s been hacked. We’re extending our Safe Browsing protections to automatically display notifications to all Google Analytics users via familiar Google Analytics Notifications.
Google Safe Browsing has been protecting people across the Internet for over eight years and we're always looking for ways to extend that protection even further. Notifications like these help webmasters like you act quickly to respond to any issues. Fast response helps keep your site—and your visitors—safe.
Pwnium V: the never-ending* Pwnium
February 24, 2015
Posted by Tim Willis, Hacker Philanthropist, Chrome Security Team
[Cross-posted from the Chromium Blog]
Around this time each year we announce the rules, details and maximum cash amounts we’re putting up for our Pwnium competition. For the last few years we put a huge pile of cash on the table (last year it was e million) and gave researchers one day during CanSecWest to present their exploits. We’ve received some great entries over the years, but it’s time for something bigger.
Starting today, Pwnium will change its scope significantly, from a single-day competition held once a year at a security conference to a year-round, worldwide opportunity for security researchers.
For those who are interested in what this means for the Pwnium rewards pool, we crunched the numbers and the results are in: it now goes all the way up to $∞ million*.
We’re making this change for a few reasons:
Removing barriers to entry:
At Pwnium competitions, a security researcher would need to have a bug chain in March, pre-register, have a physical presence at the competition location and hopefully get a good timeslot. Under the new scheme, security researchers can submit their bugs year-round through the Chrome Vulnerability Reward Program (VRP) whenever they find them.
Removing the incentive for bug hoarding:
If a security researcher were to discover a Pwnium-quality bug chain today, it’s highly likely that they would wait until the contest to report it to get a cash reward. This is a bad scenario for all parties. It’s bad for us because the bug doesn’t get fixed immediately and our users are left at risk. It’s bad for them as they run the real risk of a bug collision. By allowing security researchers to submit bugs all year-round, collisions are significantly less likely and security researchers aren’t duplicating their efforts on the same bugs.
Our researchers want this:
On top of all of these reasons, we asked our handful of participants if they wanted an option to report all year. They did, so we’re delivering.
Logistically, we’ll be adding Pwnium-style bug chains on Chrome OS to the Chrome VRP. This will increase our top reward to $50,000, which will be on offer all year-round. Check out our FAQ for more information.
Happy hunting!
*Our lawyercats wouldn’t let me say “never-ending” or “infinity million” without adding that “this is an experimental and discretionary rewards program and Google may cancel or modify the program at any time.” Check out the reward eligibility requirements on the Chrome VRP page.
More Protection from Unwanted Software
February 23, 2015
Posted by Lucas Ballard, Software Engineer
Safe Browsing helps keep you safe online and includes protection against unwanted software that makes undesirable changes to your computer or interferes with your online experience.
We recently expanded our efforts in Chrome, Search, and ads to keep you even safer from sites where these nefarious downloads are available.
Chrome: Now, in addition to showing warnings before you download unwanted software, Chrome will show you a new warning, like the one below, before you visit a site that encourages downloads of unwanted software.
Search: Google Search now incorporates signals that identify such deceptive sites. This change reduces the chances you’ll visit these sites via our search results.
Ads: We recently began to disable Google ads that lead to sites with unwanted software.
If you’re a site owner, we recommend that you register your site with Google Webmaster Tools. This will help you stay informed when we find something on your site that leads people to download unwanted software, and will provide you with helpful tips to resolve such issues.
We’re constantly working to keep people safe across the web.
Read more about Safe Browsing technology and our work to protect users here.
Using Google Cloud Platform for Security Scanning
February 19, 2015
Posted by Rob Mann, Security Engineering Manager
[Cross-posted from the Google Cloud Platform Blog]
Deploying a new build is a thrill, but every release should be scanned for security vulnerabilities. And while web application security scanners have existed for years, they’re not always well-suited for Google App Engine developers. They’re often difficult to set up, prone to over-reporting issues (false positives)—which can be time-consuming to filter and triage—and built for security professionals, not developers.
Today, we’re releasing Google Cloud Security Scanner in beta. If you’re using App Engine, you can easily scan your application for two very common vulnerabilities: cross-site scripting (XSS) and mixed content.
While designing Cloud Security Scanner we had three goals:
Make the tool easy to set up and use
Detect the most common issues App Engine developers face with minimal false positives
Support scanning rich, JavaScript-heavy web applications
To try it for yourself, select Compute > App Engine > Security scans in the Google Developers Console to run your first scan, or learn more here.
So How Does It Work?
Crawling and testing modern HTML5, JavaScript-heavy applications with rich multi-step user interfaces is considerably more challenging than scanning a basic HTML page. There are two general approaches to this problem:
Parse the HTML and emulate a browser. This is fast; however, it comes at the cost of missing site actions that require a full DOM or complex JavaScript operations.
Use a real browser. This approach avoids the parser coverage gap and most closely simulates the site experience. However, it can be slow due to event firing, dynamic execution, and time needed for the DOM to settle.
Cloud Security Scanner addresses the weaknesses of both approaches by using a multi-stage pipeline. First, the scanner makes a high-speed pass, crawling and parsing the HTML. It then executes a slow and thorough full-page render to find the more complex sections of your site.
While faster than a real browser crawl, this process is still too slow. So we scale horizontally. Using Google Compute Engine, we dynamically create a botnet of hundreds of virtual Chrome workers to scan your site. Don’t worry, each scan is limited to 20 requests per second or lower.
Then we attack your site (again, don’t worry)! When testing for XSS, we use a completely benign payload that relies on Chrome DevTools to execute the debugger. Once the debugger fires, we know we have JavaScript code execution, so false positives are (almost) non-existent. While this approach comes at the cost of missing some bugs due to application specifics, we think that most developers will appreciate a low effort, low noise experience when checking for security issues—we know Google developers do!
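The sketch below is not how Cloud Security Scanner itself works (it hooks the DevTools debugger, as described above). It is a simplified illustration of the same principle: inject a benign payload whose execution produces an out-of-band signal the scanner can observe. It assumes Puppeteer, and the target URL and query parameter are placeholders.

```typescript
// Simplified XSS probe: inject a benign payload whose execution produces a
// signal observable from outside the page (here, a console message with a
// unique token). The target URL and query parameter are placeholders.
import puppeteer from "puppeteer";

const TOKEN = "xss-probe-7f3a9c";
// If the parameter is reflected unescaped, the onerror handler runs and logs the token.
const PAYLOAD = `<img src=x onerror="console.log('${TOKEN}')">`;

async function probe(targetUrl: string): Promise<boolean> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  let executed = false;
  page.on("console", (msg) => {
    if (msg.text().includes(TOKEN)) executed = true;
  });

  const url = `${targetUrl}?q=${encodeURIComponent(PAYLOAD)}`;
  await page.goto(url, { waitUntil: "networkidle0" });

  await browser.close();
  return executed; // true means the payload ran, i.e. likely reflected XSS
}

probe("https://example.com/search").then((vulnerable) =>
  console.log(vulnerable ? "payload executed (likely XSS)" : "no execution observed")
);
```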
As with all dynamic vulnerability scanners, a clean scan does not necessarily mean your site is free of security bugs. We still recommend a manual security review by your friendly web app security professional.
Ready to get started? Learn more here. Cloud Security Scanner is currently in beta with many more features to come, and we’d love to hear your feedback. Simply click the “Feedback” button directly within the tool.
Feedback and data-driven updates to Google’s disclosure policy
February 13, 2015
Posted by Chris Evans and Ben Hawkes, Project Zero; Heather Adkins, Matt Moore and Michal Zalewski, Google Security; Gerhard Eschelbeck, Vice President, Google Security
[Cross-posted from the Project Zero blog]
Disclosure deadlines have long been an industry standard practice. They improve end-user security by getting security patches to users faster. As noted in CERT’s 45-day disclosure policy, they also “balance the need of the public to be informed of security vulnerabilities with vendors' need for time to respond effectively”. Yahoo!’s 90-day policy notes that “Time is of the essence when we discover these types of issues: the more quickly we address the risks, the less harm an attack can cause”. ZDI’s 120-day policy notes that releasing vulnerability details can “enable the defensive community to protect the user”.
Deadlines also acknowledge an uncomfortable fact that is alluded to by some of the above policies: the offensive security community invests considerably more into vulnerability research than the defensive community. Therefore, when we find a vulnerability in a high profile target, it is often already known by advanced and stealthy actors.
Project Zero has adhered to a 90-day disclosure deadline. Now we are applying this approach for the rest of Google as well. We notify vendors of vulnerabilities immediately, with details shared in public with the defensive community after 90 days, or sooner if the vendor releases a fix. We’ve chosen a middle-of-the-road deadline timeline and feel it’s reasonably calibrated for the current state of the industry.
To see how things are going, we crunched some data on Project Zero’s disclosures to date. For example, the Adobe Flash team probably has the largest install base and number of build combinations of any of the products we’ve researched so far. To date, they have fixed 37 Project Zero vulnerabilities (or 100%) within the 90-day deadline. More generally, of 154 Project Zero bugs fixed so far, 85% were fixed within 90 days. Restrict this to the 73 issues filed and fixed after Oct 1st, 2014, and 95% were fixed within 90 days. Furthermore, recent well-discussed deadline misses were typically fixed very quickly after 90 days. Looking ahead, we’re not going to have any deadline misses for at least the rest of February.
Deadlines appear to be working to improve patch times and end user security—especially when enforced consistently.
We’ve studied the above data and taken on board some great debate and external feedback around some of the corner cases for disclosure deadlines. We have improved the policy in the following ways:
Weekends and holidays.
If a deadline is due to expire on a weekend or US public holiday, the deadline will be moved to the next normal work day.
Grace period.
We now have a 14-day grace period. If a 90-day deadline will expire but a vendor lets us know before the deadline that a patch is scheduled for release on a specific day within 14 days following the deadline, the public disclosure will be delayed until the availability of the patch. Public disclosure of an unpatched issue now only occurs if a deadline will be significantly missed (2 weeks+).
Assignment of CVEs.
CVEs are an industry standard for uniquely identifying vulnerabilities. To avoid confusion, it’s important that the first public mention of a vulnerability should include a CVE. For vulnerabilities that go past deadline, we’ll ensure that a CVE has been pre-assigned.
As always, we reserve the right to bring deadlines forwards or backwards based on extreme circumstances. We remain committed to treating all vendors strictly equally. Google expects to be held to the same standard; in fact, Project Zero has bugs in the pipeline for Google products (Chrome and Android) and these are subject to the same deadline policy.
Putting everything together, we believe the policy updates are still strongly in line with our desire to improve industry response times to security bugs, but will result in softer landings for bugs marginally over deadline. Finally, we’d like to call on all researchers to adopt disclosure deadlines in some form, and feel free to use our policy verbatim if you find our data and reasoning compelling. We’re excited by the early results that disclosure deadlines are delivering—and with the help of the broader community, we can achieve even more.
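To make the deadline arithmetic above concrete, here is a small illustrative sketch: our reading of the weekend and grace-period rules, not Google’s tooling. US public holidays are omitted for brevity, and the dates in the usage example are hypothetical.

```typescript
// Illustrative deadline arithmetic: 90 days from the report, shifted off
// weekends (US public holidays omitted for brevity), with up to a 14-day
// grace period when the vendor commits to a patch date inside that window.
const DAY_MS = 24 * 60 * 60 * 1000;

function addDays(d: Date, days: number): Date {
  return new Date(d.getTime() + days * DAY_MS);
}

function skipWeekend(d: Date): Date {
  // Move Saturday/Sunday deadlines to the next normal work day.
  while (d.getUTCDay() === 0 || d.getUTCDay() === 6) d = addDays(d, 1);
  return d;
}

function disclosureDate(reported: Date, promisedPatch?: Date): Date {
  const deadline = skipWeekend(addDays(reported, 90));
  if (promisedPatch && promisedPatch.getTime() <= addDays(deadline, 14).getTime()) {
    // Grace period: wait for the scheduled patch instead of disclosing.
    return promisedPatch;
  }
  return deadline;
}

// Hypothetical example: reported 2015-02-13, vendor schedules a patch 8 days
// past the 90-day mark, which falls inside the 14-day grace period.
const reported = new Date(Date.UTC(2015, 1, 13));
const promised = addDays(addDays(reported, 90), 8);
console.log(disclosureDate(reported, promised).toDateString());
```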