Official Blog
Insights from Googlers into our products, technology, and the Google culture
Creating a world that works for everyone with Google Impact Challenge: Disabilities
April 12, 2016
More than a billion people have a disability. And regardless of the country or community they live in, the gaps in opportunity for people with disabilities are striking: One in three people with a disability lives in poverty. In places like the United States, 50 to 70 percent of people with disabilities are unemployed; in developing countries that number increases to 80 to 90 percent. And only 10 percent of people with disabilities in developing countries have access to the assistive devices they need.
Last spring, Google.org kicked off the Google Impact Challenge: Disabilities, an open call to global nonprofits who are building transformative technologies for the billion people around the world with disabilities. We’ve been amazed by the ideas we’ve received, coming from 1,000+ organizations spanning 88 countries. We’ve shared a handful of the organizations we’re supporting already—and today we’re excited to share the full list of 30 winners.
The organizations we’re supporting all have big ideas for how technology can help create new solutions, and each of their ideas has the potential to scale. Each organization has also committed to open sourcing their technology—which helps encourage and speed up innovation in a sector that has historically been siloed. Meet some of our incredible grantees below, and learn more about all 30 organizations working to improve mobility, communication, and independence for people living with disabilities at g.co/disabilities.
The Center for Discovery, $1.125 million Google.org grant
Power wheelchairs help provide greater independence to people with mobility limitations—allowing them to get around without a caregiver, or travel longer distances. But power chairs are expensive and often not covered by insurance, leaving many people limited to manual wheelchairs.
With their Google.org grant, the Center for Discovery will continue developing an open source power add-on device, the indieGo, which quickly converts any manual wheelchair into a power chair. The power add-on will provide the mobility and freedom of a power chair for around one-seventh the average cost, and will allow people who mainly use a manual wheelchair to have the option of using power when they need it. The device design will be open sourced to increase its reach—potentially improving mobility for hundreds of thousands of people.
A young man using the indieGo to greet friends.
Perkins School for the Blind, $750,000 Google.org grant
Turn-by-turn GPS navigation allows people with visual impairments to get around, but once they get in vicinity of their destination, they often struggle to find specific locations like bus stops or building entrances that GPS isn’t precise enough to identify. (This is often called the “last 50 feet problem.”) Lacking the detailed information they need to find specific new places, people tend to limit themselves to familiar routes, leading to a less independent lifestyle.
With the support of Google.org, Perkins School for the Blind is building tools to crowdsource data from people with sight to help people navigate the last 50 feet. Using an app, people will log navigation clues in a standard format, which will be used to create directions that lead vision-impaired people precisely to their intended destination. Perkins School for the Blind is collaborating with transit authorities who will provide access to transportation data and support Perkins’ mission of making public transportation accessible to everyone.
Perkins School for the Blind employee, Joann Becker, travels by bus. It can be hard for people with visual impairments to locate the exact location of bus stops and other landmarks.
Miraclefeet, $1 million Google.org grant
An estimated 1 million children currently live with untreated clubfoot, a lifelong disability that often leads to isolation, limited access to education, and poverty. Clubfoot can be treated without surgery, but treatment practices are not widely used in many countries around the world.
Miraclefeet partners with local healthcare providers to increase access to proper treatment for children born with clubfoot. With Google.org’s support, they will reach families via SMS, monitor patient progress through updated software, and provide extensive online training to local clinicians. To date, Miraclefeet has helped facilitate treatment for more than 13,000 children in 13 countries; this effort will help them significantly scale up their work to reach thousands more.
Miraclefeet helps partners use a simple, affordable brace as part of the clubfoot treatment. Here, a doctor in India shows a mother how to use the miraclefeet brace.
Ezer Mizion and Click2Speak, $400,000 Google.org grant
People with high cognitive function but impaired motor skills often have a hard time communicating, whether by speaking or by typing on a standard keyboard. Augmentative and alternative communication (AAC) devices help people communicate more easily, but are often unaffordable and restricted to specific platforms or inputs. Without an AAC device, people may have difficulty maintaining personal relationships and professional productivity.
Ezer Mizion is working with Click2Speak to build an affordable, flexible, and customizable on-screen keyboard that allows people to type without use of their hands. With the grant from Google.org, Ezer Mizion and Click2Speak will gather more user feedback to improve the technology, including support for additional languages, operating systems, and input devices like switches, joysticks, or eye trackers.
A young girl learns to use the Click2Speak on-screen keyboard with a joystick controller.
From employment to education, communication to mobility, each of our grantees is pushing innovation for people with disabilities forward. In addition to these grants, we’re always working to make our own technology more accessible, and yesterday we shared some of the latest on this front, including voice typing in Google Docs and a new tool that helps Android developers build more accessible apps. With all these efforts, our aim is to create a world that works for everyone.
Posted by Brigitte Hoyer Gosselink, Google Impact Challenge: Disabilities Project Lead for Google.org
Building more accessible technology
April 11, 2016
Nearly 20 percent of the U.S. population will have a disability during their lifetime, which can make it hard for them to access and interact with technology, and limits the opportunity that technology can bring. That’s why it’s so important to build tools to make technology accessible to everyone—from people with visual impairments who need screen readers or larger text, to people with motor restrictions that prevent them from interacting with a touch screen, to people with hearing impairments who cannot hear their device’s sounds. Here are some updates we’ve made recently to make our technology more accessible:
A tool to help develop accessible apps
Accessibility Scanner is a new tool for Android that lets developers test their own apps and receive suggestions on ways to enhance accessibility. For example, the tool might recommend enlarging small buttons, increasing the contrast between text and its background, and more.
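To give a sense of the kind of check such a tool can run, here is a minimal sketch of the contrast computation published in WCAG 2.0 (relative luminance and contrast ratio); this is the standard formula, not Accessibility Scanner’s actual code:

```python
def relative_luminance(rgb):
    # WCAG 2.0 relative luminance for an sRGB color with 0-255 channels.
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Ratio of lighter to darker luminance, each offset by 0.05;
    # the result ranges from 1:1 (identical) to 21:1 (black on white).
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # 21.0
print(contrast_ratio((100, 100, 100), (255, 255, 255)) >= 4.5)   # True
```

A scanner-style tool would compare ratios like these against the 4.5:1 (normal text) and 3:1 (large text) thresholds before suggesting a fix.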
Improvements for the visually impaired in Android N
A few weeks ago we announced a preview of Android N for developers. As part of this update we’re bringing Vision Settings—which lets people control settings like magnification, font size, display size and TalkBack—to the Welcome screen that appears when people activate new Android devices. Putting Vision Settings front and center means someone with a visual impairment can independently set up their own device and activate the features they need, right from the start.
An improved screen reader on Chromebooks
Every Chromebook comes with a built-in screen reader called ChromeVox, which enables people with visual impairments to navigate the screen using text-to-speech software. Our newest version, ChromeVox Next Beta, includes a simplified keyboard shortcut model, a new caption panel to display speech and Braille output, and a new set of navigation sounds. For more information, visit chromevox.com.
Edit documents with your voice
Google Docs now allows typing, editing and formatting using voice commands—for example, “copy” or “insert table”—making it easier for people who can’t use a touchscreen to edit documents. We’ve also continued to work closely with Freedom Scientific, a leading provider of assistive technology products, to improve the Google Docs and Drive experience with the JAWS screen reader.
Voice commands on Android devices
We recently launched Voice Access Beta, an app that allows people who have difficulty manipulating a touch screen due to paralysis, tremor, temporary injury or other reasons to control their Android devices by voice. For example, you can say “open Chrome” or “go home” to navigate around the phone, or interact with the screen by saying “click next” or “scroll down.” To download, follow the instructions at http://g.co/voiceaccess.
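To illustrate the general shape of mapping spoken phrases like these to device actions, here is a toy dispatcher; the verbs and action strings are invented for the example, and this is not how Voice Access is actually implemented:

```python
# Hypothetical phrase-to-action mapping: the first word selects a handler,
# the remainder of the utterance is passed through as the argument.
COMMANDS = {
    "open": lambda arg: f"launch app: {arg}",
    "click": lambda arg: f"tap control labeled: {arg}",
    "scroll": lambda arg: f"scroll {arg}",
    "go": lambda arg: f"navigate {arg}",
}

def dispatch(utterance):
    # Split off the leading verb; unknown verbs fall through gracefully.
    verb, _, rest = utterance.strip().partition(" ")
    handler = COMMANDS.get(verb.lower())
    return handler(rest) if handler else f"unrecognized: {utterance}"

print(dispatch("open Chrome"))   # launch app: Chrome
print(dispatch("scroll down"))   # scroll down
```

A real system also has to handle speech-recognition ambiguity and map labels to on-screen controls, which this sketch deliberately ignores.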
To learn more about Google accessibility as a whole, visit google.com/accessibility.
Posted by Eve Andersson, Manager, Accessibility Engineering
On IDPD, working toward a more accessible and inclusive world
December 3, 2015
We believe in a world built for everyone, which is why we launched the global Google Impact Challenge: Disabilities earlier this year. The Impact Challenge is a Google.org initiative to invest $20 million in nonprofits who are using technology to make the world more accessible for the 1 billion people living with disabilities.
Today, as part of the program, we’re proud to celebrate the U.N. International Day of Persons with Disabilities with three new grants, totaling $2.95 million. Through our grants, the Royal London Society for Blind People will develop the Wayfindr project, helping visually impaired people navigate the London Underground; Israeli NGO Issie Shapiro will distribute Sesame, an app that allows people with mobility impairments to control a smartphone using only head movements; and, finally, German grantee Wheelmap will expand its accessibility mapping efforts worldwide. This week, many Googlers around the world will also join Wheelmap’s Map My Day campaign to help out.
We’ve also collected 11 tips that help people with disabilities get more out of their favorite Google products. (Why 11? It’s a play on “a11y,” tech-speak for “accessibility.”)
Much of the accessibility work we do is driven by passionate Googlers from around the world. To give you a look at what motivates us to make Google, and the world, more inclusive, we asked four Googlers from our Disability Alliance to share more about what they’re working on:
Kiran Kaja, Technical Program Manager, London:
Being blind from birth, I’ve always been excited by devices that talk to you or allow you to talk back to them. Today, I work on Google’s Text to Speech team developing technologies that talk to people with disabilities. I’m also helping improve eyes-free voice actions on Android so that people with low vision can accomplish standard tasks just by talking to their phone. This not only helps people with disabilities, but anyone whose hands are busy with another task—like cooking, driving or caring for an infant. The advances we’re making in speech recognition and text to speech output promise a bright future for voice user interfaces.
Paul Herzlich, Legal Analytics Specialist, Mountain View:
As a wheelchair user from a spinal cord injury, I'm passionate about the potential impact of technology to solve disability-related issues. Outside of my job, I'm working alongside a team of mechanical and electrical engineers, UX designers, and medical professionals to develop a new technology called SmartSeat, which I hope to bring to life in tandem with Google.org through its Google Impact Challenge: Disabilities. SmartSeat is a device that notifies wheelchair users when they have been sitting in the same position for too long by using force sensors connected to a mobile app, thereby helping these users prevent pressure sores. You can watch a video of the early prototype on YouTube.
Aubrie Lee, Associate Product Marketing Manager, Mountain View:
Like many other disabled people, I’ve spent most of my life as the minority in the room. In high school, I attended a state forum on disability and felt what it was like to be in the majority. Now, I work to create that feeling for other disabled people. I started the Googler Disability Community, a group that works on changing Google’s physical environment and workplace systems to help make our company truly inclusive. Outside of my job, I enjoy exploring the beauty in disability through photography and poetry. My own disabilities and the way they influence my interactions with others provide endless inspiration for my art.
Pablo Pacca, Language Market Manager, São Paulo:
I’m in charge of making sure Google’s products are translated well into Brazilian Portuguese for the 180+ million Brazilians who don’t speak English. I’m also an activist and advocate for accessibility and inclusion, both as a blogger on disability issues and the lead for the Google Brazil People with Disabilities (PwD) group. At PwD Brazil, we educate Googlers about disability issues, and work to foster a more accessible office space and inclusive work environment across the company.
Posted by Jacquelline Fuller, Director of Google.org
Google Drive and the Docs editors: designed with everyone in mind
September 11, 2014
Imagine trying to keep track of another person’s real-time edits in a document—using only your ears. Or trying to create a table from spreadsheet data—without being able to clearly see the cells. Whether you’re backing up a file in Drive or crunching some numbers in Sheets, it should be easy to bring your ideas to life using Google’s tools. But if you’re blind or have low vision, you may need to rely on assistive technologies such as screen readers and Braille displays—and that can make working in the cloud challenging. While screen readers can parse static webpages (like this blog) relatively easily, it’s much harder for them to know what to say in interactive applications like Google Docs because the actions they need to describe are much more complex.
With these reasons in mind, today we’re announcing some improvements to Drive and all our editors—Docs, Sheets, Slides, Drawings, and Forms—specifically designed with blind and low-vision users in mind.
Improved screen reader support in Drive and Docs
In June, we introduced a new version of Drive that’s sleeker, easier to navigate and much faster. But just as importantly, the new Drive also includes better keyboard accessibility, support for zoom and high-contrast mode, and improved usability with screen readers.
Across Docs, Sheets, Slides, Drawings and Forms, you’ll find that it’s now much easier to use a screen reader, with nicer text-to-voice verbalization and improvements to keyboard navigation. You’ll also notice other updates, including:
Support for alt text on images in Docs, so you can tell a screen reader what it should say to describe an image
Better support for using a keyboard to edit charts and pivot tables in Sheets
Additional screen reader improvements specifically for Docs, Sheets and Slides, including support for spelling suggestions, comments and revision history
The ability to quickly search the menus and perform actions in Docs, Slides and Drawings (and soon Sheets and Forms)—even if you don’t know the action’s key sequence
Collaborating with others is easier too: in Docs, Sheets, Slides or Drawings, screen readers announce when people enter or leave the document, and you’ll now also hear when others are editing alongside you.
Refreshable Braille display support
If you use a Braille display, you can now use it to read and enter text in Docs, Slides and Drawings. Even if you don't use a Braille display, enabling Braille support means your screen reader’s settings for character echoing are automatically followed. Enabling Braille also dramatically reduces the lag between when you press a key and when it’s announced by your screen reader, and improves the announcements of punctuation and whitespace. Learn how to enable Braille support in our Help Center.
Get up and going faster
The first time you use a screen reader or a Braille display, getting up to speed can be a daunting task. But it’s simpler with new step-by-step guides for Drive, Docs, Sheets, Slides, Forms and Drawings.
You can also access the in-product “Help” menu at any time without interrupting your work, or use the updated shortcut help dialog to easily search through keyboard shortcuts if you don’t remember them.
Finally, we’re offering phone support for Google Drive accessibility questions. If you get stuck, visit support.google.com/drive to request a phone call and someone from our team will reach out to you.
What’s next
Referring to recent updates to Google Drive, Dr. Marc Maurer, President of the National Federation of the Blind, said at this year’s National Convention: “The progress...during the last few months has just been positively extraordinary.” We’re pleased the community has welcomed these improvements, and will continue to work with organizations like the NFB to make even more progress.
Everyone, regardless of ability, should be able to experience all that the web has to offer. To find out more about our commitment to a fully accessible web, visit the new Google Accessibility site at www.google.com/accessibility.
Posted by Alan Warren, Vice President, Engineering
Making the cloud more accessible with Chrome and Android
February 28, 2013
If you’re a blind or low-vision user, you know that working in the cloud poses unique challenges. Our accessibility team had an opportunity to address some of those challenges at the 28th annual CSUN International Technology and Persons with Disabilities Conference this week. While there, we led a workshop on how we’ve been improving the accessibility of Google technologies. For all those who weren’t at the conference, we want to share just a few of those improvements and updates:
Chrome and Google Apps
Chrome OS now supports a high-quality text-to-speech voice (starting with U.S. English). We’ve also made spoken feedback, along with screen magnification and high-contrast mode available out-of-the-box to make Chromebook and Chromebox setup easier for users with accessibility needs.
Gmail now has a consistent navigation interface, backed by HTML5 ARIA, which enables blind and low-vision users to effectively navigate using a set of keyboard commands.
It’s now much easier to access content in your Google Drive using a keyboard—for example, you can navigate a list of files with just the arrow keys. In Docs, you can access features using the keyboard, with a new way to search menu and toolbar options. New keyboard shortcuts and verbalization improvements also make it easier to use Docs, Sheets and Slides with a screenreader.
The latest stable version of Chrome, released last week, includes support for the Web Speech API, which developers can use to integrate speech recognition capabilities into their apps. At CSUN, our friends from Bookshare demonstrated how they use this new functionality to deliver ReadNow—a fully integrated ebook reader for users with print disabilities.
Finally, we released a new Help Center Guide specifically for blind and low-vision users to ease the transition to using Google Apps.
Android
We added Braille support to Android 4.1; since then, Braille support has been expanded in Google Drive for Android, making it easier to read and edit your documents. You can also use TalkBack with Docs and Sheets to edit on the go.
With Gesture Mode in Android 4.1, you can reliably navigate the UI using touch and swipe gestures in combination with speech output.
Screen magnification is now built into Android 4.2—just enable “Magnification gestures,” then triple tap to enter full screen magnification.
The latest release of TalkBack (available on Play soon) includes several highly requested features like structured browsing of web content and the ability to easily suspend/resume TalkBack via an easy-to-use radial menu.
These updates to Chrome, Google Apps, and Android will help create a better overall experience for our blind and low-vision users, but there’s still room for improvement. Looking ahead, we’re focused on the use of accessibility APIs that will make it easier for third-party developers to create accessible web applications, as well as pushing the state of the art forward with technologies like speech recognition and text-to-speech. We’re looking forward to working with the rest of the industry to make computers and the web more accessible for everyone.
Posted by T.V. Raman, Engineering Lead, Google Accessibility
Greater accessibility for Google Apps
September 19, 2012
It's been a year since we posted about enhanced accessibility in Google Docs, Sites and Calendar. As we close out another summer, we want to update our users on some of the new features and improvements in our products since then. We know that assistive technologies for the web are still evolving, and we're committed to moving the state of accessibility forward in our applications.
Since last year, we've made a number of accessibility fixes in Google Calendar, including improved focus handling, keyboard access, and navigation. In Google Drive, we incorporated Optical Character Recognition technology to allow screen readers to read text in scanned PDFs and images, and we added NVDA support for screen readers. New accessibility features in mobile apps (Gmail for Mobile and Google Drive on iOS and Android) included enhanced explore-by-touch capabilities and keyboard/trackpad navigability. For a full list of new features and improvements for accessibility in our products, check out our post today on accessible@googlegroups.com.
Based on these updates, we’ve also created an Administrator Guide to Accessibility that explains best practices for deploying Google Apps to support users’ accessibility needs. We want to give everyone a great experience with Google Apps, and this guide is another resource designed with that goal in mind.
For more information on these specific accessibility improvements, using Google products with screen readers, how to submit feedback and how to track our progress, please visit www.google.com/accessibility.
Posted by Jeff Harris, Product Manager
A look inside our 2011 diversity report
May 18, 2012
We work hard to ensure that our commitment to diversity is built into everything we do—from hiring our employees and building our company culture to running our business and developing our products, tools and services. To recap our diversity efforts in 2011, a year in which we partnered with and donated $19 million to more than 150 organizations working on advancing diversity, we created the 2011 Global Diversity & Talent Inclusion Report. Below are some highlights.
In the U.S., fewer and fewer students are graduating with computer science degrees each year, and enrollment rates are even lower for women and underrepresented groups. It’s important to grow a diverse talent pool and help develop the technologists of tomorrow who will be integral to the success of the technology industry. Here are a few of the things we did last year aimed at this goal in the U.S. and around the world:
We held our third annual HBCU (Historically Black Colleges and Universities) Faculty Summit at Google New York, hosting 50 professors and administrators from 16 HBCUs, who came together to collaborate, share insights and engage with Googlers.
We helped 100,000 students and faculty at 22 HBCUs in the U.S. “go Google”; they now use Google Apps for Education.
To date, 3,000 students in 77 countries have received Google scholarships, and we also expanded our scholarship programs for women in technology.
We piloted the Top Black Talent U.K. program to help the U.K.’s top black engineering and business students transition into the tech industry. We also partnered with the African Caribbean Society to offer 100 students workshops and mentoring with Googlers from engineering, sales and marketing.
We not only promoted diversity and inclusion outside of Google, but within Google as well.
We had more than 10,000 members participate in one of our 18 Global Employee Resource Groups (ERGs). Membership and reach expanded as Women@Google held the first ever Women’s Summit in both Mountain View, Calif. and Japan; the Black Googler Network (BGN) made their fourth visit to New Orleans, La., contributing 360 volunteer hours in just two days; and the Google Veterans Network partnered with GoogleServe, resulting in 250 Googlers working on nine Veteran-related projects from San Francisco to London.
Googlers in more than 50 offices around the globe participated in the Sum of Google, a celebration of diversity and inclusion, in their respective locations.
We sponsored 464 events in 70 countries to celebrate the anniversary of International Women's Day. Google.org collaborated with Women for Women International to launch the “Join me on the Bridge” campaign. Represented in 20 languages, the campaign invited people to celebrate by joining each other on bridges around the world—either physically or virtually—to show their support.
Since our early days, it’s been important to make our tools and services accessible and useful to a global array of businesses and user communities. Last year:
We introduced ChromeVox, a screen reader for Google Chrome, which helps people with vision impairment navigate websites. It's easy to learn and free to install as a Chrome extension.
We grew Accelerate with Google to make Google’s tools, information and services more accessible and useful to underrepresented communities and diverse business partners.
On Veterans Day in the U.S., we launched a new platform for military veterans and their families. The Google for Veterans and Families website helps veterans and their families stay connected through products like Google+, YouTube and Google Earth.
We invite you to take a look back with us at our 2011 diversity and inclusion highlights. We’re proud of the work we’ve done so far, but also recognize that there’s much more to do. These advances may not happen at Internet speed, but through our collective commitment and involvement, we can be a catalyst for change.
Posted by Yolanda Mangolini, Director, Global Diversity & Inclusion/Talent & Outreach Programs
Introducing Google Drive... yes, really
April 24, 2012
Just like the Loch Ness Monster, you may have heard the rumors about Google Drive. It turns out, one of the two actually does exist.
Today, we’re introducing Google Drive—a place where you can create, share, collaborate, and keep all of your stuff. Whether you’re working with a friend on a joint research project, planning a wedding with your fiancé or tracking a budget with roommates, you can do it in Drive. You can upload and access all of your files, including videos, photos, Google Docs, PDFs and beyond.
With Google Drive, you can:
Create and collaborate.
Google Docs is built right into Google Drive, so you can work with others in real time on documents, spreadsheets and presentations. Once you choose to share content with others, you can add and reply to comments on anything (PDF, image, video file, etc.) and receive notifications when other people comment on shared items.
Store everything safely and access it anywhere (especially while on the go).
All your stuff is just... there. You can access your stuff from anywhere—on the web, in your home, at the office, while running errands and from all of your devices. You can install Drive on your Mac or PC and can download the Drive app to your Android phone or tablet. We’re also working hard on a Drive app for your iOS devices. And regardless of platform, blind users can access Drive with a screen reader.
Search everything.
Search by keyword and filter by file type, owner and more. Drive can even recognize text in scanned documents using Optical Character Recognition (OCR) technology. Let’s say you upload a scanned image of an old newspaper clipping. You can search for a word from the text of the actual article. We also use image recognition so that if you drag and drop photos from your Grand Canyon trip into Drive, you can later search for [grand canyon] and photos of its gorges should pop up. This technology is still in its early stages, and we expect it to get better over time.
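As a rough illustration of how text extracted from a scanned image becomes searchable, here is a toy inverted index with AND semantics over query tokens; the file names and pipeline are invented for the example and bear no relation to Drive’s real indexing:

```python
from collections import defaultdict

# token -> set of file names containing that token
index = defaultdict(set)

def add_document(name, extracted_text):
    # Index whatever text OCR (or image recognition) produced for a file.
    for token in extracted_text.lower().split():
        index[token].add(name)

def search(query):
    # Return files containing every query token (AND semantics).
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = set(index.get(tokens[0], set()))
    for t in tokens[1:]:
        results &= index.get(t, set())
    return results

add_document("clipping.jpg", "Local newspaper reports grand opening")
add_document("trip.jpg", "grand canyon gorge at sunset")
print(search("grand canyon"))  # {'trip.jpg'}
```

A production search system adds ranking, stemming, and OCR-confidence handling, but the core lookup is this kind of token-to-document mapping.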
You can get started with 5GB of storage for free—that’s enough to store the high-res photos of your trip to Mt. Everest, scanned copies of your grandparents’ love letters or a career’s worth of business proposals, and still have space for the novel you’re working on. You can choose to upgrade to 25GB for $2.49/month, 100GB for $4.99/month or even 1TB for $49.99/month. When you upgrade to a paid account, your Gmail account storage will also expand to 25GB.
Drive is built to work seamlessly with your overall Google experience. You can attach photos from Drive to posts in Google+, and soon you’ll be able to attach stuff from Drive directly to emails in Gmail. Drive is also an open platform, so we’re working with many third-party developers so you can do things like send faxes, edit videos and create website mockups directly from Drive. To install these apps, visit the Chrome Web Store—and look out for even more useful apps in the future.
This is just the beginning for Google Drive; there’s a lot more to come.
Get started with Drive today at drive.google.com/start—and keep looking for Nessie...
Posted by Sundar Pichai, SVP, Chrome & Apps
Learning independence with Google Search features
March 29, 2012
Searches can become stories. Some are inspiring, some change the way we see the world and some just put a smile on our face. This is a story of how people can use Google to do something extraordinary. If you have a story, share it.
- Ed.
We all have memories of the great teachers who shaped our childhood. They found ways to make the lightbulb go off in our heads, instilled in us a passion for learning and helped us realize our potential. The very best teachers were creative with the tools at their disposal, whether it was teaching the fundamentals of addition with Cheerios or the properties of carbon dioxide with baking soda and vinegar. As the Internet has developed, so too have the resources available for teachers to educate their students.
One teacher who has taken advantage of the web as an educational tool is Cheryl Oakes, a resource room teacher in Wells, Maine. She’s also been able to tailor the vast resources available on the web to each student’s ability. This approach has proven invaluable for Cheryl’s students, in particular 16-year-old Morgan, whose learning disability makes it daunting to sort through search results to find those webpages that she can comfortably read. Cheryl taught Morgan how to use the Search by Reading Level feature on Google Search, which enables Morgan to focus only on those results that are most understandable to her. To address the difficulty Morgan faces with typing, Cheryl introduced her to Voice Search, so Morgan can speak her queries into the computer. Morgan is succeeding in high school, and just registered to take her first college course this summer.
There’s a practically limitless amount of information available on the web, and with search features, you can find the content that is most meaningful for you. For more information, visit
google.com/insidesearch/features.html
.
Posted by Glen Shires, Speech Technology
Understanding accessibility at CSUN 2012
February 28, 2012
This week we’re attending the 27th annual
CSUN International Technology and Persons with Disabilities Conference
. As the Internet evolves, screen readers, browsers and other tools for accessibility need to grow to meet the complexity of the modern web. Conferences like CSUN are an opportunity to check in with web users with disabilities: not just to share
our progress
in making online technologies accessible, but to also discuss improvements for the future.
Who are these users? In August, we conducted a survey with the American Council of the Blind, to find out more about how people with sight impairment use the web. We received nearly 1,000 responses from people who are blind or visually impaired, from a wide range of professions in 57 countries: teachers, software developers, social workers, writers, psychologists, musicians and students. The results paint a picture of why it is critical to improve the accessibility of web applications. Of the respondents:
Almost 90 percent reported regularly using the web to keep in touch with friends and family
Over half use a smartphone, and over half own more than one computer
Over two-thirds of respondents said they use social media
Over 50 percent have completed a baccalaureate degree, and of those, 30 percent have gone on to postgraduate studies at the master's or Ph.D. level
Of those who are currently students, over 70 percent have their assistive technology provided for by their school
However, for those who have left school and are of working age, 46 percent are unemployed
Better web accessibility has the potential to increase educational and employment opportunities, provide social cohesion and enable independence for people with disabilities. We imagine a future for the web where the most visually complex applications can be rendered flawlessly to screen readers and other assistive devices that don't rely on sight, using technologies that work seamlessly on browsers and smartphones.
Since we last attended CSUN, we’ve made several improvements to the accessibility of our products:
ChromeVox
(in beta) provides a screen reader that's built for the web, right inside Chrome.
We've improved accessibility for
Google Docs, Sites and Calendar
, including keyboard shortcuts and better support in modern screen readers
Android 4.0 introduces
touch exploration and out-of-box accessibility activation
We've also
expanded caption support on YouTube
—improving access to broadcast and direct-to-web videos for people who are deaf or hard of hearing
If you're attending CSUN 2012, we hope you'll come up and say hello at one of our talks on the accessibility of our products, including the use of
video in Google+ and Docs
and
accessibility on Android devices
. And Friday we’ll host a
Q&A Fireside chat
with Google product teams. You can also try some of these improvements out at our two hands-on demo sessions on Thursday, in the Connaught breakout room:
10am to 12pm—Chromebooks and new features in Google Apps
1pm to 3pm—Android 4.0 Galaxy Nexus phones
If you're not attending CSUN 2012, we'd love to hear your thoughts on accessibility in our
web forum
.
Posted by Naomi Black, Engineering Program Manager, Google Accessibility
Enhanced accessibility in Docs, Sites and Calendar
September 14, 2011
This fall, as classrooms fill with the hustle and bustle of a new semester, more students than ever will use Google Apps to take quizzes, write essays and talk to classmates. Yet blind students (like blind people of all ages) face a unique set of challenges on the web. Members of the blind community rely on screen readers to tell them verbally what appears on the screen. They also use keyboard shortcuts to do things that would otherwise be accomplished with a mouse, such as opening a file or highlighting text.
Over the past few months, we’ve worked closely with advocacy organizations for the blind to improve our products with more accessibility enhancements. While our work isn’t done, we’ve now significantly improved keyboard shortcuts and support for screen readers in several Google applications, including
Google Docs, Google Sites
and
Google Calendar
. Business, government and education customers can also learn more about these updates on the
Enterprise blog
.
In the weeks and months ahead, we’ll continue to improve our products for blind users. We believe that people who depend on assistive technologies deserve as rich and as productive an experience on the web as sighted users, and we’re working to help that become a reality.
For more information on these accessibility changes, using Google products with screen readers, how to send us feedback and how to track our progress, visit
google.com/accessibility
.
Posted by T.V. Raman, Technical Lead, Google Accessibility
An accessibility survey for blind users
August 19, 2011
These days, we rely on the Internet to keep us informed and in touch, yet our experience of the web is filtered through the tools we use to access it. The devices and technologies we choose, and our decisions about when we upgrade those tools, can affect how we interact with the web and with whom we are able to communicate.
In July, I attended the annual conference held by the American Council of the Blind (ACB). I was struck by something I heard from people there: their experience using the web was very different from mine not because they were blind, but because the technology and web tools available to them were unlike the ones available to me, as a sighted person. While the Internet provides many benefits to modern society, it has also created a unique set of challenges for blind and low-vision users who rely on assistive technologies to use the web. We’re committed to making Google’s products more accessible, and we believe the best way to understand the accessibility needs of our users is to listen to them.
This week, we’re announcing a survey that will help us better understand computer usage and assistive technology patterns in the blind community. Over the past three months, we’ve worked closely with the ACB to develop a survey that would give us a greater understanding of how people choose and learn about the assistive technologies they use. This survey will help us design products and tools that interact more effectively with assistive technologies currently available to the blind community, as well as improve our ability to educate users about new features in our own assistive technologies, such as
ChromeVox
and
TalkBack
.
The survey will be available through mid-September on the ACB's website and by phone. We encourage anyone with a visual impairment who relies on assistive technologies to participate; your input will help us offer products that can better suit your needs. For details, visit
www.acb.org/googlesurvey
.
Posted by Naomi Black, Accessibility Engineering Team
Supporting accessibility at CSUN
March 15, 2011
This week we’ll be at the 26th annual
CSUN International Technology & Persons with Disabilities Conference
to talk with users and accessibility experts about how to make our products more accessible to people with disabilities. We’ll also give a
talk
on the current state of accessibility for our products.
We’ve been working in this space for a while, launching features such as
captions
on YouTube, applications such as
WalkyTalky
and
Intersection Explorer
on Android (so people can use Google Maps eyes-free) and building easy-to-navigate, accessible
Google search
pages to work smoothly with adaptive technologies.
We have more to do. At CSUN 2011, we’re looking forward to more insights about how to make Android, Chrome and Google Apps better enabled for people who rely on assistive technologies like screen readers. If you’re attending and are interested in participating in our focus groups there, please fill out our
survey
by 9pm PST today, Tuesday, March 15.
To see an overview of the accessibility features of our products today, visit
google.com/accessibility
. We're launching an updated version of this site later today to make it easier for visitors to find information on using our products, and for developers and publishers to learn how to develop accessible products on our platforms. While you’re there, please give us
feedback
on what we can do better to make our products more accessible.
Posted by Naomi Black, Engineering Program Manager for Accessibility
Honoring the 20th Anniversary of the Americans with Disabilities Act
July 26, 2010
[
Cross-posted on Google Public Policy Blog
]
Bending, walking, breathing, hearing, seeing and sleeping are simple things that are often taken for granted, as are thinking, learning, and communicating.
Twenty years ago today, the
Americans with Disabilities Act
(ADA) was signed into law. This milestone legislation prohibits discrimination against people with disabilities. It’s hard to imagine a world in which the right to participate in activities commonly enjoyed by the bulk of the population is denied or inadequately accommodated, but that was the case before the ADA.
The efforts of the advocates who came to Washington two decades ago to rally for their civil rights have transformed so much of the modern world around us. As someone who’s worn hearing aids since I was 13, for example, I very much appreciate that most television programs and DVDs or Blu-ray discs are captioned. On my way home, I might pass through a door that I know is wide enough for a wheelchair -- because the ADA set the building codes that require it. I see service animals on the DC Metro, accessible checkout aisles at my grocery store, ramps on sidewalks, and designated parking in movie theater lots: all there because of the important provisions included in the ADA.
Whereas the ADA set legal standards for ensuring equal rights for Americans with disabilities, Google is keenly aware that technology can help all users better enjoy the world around them. From opening millions of titles of printed content to persons with visual impairments through
Google Book Search
, to providing ready and easy-to-use
captions on YouTube
, to including a
built-in screenreader and text-to-speech engine in Android
, to introducing
new extensions on Chrome
to make online text easier to read, we’re serious about honoring
our mission
to make the world’s information universally
accessible
and
useful
. You can keep up with our progress at
google.com/accessibility
.
Congratulations to all those who work to make the ADA a living, breathing reality. For all the years I’ve been working on policy in Washington, it’s still rare to see a law that has had as positive and fundamental an influence on our lives as this Act. There still is work to be done to meet the goals of ADA, and we are committed to doing our part.
Posted by Vint Cerf, Chief Internet Evangelist
Automatic captions in YouTube
November 19, 2009
Since we first
announced captions
in Google Video and YouTube, we've introduced multiple caption tracks, improved search functionality and even automatic translation. Each of these features has had great personal significance to me, not only because I helped to design them, but also because I'm deaf. Today, I'm in Washington, D.C. to announce what I consider the most important and exciting milestone yet: machine-generated automatic captions.
Since the original launch of captions in our products, we’ve been happy to see growth in the number of captioned videos on our services, which now number in the hundreds of thousands. This suggests that more and more people are becoming aware of how useful captions can be. As we’ve explained in the past, captions not only help the deaf and hearing impaired, but with
machine translation
, they also enable people around the world to access video content in any of 51 languages. Captions can also
improve search
and even enable users to jump to the exact parts of the videos they're looking for.
However, like everything YouTube does, captions face a tremendous challenge of scale. Every minute, 20 hours of video are uploaded. How can we expect every video owner to spend the time and effort necessary to add captions to their videos? Even with all of the captioning support already available on YouTube, the majority of user-generated video content online is still inaccessible to people like me.
To help address this challenge, we've combined Google's automatic speech recognition (ASR) technology with the YouTube caption system to offer automatic captions, or auto-caps for short. Auto-caps use the same voice recognition algorithms in
Google Voice
to automatically generate captions for video. The captions will not always be perfect (check out the video below for an amusing example), but even when they're off, they can still be helpful—and the technology will continue to improve with time.
In addition to automatic captions, we’re also launching automatic caption timing, or auto-timing, to make it significantly easier to create captions manually. With auto-timing, you no longer need to have special expertise to create your own captions in YouTube. All you need to do is create a simple text file with all the words in the video and we’ll use Google’s ASR technology to figure out when the words are spoken and create captions for your video. This should significantly lower the barriers for video owners who want to add captions, but who don’t have the time or resources to create professional caption tracks.
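YouTube's alignment pipeline is internal, but the shape of its output is easy to picture: word-level timings from the speech recognizer, grouped into timed caption cues. A rough Python sketch of that idea (all data and function names are hypothetical, not YouTube's implementation):

```python
def to_timestamp(seconds):
    """Format seconds as an SRT-style HH:MM:SS,mmm timestamp."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def words_to_cues(words, max_words=4):
    """Group (word, start, end) triples into caption cues.

    `words` stands in for what a forced-alignment step would
    produce from the owner's plain text file plus the audio."""
    cues = []
    for i in range(0, len(words), max_words):
        chunk = words[i:i + max_words]
        text = " ".join(w for w, _, _ in chunk)
        # Cue spans from the first word's start to the last word's end.
        cues.append((chunk[0][1], chunk[-1][2], text))
    return cues

words = [("hello", 0.0, 0.4), ("and", 0.4, 0.6), ("welcome", 0.6, 1.1),
         ("to", 1.1, 1.2), ("the", 1.2, 1.3), ("demo", 1.3, 1.8)]
for start, end, text in words_to_cues(words):
    print(f"{to_timestamp(start)} --> {to_timestamp(end)}  {text}")
```

The real system has to solve the hard part this sketch assumes away: using ASR to recover the per-word timings from the audio in the first place.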
To learn more about how to use auto-caps and auto-timing, check out this short video and our
help center article
:
You should see both features available in English by the end of the week. For our initial launch, auto-caps are only visible on a handful of partner channels (list below*). Because auto-caps are not perfect, we want to make sure we get feedback from both viewers and video owners before we roll them out more broadly. Auto-timing, on the other hand, is rolling out globally for all English-language videos on YouTube. We hope to expand these features for other channels and languages in the future. Please send us your
feedback
to help make that happen.
Today I'm more hopeful than ever that we'll achieve our long-term goal of making videos universally accessible. Even with its flaws, I see the addition of automatic captioning as a huge step forward.
*
Partners for the initial launch of auto-caps:
UC Berkeley
,
Stanford
,
MIT
,
Yale
,
UCLA
,
Duke
,
UCTV
,
Columbia
,
PBS
,
National Geographic
,
Demand Media
,
UNSW
and most
Google
&
YouTube
channels.
Update
on 11/24:
We've posted a full length video of our announcement event in Washington D.C. on YouTube. We've included English captions using our new auto-timing feature.
Posted by Ken Harrenstien, Software Engineer
More accessibility features in Android 1.6
October 20, 2009
From time to time, our own T.V. Raman shares his tips on how to use Google from his perspective as a technologist who cannot see — tips that sighted people, among others, may also find useful.
The most recent release of Android 1.6, a.k.a. Donut, introduces accessibility features designed to make Android apps more widely usable by blind and low-vision users. In brief, Android 1.6 includes a built-in screenreader and
text-to-speech
(TTS) engine which make it possible to use most Android applications, as well as all of Android's default UI, when not looking at the screen.
Android-powered devices with Android 1.6 and future software versions will include the following accessibility enhancements:
Text-to-Speech (TTS) is now bundled with the Android platform. The platform comes with voices for English (U.S. and U.K.), French, Italian, Spanish and German.
A standardized Text-to-Speech API is part of the Android SDK, enabling developers to create high-quality talking applications.
Starting with Android 1.6, the Android platform includes a set of easy-to-use accessibility APIs that make it possible to create accessibility aids such as screenreaders for the blind.
Application authors can keep their applications usable by blind and visually impaired users by making sure that all parts of the user interface are reachable via the trackball and that all image controls have associated textual metadata.
Starting with Android 1.6, the Android platform comes with applications that provide spoken, auditory (non-speech sounds) and haptic (vibration) feedback. Named TalkBack, SoundBack and KickBack, these applications are available via the Settings > Accessibility menu.
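The point about textual metadata on image controls can be sketched in miniature: a screenreader can only announce what the application exposes. A toy Python model of that behavior (this is not the Android API; all names here are hypothetical):

```python
def spoken_feedback(control):
    """Return what a screenreader-style tool would announce for a
    UI control, modeled as a dict with a 'type' plus an optional
    visible 'text' label or 'content_description' (the textual
    metadata an app author attaches to an image control)."""
    label = control.get("content_description") or control.get("text")
    if label:
        return f"{label}, {control['type']}"
    # Without textual metadata, the user only hears the widget type.
    return f"unlabeled {control['type']}"

print(spoken_feedback({"type": "button", "text": "Send"}))
# prints "Send, button"
print(spoken_feedback({"type": "image button",
                       "content_description": "Take photo"}))
# prints "Take photo, image button"
print(spoken_feedback({"type": "image button"}))
# prints "unlabeled image button"
```

The last case is exactly the situation the guidance above asks app authors to avoid: an icon-only control with no text for the screenreader to speak.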
In addition,
project Eyes-Free
(which includes accessibility tools such as TalkBack) provides several UI enhancements for using touch-screen input. Many of these
innovations
are available via Android Market and are already being heavily used. We believe these eyes-free tools will serve our users with special needs as well.
You can turn on the accessibility features by going to Settings --> Accessibility and checking the box "Accessibility". While the web browser and browser-based applications do not yet "talk" using these enhancements, we're working on them for upcoming releases. Check out this
Google Open Source Blog post
for more details, and stay tuned to the
eyes-free channel on YouTube
for step-by-step demonstrations on configuring and using accessibility support on Android.
A new home for accessibility at Google
October 16, 2009
Information access is at the core of
Google’s mission
, which is why we work to make the world's content available to people with disabilities, such as blindness, visual impairment, color deficiency, deafness, hearing loss and limited dexterity. Building accessible products isn't only the right thing to do, it also opens up Google services to very significant populations of people. According to the United Nations,
650 million people
live with a disability, which makes them the world's largest minority.
We regularly develop and release accessibility features and improvements. Sometimes these are snazzy new applications like a new
talking RSS reader
for Android devices. Other times the changes aren't flashy, but they're still important, such as our recent incremental improvements to
WAI-ARIA support in Google Chrome
(adding support for ARIA roles and labels). We also work on more foundational research to improve customization and access for our users, such as
AxsJax
(an Open Source framework for injecting usability enhancements into Web 2.0 applications).
We've
written frequently
about accessibility on our various blogs and help forums, but this information has never been easily accessible (pun intended) in one central place. This week we've launched a handy new website for Accessibility at Google to pull all our existing resources together:
www.google.com/accessibility
. Here you can follow the latest accessibility updates from our blogs, find resources from our help center, participate in a discussion group, or send us your feedback and feature requests. Around here, we often say, "launch early and iterate" — meaning, get something out the door, get feedback, and then improve it. In that tradition, our accessibility website is pretty simple, and we expect this site to be the first of many iterations. We're excited about the possibilities.
The thing we're most excited about is getting
your feedback
about Google products and services so we can make them better for the future. Take a look and let us know what you think.
Posted by Jonas Klink, Accessibility Product Manager
An ARIA for Google Moderator
April 13, 2009
From time to time, our own
T.V. Raman
shares his tips on how to use Google from his perspective as a technologist who cannot see -- tips that sighted people, among others, may also find useful. - Ed.
Google-AxsJAX was launched in late 2007 as a library for access-enabling Web-2.0 applications. Since then, we have released accessibility enhancements for many Web-2.0 applications via the
AxsJAX site
as early experiments that have eventually graduated into full-fledged products. Just recently we
posted
about using the AxsJAX library to provide ARIA enhancements for Google Calendar, Google Finance and Google News. Now we are happy to share an early AxsJAX extension for
Google Moderator
that enables fluent eyes-free use of the tool.
For details about AxsJAX enhancements, see the
AxsJAX FAQ
. Briefly, you need Firefox 3.0 and a screenreader that supports W3C ARIA to take advantage of these enhancements. Users who do not have a screenreader installed can most easily experience the results by installing
Fire Vox
, a freely available self-voicing extension for Firefox.
You can activate the AxsJAX enhancement for Google Moderator either by clicking on the link that says "Click here for ARIA enhanced Google Moderator" or by
accessing the ARIA-enhanced version directly
. After enabling the enhancement, you can use Google Moderator via the keyboard, with all user interaction producing spoken feedback via W3C ARIA.
Here is a brief overview of the experience:
1. The user interface is divided into logical panes — one listing topic areas, and the other listing questions on a given topic. At times (e.g., before a meeting), you may find an additional
Featured Question
pane that shows a randomly selected question that you can vote on.
2. Users can ask new questions under a given topic, or give a thumbs-up/thumbs-down to questions that have already been asked.
3. Use the
left
and
right
arrow keys to switch between the two panes. You hear the title of the selected pane as you switch.
4. Use
up
and
down
arrows to navigate among the items in the selected pane. As you navigate, you hear the current item.
5. Hit
enter
to select the current item.
6. The current item can be magnified by repeatedly pressing the
+
(or
=
) key. To reduce magnification, press the
-
key.
7. When navigating the questions in a given topic, hit
y
or
n
to vote a question up or down.
8. When navigating items in the topic pane, hit
a
to ask a question. Once you confirm your request to post the question, it will show up in the list of questions for that topic so that others can vote that question up or down.
Please visit the
Google Group for accessibility
to provide feedback. This AxsJAX extension is still a work in progress, so we'd love to hear from you as we continue to work out the kinks.
Update on 4/14:
Clarified in the second and third paragraphs that you do not need to install this enhancement. You can access it directly from Google Moderator.
Posted by T. V. Raman, Research Scientist, and Charles L. Chen, Software Engineer
ARIA for Google Calendar, Finance and News: In praise of timely information access
April 2, 2009
From time to time, our own
T.V. Raman
shares his tips on how to use Google from his perspective as a technologist who cannot see -- tips that sighted people, among others, may also find useful.
In our continued efforts to make Google applications more accessible, we have launched ARIA support for
several Google applications
over the last few months.
W3C ARIA
is a set of HTML DOM properties that enables adaptive technologies like screenreaders to work better with dynamic web applications. As with previous ARIA-enabled Google solutions, screenreader users can now switch on ARIA support in the following applications by activating an invisible Enable Screenreader Support link. Alternatively, simply browse to the links in this blog with a supporting screenreader and Firefox 3.0 to experience the interface enhancements. If you do not have a screenreader installed, but are curious to experience what eyes-free interaction with these applications feels like, we recommend the freely downloadable Firefox enhancement
Fire Vox by Charles Chen
.
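The mechanism ARIA adds can be pictured in miniature: the application marks part of the page as "live," and whenever it updates that part, the assistive technology is notified and speaks the change without the user moving focus. A conceptual Python model (plain objects, not the actual DOM or ARIA API):

```python
class Screenreader:
    """Toy assistive technology: records what it would speak."""
    def __init__(self):
        self.spoken = []

    def on_change(self, text):
        self.spoken.append(text)

class LiveRegion:
    """Conceptual model of an ARIA live region: a node whose
    updates are pushed to the screenreader as spoken feedback."""
    def __init__(self, reader):
        self.reader = reader
        self.text = ""

    def update(self, text):
        self.text = text
        # In a real page, ARIA properties cause the browser to raise
        # a DOM event that the screenreader listens for.
        self.reader.on_change(text)

reader = Screenreader()
status = LiveRegion(reader)
status.update("Tuesday, 10am: project review")
status.update("Next event: lunch with Sam")
print(reader.spoken)
```

Without the live-region contract, a dynamic update would silently change pixels on screen and the screenreader would never know; with it, each update produces timely spoken feedback.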
Google Calendar
: The ARIA-enhanced Google Calendar enables speech-enabled access to the day view in Google Calendar. You can use the keyboard to move through events, move through the days of the week, as well as to cycle through your various calendars. As you work with the calendar, the application raises appropriate DOM events through W3C ARIA to invoke the relevant spoken feedback through the screenreader.
Google Finance
: The Finance page can be viewed as a set of logical panes, with related content appearing as items in each pane. The ARIA-enhanced version of Google Finance enables you to switch panes, and navigate the current pane with the arrow keys. Navigation produces spoken feedback through the screenreader. In addition, Google Finance provides several power user tools, including a stock screener, all of which are speech-enabled through ARIA. These power user tools provide interesting examples for Web developers experimenting with ARIA. (ARIA support for Finance was developed by intern Svetoslav Ganov as his starter project.)
Google News
: Finally, we have added ARIA support to enable rapid eyes-free access to Google News. These enhancements follow the same pattern as those for Google Finance, and the ability to navigate between the different views Google News provides (e.g., World News vs. Sports) enables rapid access to the large volume of news available through the Google News interface.
As with all of our ARIA-enhanced services, you can obtain additional help by pressing the
?
key to hear the available list of shortcuts. If you're interested in discussing these enhancements, visit the
Google Group for accessibility
.
Posted by T. V. Raman, Research Scientist, and Charles L. Chen, Software Engineer
Accessible View: An ARIA for web search
November 5, 2008
From time to time, our own
T.V. Raman
shares his tips on how to use Google from his perspective as a technologist who cannot see -- tips that sighted people, among others, may also find useful.
In the spirit of a
recent post
discussing some of our search experiments, last week we launched an opt-in search experiment we're calling
Accessible View
, which makes it easy to navigate search results using only the keyboard. Like many of our recent accessibility-related enhancements, this experiment is built using the basic functionality provided by W3C ARIA and
Google-AxsJAX
, an evolving set of HTML DOM properties that enable adaptive technologies to work better with AJAX-style applications.
The Accessible View experiment is another step toward making our search results more accessible for everyone. In July 2006, we launched
Accessible Search
on Google Labs, where the goal was to help visually impaired users find content that worked well with adaptive technologies. We continue to refine and tune the ranking on Accessible Search. And with Accessible View, users can easily toggle between regular Google search results and Accessible Search results by using the 'A' and 'W' keys.
When we designed the Accessible View interface, we first looked at how people used screen readers and other adaptive technologies when performing standard search-related tasks. We then asked how many of these actions we could eliminate to speed up the search process. The result: a set of keyboard shortcuts that let users navigate the results page efficiently, and that arrange for the user's adaptive technology to speak the right information during navigation.
We've also added a magnification lens that highlights the user's selected search result. Since launching Accessible Search, one of the most requested features has been support for low-vision users. While implementing the keyboard navigation described here, we incorporated the magnification lens first introduced by
Google Reader
.
Bringing it all together, we implemented
keyboard shortcuts
that extend what was originally pioneered by the keyboard shortcuts experiment. These shortcuts help users navigate through different parts of the results page with a minimal number of keystrokes. The left and right arrows cycle through the various categories of items on the page (e.g., results, ads, or search refinements), and the up and down arrow keys move through the current category. Power users can leave their hands on the home row by using the
h
,
j
,
k
, and
l
keys. In addition, we enable an infinite stream of results viewed through the
n
and
p
keys — so you can move through the results without getting disoriented by a page refresh after the first 10 results.
Key         Behavior
j/k         next/previous result
n/p         next/previous result, scroll if necessary
enter       open current result
up/down     next/previous result
left/right  switch categories (results, ads, refinements)
a           jump to ads
A           switch to Accessible Search results
W           switch to default Google results
r           jump to related searches
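The two-axis navigation these shortcuts describe (one key pair cycles categories, another moves within the current category) can be modeled as a small state machine. A conceptual Python sketch, not Google's implementation:

```python
class AccessibleView:
    """Toy model of two-axis results navigation: left/right cycles
    categories, up/down (or j/k) moves within the current one."""

    def __init__(self, categories):
        # categories: dict mapping category name -> list of items,
        # e.g. {"results": [...], "ads": [...], "refinements": [...]}
        self.names = list(categories)
        self.items = categories
        self.cat = 0  # index of the current category
        self.pos = 0  # index within the current category

    def current(self):
        """The (category, item) the adaptive technology would speak."""
        name = self.names[self.cat]
        return name, self.items[name][self.pos]

    def switch(self, step):
        """left/right arrows: cycle categories, reset to the top."""
        self.cat = (self.cat + step) % len(self.names)
        self.pos = 0
        return self.current()

    def move(self, step):
        """up/down arrows: move within the current category, wrapping."""
        name = self.names[self.cat]
        self.pos = (self.pos + step) % len(self.items[name])
        return self.current()

view = AccessibleView({
    "results": ["result 1", "result 2", "result 3"],
    "ads": ["ad 1"],
    "refinements": ["related search 1"],
})
print(view.move(1))    # ('results', 'result 2')
print(view.switch(1))  # ('ads', 'ad 1')
print(view.switch(1))  # ('refinements', 'related search 1')
```

Returning the current item from every keystroke mirrors the design goal above: each navigation action hands the adaptive technology exactly one thing to speak.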
Try out
the experiment
and give us
your feedback
.
Posted by T.V. Raman, Research Scientist, and Charles L. Chen, Software Engineer