
Friday, January 27, 2012

Fridaygram: faster web, stronger machines, prettier planet

By Scott Knaster, Google Code Blog Editor

Everybody likes a faster web, and that theme has been evident this week here on Google Code Blog. On Monday, Yuchung Cheng wrote about Google’s research into making TCP faster through various proposals and experiments. Yesterday, Roberto Peon and Will Chan blogged about SPDY (pronounced "speedy"), Google’s protocol for speeding up the web’s application layer, the job historically handled by HTTP. In related news this week, the chairman of the HTTPbis Working Group announced support for SPDY in a public post.

At Google, these projects are part of our Make the Web Faster initiative, although TCP improvements and SPDY are efforts of the whole community. Even if you’re not working on TCP or SPDY, you can find lots of useful resources at our Make the Web Faster site. For example, there are articles on compression, caching, metrics, and more, a set of tools for measuring and optimizing pages, and several discussion forums for communicating with other interested folks.

Sometimes stronger is more important than faster. Scientists looking to improve the durability of machinery have been studying the yellow fattail scorpion, which uses bumps on its back to resist damage from sandstorms. Researchers hope to use the scorpion’s design to create erosion-resistant surfaces for blades, pipes, and similar parts. Or maybe they’ll make machines that look like giant yellow scorpions.

Finally, take a step back from everything on Earth and have a look at NASA’s latest "Blue Marble" images of our planet. We have a beautiful home.


Let’s say this fast: Fridaygram posts are just for fun. Fridaygrams are designed for your Friday afternoon and weekend enjoyment. Each Fridaygram item must pass only one test: it has to be interesting to us nerds. That definitely includes speed, space, and scorpions.

Thursday, January 26, 2012

Making the web speedier and safer with SPDY


By Roberto Peon and Will Chan, Software Engineers

Cross-posted with the Chromium Blog

In the two years since we announced SPDY, we’ve been working with the web community on evolving the spec and getting SPDY deployed on the Web.

Chrome, Android Honeycomb devices, and Google's servers have been speaking SPDY for some time, bringing important benefits to users. For example, thanks to SPDY, a significant percentage of Chrome users saw a decrease in search latency when we launched SSL-search. Given that Google search results are some of the most highly optimized pages on the internet, this was a surprising and welcome result.

We’ve also seen widespread community uptake and participation. Recently, Firefox added SPDY support, which means that soon half of the browsers in use will support SPDY. On the server front, nginx has announced plans to implement SPDY, and we're actively working on a full-featured mod_spdy for Apache. In addition, Strangeloop, Amazon, and Cotendo have all announced that they’ve been using SPDY.

Given SPDY's rapid adoption rate, we’re working hard on acceptance tests to help validate new implementations. Our best practices document can also help website operators make their sites as speedy as possible.

With the help of Mozilla and other contributors, we’re pushing hard to finalize and implement SPDY draft-3 in early 2012, as standardization discussions for SPDY will start at the next meeting of the IETF.

We look forward to working even more closely with the community to improve SPDY and make the Web faster!

To learn more about SPDY, see the link to a Tech Talk here, with slides here.


Roberto Peon and Will Chan co-lead the SPDY effort at Google. Roberto leads SPDY server efforts and continues to tell people to be unafraid of trying to change the world for the better. Will works on the Chrome network stack and leads the Chrome SPDY efforts. Outside of work, Will enjoys traveling the world in search of cheap beer and absurd situations.

Posted by Scott Knaster, Editor

Monday, January 23, 2012

Let's make TCP faster

By Yuchung Cheng, Make The Web Faster Team

Transmission Control Protocol (TCP), the workhorse of the Internet, is designed to deliver all the Web’s content and operate over a huge range of network types. To deliver content effectively, Web browsers typically open several dozen parallel TCP connections ahead of making actual requests. This strategy overcomes inherent TCP limitations but results in high latency in many situations and is not scalable.

Our research shows that the key to reducing latency is saving round trips. We’re experimenting with several improvements to TCP. Here’s a summary of some of our recommendations to make TCP faster:

1. Increase the TCP initial congestion window to 10 (IW10). The amount of data sent at the beginning of a TCP connection is currently limited to 3 packets, implying 3 round trips (RTTs) to deliver even a small 15 KB response. Our experiments indicate that IW10 reduces the network latency of Web transfers by over 10%. (A back-of-the-envelope sketch of this arithmetic appears after this list.)

2. Reduce the initial timeout from 3 seconds to 1 second. An initial timeout of 3 seconds was appropriate a couple of decades ago, but today’s Internet requires a much shorter one. Our rationale for this change is well documented here.

3. Use TCP Fast Open (TFO). For 33% of all HTTP requests, the browser must first spend one RTT establishing a TCP connection with the remote peer. Because most HTTP responses fit in the initial congestion window of 10 packets, that handshake round trip effectively doubles the response time. TFO removes this overhead by including the HTTP request in the initial TCP SYN packet. We’ve demonstrated that TFO reduces page load time by 10% on average, and by over 40% in many situations. Our research paper and internet-draft address concerns such as dropped packets and DoS attacks when using TFO.

4. Use Proportional Rate Reduction for TCP (PRR). Packet losses indicate that the network is congested or delivering packets out of order. PRR, a new loss recovery algorithm, retransmits smoothly to recover from losses during congestion, and it recovers faster than the current mechanism by adjusting the transmission rate according to the degree of loss. PRR is now part of the Linux kernel and is in the process of becoming part of the TCP standard. (A simplified sketch of the idea also appears below.)
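To make item 1 concrete, here is a rough back-of-the-envelope calculation (our illustration, not from the original post). Assuming about 1,430 bytes of payload per packet and classic slow start, which doubles the congestion window every round trip, a 15 KB response needs three round trips with an initial window of 3 but only two with an initial window of 10; the exact counts depend on the segment size you assume.

  // Rough estimate of how many round trips TCP needs to deliver a payload,
  // assuming classic slow start (the window doubles every RTT) and about
  // 1,430 bytes of payload per packet. Illustrative only.
  function roundTripsToDeliver(payloadBytes, initialWindow) {
    var segmentsLeft = Math.ceil(payloadBytes / 1430);
    var window = initialWindow;
    var rtts = 0;
    while (segmentsLeft > 0) {
      rtts += 1;
      segmentsLeft -= window;   // one window's worth of packets per round trip
      window *= 2;              // slow start doubles the window each RTT
    }
    return rtts;
  }

  roundTripsToDeliver(15 * 1024, 3);   // 3 round trips with IW3
  roundTripsToDeliver(15 * 1024, 10);  // 2 round trips with IW10

And for item 4, the core of PRR is compact enough to sketch. The following is a simplified, illustrative paraphrase of the published algorithm (the PRR-SSRB variant, in segment units), not the Linux implementation; state holds the data in flight (pipe), the target window (ssthresh), the flight size when recovery began (recoverFS), and running prrDelivered/prrOut counters.

  // Called on every ACK received during loss recovery: decide how many
  // segments may be sent now, so the window shrinks smoothly toward ssthresh
  // in proportion to the data the receiver reports as delivered.
  function prrOnAck(state, deliveredSegments) {
    state.prrDelivered += deliveredSegments;        // newly delivered data
    var sndcnt;
    if (state.pipe > state.ssthresh) {
      // Above the target window: send proportionally to what was delivered.
      sndcnt = Math.ceil(state.prrDelivered * state.ssthresh / state.recoverFS)
               - state.prrOut;
    } else {
      // At or below the target: grow back toward ssthresh, slow-start style.
      sndcnt = Math.min(state.ssthresh - state.pipe,
                        Math.max(state.prrDelivered - state.prrOut,
                                 deliveredSegments) + 1);   // +1 segment ~ +MSS
    }
    sndcnt = Math.max(sndcnt, 0);
    state.prrOut += sndcnt;                         // total sent during recovery
    return sndcnt;                                  // segments allowed right now
  }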

In addition, we are developing algorithms to recover faster on noisy mobile networks, as well as a guaranteed 2-RTT delivery during startup. All our work on TCP is open-source and publicly available. We disseminate our innovations through the Linux kernel, IETF standards proposals, and research publications. Our goal is to partner with industry and academia to improve TCP for the whole Internet. Please watch this blog and http://code.google.com/speed/ for further information.


Yuchung Cheng works on the transport layer to make the Web faster. He believes the current transport layer badly needs an overhaul to catch up with other (networking) technologies. He can be reached at [email protected].

Posted by Scott Knaster, Editor

Friday, December 23, 2011

Fridaygram: goodbye to 2011

By Scott Knaster, Google Code Blog Editor

This is the last Fridaygram of 2011, and like most everybody else, we’re in a reflective mood. It’s also the 208th post on Google Code Blog this year, which means we’ve averaged more than one post every two days, so that’s plenty of stuff for you to read. What did we write about?

At Google, we love to launch. Many of our posts were about new APIs and client libraries. We also posted a bunch of times about HTML5 and Chrome and about making the web faster. And we posted about Android, Google+, and Google Apps developer news.

Many of our 2011 posts were about the steady progress of App Engine, Cloud Storage, and other cloud topics for developers. We also published several times about commerce and in-app payments.

2011 was a stellar year for Google I/O and other developer events around the world. Some of our most popular posts provided announcements, details, and recaps of these events. And we welcomed a couple dozen guest posts during Google I/O from developers with cool stories to tell.

The two most popular Code Blog posts of the year were both launches: the Dart preview in October, and the Swiffy launch in June.

Last, and surely least, I posted 26 Fridaygrams in an attempt to amuse and enlighten you. Thank you for reading those, and thanks for dropping by and reading all the posts we’ve thrown your way this year. See you in 2012!

And finally, please enjoy one more Easter egg.

Tuesday, December 20, 2011

Speed metrics in Google Analytics

By Satish Kambala, Staff Software Engineer

At Google we believe that speed matters and a faster web is better for everyone. That’s why we started the Make The Web Faster initiative. To improve the speed of a website, we need to measure how fast web pages load. The Site Speed report, which is now available by default to all users of Google Analytics, provides just that: it enables website owners to measure page load time for their web pages.

You can use the Site Speed report to correlate speed with other metrics in Google Analytics, such as page views and conversions. This enables website owners to identify and optimize those pages that drive these metrics. Page load times can be analyzed by browser type or user location to understand if specific optimizations are required. Recently, we enhanced the Site Speed report by adding a new section called Technical (see screenshot below) which displays network and server time components of page load time.


[Screenshot: the Site Speed report, showing the new Technical section]

You can learn more about the Site Speed report here. This report, along with powerful page speed analysis tools such as Page Speed Online, will help website owners delight their users by building fast and responsive websites.

Have ideas on how to make your website faster or ways to speed up the entire Web? Send us your thoughts.


Satish Kambala works at Google on stuff that helps in making the web faster. In his free time, apart from watching cricket and movies, Satish likes exploring places with his wife.

Posted by Scott Knaster, Editor

Thursday, December 15, 2011

Mobile Web performance challenges and strategies

By Ramki Krishnan, Technical Program Manager

Consumers are increasingly relying on their mobile devices to access the Web, thrusting mobile web performance into the limelight. Mobile users expect web pages to display on their mobile devices as fast as or faster than on their desktops.

As part of Google’s effort to Make The Web Faster, we invited Guy Podjarny, CTO of Blaze.io, to talk about some of the major performance concerns in the mobile web and ways to alleviate these issues. Guy’s talk focused on Front-End Optimization and highlighted 3 areas: mobile network, software, and hardware. Each of these impacts performance in myriad ways. The full video is available here, and runs just under an hour. If you don’t have time to watch this enlightening talk, this post discusses some key takeaways.

Mobile networks have high latency, so reducing the number of requests and the size of downloads are well-known optimization strategies. Guy also suggests loading images on demand: load above-the-fold images by default and fetch the rest only as they scroll into view. To handle network reliability, he recommends non-blocking requests that eliminate single points of failure, along with selective aggregation of the files needed to display content. Periodic pinging of the cell tower by the client can also reduce the latency associated with dropped connections, though judicious timeouts and battery drain on the mobile device need to be factored in.
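As a concrete illustration of the on-demand image idea (our sketch, not code from the talk), below-the-fold images can be written with a placeholder attribute and given their real src only once they scroll near the viewport:

  // Minimal lazy-image loader: images are written as <img data-src="photo.jpg">
  // and receive their real src only when they come close to the viewport.
  // Illustrative sketch, not production code.
  function loadVisibleImages() {
    var images = document.querySelectorAll('img[data-src]');
    var threshold = window.innerHeight + 200;        // start a little early
    for (var i = 0; i < images.length; i++) {
      var img = images[i];
      if (img.getBoundingClientRect().top < threshold) {
        img.src = img.getAttribute('data-src');      // triggers the download
        img.removeAttribute('data-src');
      }
    }
  }

  window.addEventListener('scroll', loadVisibleImages, false);
  window.addEventListener('load', loadVisibleImages, false);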

Modern mobile browsers are built with these constraints in mind, and they can be helped further by using localStorage to cache CSS and JavaScript files. Pipelining multiple requests on a connection is an option, but developers need to work around head-of-line blocking with techniques such as splitting dynamic and static resource requests across different domains.
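Here is a minimal sketch of the localStorage technique (our illustration, with hypothetical file names): fetch a stylesheet once, keep its text in localStorage, and inject it from there on repeat visits, skipping the network request entirely.

  // Cache a stylesheet's text in localStorage so repeat visits can skip the
  // request. Hypothetical URL and cache key; versioning and error handling
  // are deliberately minimal.
  function injectCss(cssText) {
    var style = document.createElement('style');
    style.appendChild(document.createTextNode(cssText));
    document.getElementsByTagName('head')[0].appendChild(style);
  }

  function loadCss(url, cacheKey) {
    var cached = window.localStorage && localStorage.getItem(cacheKey);
    if (cached) {
      injectCss(cached);                             // straight from local storage
      return;
    }
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onload = function() {
      if (xhr.status === 200) {
        try { localStorage.setItem(cacheKey, xhr.responseText); } catch (e) {}
        injectCss(xhr.responseText);
      }
    };
    xhr.send();
  }

  loadCss('/static/site.css', 'css-site-v1');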

Mobile hardware CPUs are weaker than their desktop counterparts. Guy points out the need to minimize JavaScript when designing mobile-friendly web pages, to avoid reflows, and to defer JavaScript until after the page loads. Clever image rendering techniques, such as automatically resizing images for the device and loading full resolution only on zoom, can also help.

Guy’s presentation makes clear that mobile web optimizations need to mitigate latencies introduced by mobile networks, software, and hardware. Rapidly changing OSes and browsers add to the challenges facing publishers. New and evolved tools and technologies will help ensure an optimal web browsing experience for mobile users.


Ramki Krishnan works at Google on the "Make The Web Faster" team. When not at work, he dreams of being a tennis pro, a humorist, and a rock drummer all rolled into one.

Posted by Scott Knaster, Editor

Friday, July 29, 2011

Page Speed Service: Web performance, delivered.


By Ram Ramani, Engineering Manager

Update 7/29/11: We were notified of a bug in the measurement tool that sometimes causes incorrect measurements. If your results indicated a slowdown on your pages, please run the tests again, and make sure you specify a fully qualified domain such as www.example.com. We apologize for any inconvenience and confusion this may have caused.

Details:
Measurement tests run for bare domains (such as example.com, without the www prefix) previously indicated that pages were loading more slowly, rather than speeding up, when using Page Speed Service. The test results page now notifies you prominently if this error applies to you, so please check your old measurement results. Running the tests again with a fully qualified domain such as www.example.com usually fixes the issue and gives you the correct measurement.


Two years ago we released the Page Speed browser extension and earlier this year the Page Speed Online API to provide developers with specific suggestions to make their web pages faster. Last year we released mod_pagespeed, an Apache module, to automatically rewrite web pages. To further simplify the life of webmasters and to avoid the hassles of installation, today we are releasing the latest addition to the Page Speed family: Page Speed Service.

Page Speed Service is an online service that automatically speeds up loading of your web pages. To use the service, you need to sign up and point your site’s DNS entry to Google. Page Speed Service fetches content from your servers, rewrites your pages by applying web performance best practices, and serves them to end users via Google's servers across the globe. Your users will continue to access your site just as they did before, only with faster load times. Now you don’t have to worry about concatenating CSS, compressing images, caching, gzipping resources or other web performance best practices.

In our testing we have seen speed improvements of 25% to 60% on several sites. But we know you care most about the numbers for your site, so check out how much Page Speed Service can speed up your site. If you’re encouraged by the results, please sign up. If not, be sure to check back later. We are diligently working on adding more improvements to the service.

At this time, Page Speed Service is being offered to a limited set of webmasters free of charge. Pricing will be competitive and details will be made available later. You can request access to the service by filling out this web form.

Ram Ramani is an Engineering Manager on the Make the Web Faster Team in Bangalore, India. He is a believer in "Faster is better".

Posted by Scott Knaster, Editor

Tuesday, July 19, 2011

Lightning fast! Performance tips for using Google APIs

By Anton Lopyrev and Sven Mawson, Google Developer Team

Over a year ago, we launched support for partial response and partial update for a number of APIs based on the Google Data Protocol. That launch was a part of our continuous effort to make the web faster. It was well received by our developer community as it significantly reduced network, memory, and CPU resources needed to work with certain Google APIs.

Today, we are adding support for partial response and an improved version of partial update, called patch, to a number of newer APIs such as Buzz, URL Shortener, Tasks and many others. In fact, all APIs available in the Google APIs Discovery Service and the APIs Explorer now support this feature.

To learn how to use partial response and partial update with a Google API, you can see the “Performance Tips” page in the documentation of the Tasks and Buzz APIs. We’ll roll out this page for all of the supported APIs over the next few months, but you can already use the algorithms with all of them today.

The partial response algorithm is identical to what was provided by the Google Data Protocol. By supplying a fields query parameter to any API call that returns data, you can request specific fields. Here is an example request that returns only titles and timestamps of a user’s public Buzz activities:
https://www.googleapis.com/buzz/v1/activities/[email protected]/@public?alt=json&pp=1&fields=items(title,updated)
Given that the full response is around 53KB and the partial response is only 3KB, the data sent to the client is reduced by almost 95%!

While the partial response algorithm is unchanged, the partial update algorithm has changed significantly compared to what was provided by Google Data Protocol. We’ve received feedback that the old algorithm was too complicated and hard to use, which prompted us to design something much simpler. The basics remain the same: you can use the HTTP PATCH verb in supported API methods to send partial updates to Google servers. However, the mechanics are different. Adding and modifying data uses the same 'merge' semantics as before. But deleting is simplified; just set a field to 'null'. Of course, the devil is in the details, so please check out the documentation for the nitty gritty.
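For illustration, a patch call might look like the sketch below (the endpoint is modeled on the Tasks API, but the URL, field names, and token are placeholders; consult the API documentation for the real resource paths): fields present in the body are merged into the resource, and a field set to null is deleted.

  // Sketch of a partial update (PATCH): only the fields present in the body
  // are touched; a field explicitly set to null is removed. Placeholder URL,
  // field names, and access token.
  var xhr = new XMLHttpRequest();
  xhr.open('PATCH', 'https://www.googleapis.com/tasks/v1/lists/@default/tasks/TASK_ID', true);
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.setRequestHeader('Authorization', 'Bearer ACCESS_TOKEN');
  xhr.send(JSON.stringify({
    title: 'Buy milk (updated)',   // merged into the existing resource
    notes: null                    // null means: delete this field
  }));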

You can try out both partial response and patch algorithms in the APIs Explorer. For partial responses, the fields parameter is available for most methods. In addition, the partial update methods are denoted by .patch in the method name. You can try both the fields parameter and the patch method on the “tasklist” resource in the APIs explorer.

If you are using Java or Python client libraries to access Google APIs, you can already ask for partial responses and send patch requests in the code. We are adding partial support to the rest of the Google APIs client libraries over time.

As our APIs get more and more use from devices with limited resources, taking advantage of performance optimizations such as partial response and patch is crucial for making your applications faster and more efficient. By using these features in your applications, you are joining us in our effort to make the web faster. For this, we thank you! Let us know of any issues and feature requests by posting to the developer forums of your favorite APIs or by leaving a comment on this post. Happy hacking!

Anton Lopyrev is an Associate Product Manager for Google APIs Infrastructure. He is a computer graphics enthusiast who is also passionate about product design.

Sven Mawson is a Software Engineer working on Google’s API Infrastructure. He believes well-designed, beautiful APIs need not sacrifice performance.


Posted by Scott Knaster, Editor

Wednesday, June 15, 2011

Tracking performance with HTTP Archive


By Arvind Jain, Make the Web Faster Team

At Google, we put a lot of effort into making the web faster. To understand the impact of our work, we need to track the speed of the web over time. HTTP Archive allows us to do that.

HTTP Archive generates regular reports illustrating trends such as page size and Page Speed score of the top pages on the web. Interested users can download the raw dataset for free, modify the source code to perform their own analyses, and unearth valuable trends.

HTTP Archive crawls the world’s top 18,000 URLs, with a plan to increase that number to a million or more in the coming months.

Google engineers built HTTP Archive as an open source service. We are now transitioning the ownership and maintenance of it to the Internet Archive. Google is proud to support the continued development of HTTP Archive and to help create a rich repository of data that developers can use to conduct performance research.

Arvind Jain founded and leads the Make the Web Faster initiative at Google. As part of that initiative, Arvind also started the Instant Pages effort, just announced yesterday.

Posted by Scott Knaster, Editor

Monday, May 09, 2011

Page Speed Online has a shiny new API

By Andrew Oates and Richard Rabbat, Page Speed Team

A few weeks ago, we introduced Page Speed Online, a web-based performance analysis tool that gives developers optimization suggestions. Almost immediately, developers asked us to make an API available to integrate into other tools and their regression testing suites. We were happy to oblige.

Today, as part of Google I/O, we are excited to introduce the Page Speed Online API as part of the Google APIs. With this API, developers now have the ability to integrate performance analysis very simply in their command-line tools and web performance dashboards.

We have provided a getting started guide that helps you to get up and running quickly, understand the API, and start monitoring the performance improvements that you make to your web pages. Not only that, in the request, you’ll be able to specify whether you’d like to see mobile or desktop analysis, and also get Page Speed suggestions in one of the 40 languages that we support, giving API access to the vast majority of developers in their native or preferred language.

We’re also pleased to share that the WordPress plugin W3 Total Cache now uses the Page Speed Online API to provide Page Speed suggestions to WordPress users, right in the WordPress dashboard. “The Page Speed tool itself provides extremely pointed and valuable insight into performance pitfalls. Providing that tool via an API has allowed me to directly correlate that feedback with actionable solutions that W3 Total Cache provides,” said Frederick Townes, CTO of Mashable and W3 Total Cache author.

Take the Page Speed Online API for a spin and send us feedback on our mailing list. We’d love to hear your experience integrating the new Page Speed Online API.


Andrew Oates is a Software Engineer on the Page Speed Team in Google's Cambridge, Massachusetts office. You can find him in the credits for the Pixar film Up.

Richard Rabbat is the Product Management Lead on the "Make the Web Faster" initiative. He has launched Page Speed, mod_pagespeed and WebP. At Google since 2006, Richard works with engineering teams across the world.

Posted by Scott Knaster, Editor

Thursday, May 05, 2011

Measure page load time with Google Analytics

By Zhiheng Wang, Make the Web Faster Team, and Phil Mui, Google Analytics Team

At Google, we’re passionate about speed and making the web faster, and we’re glad to see that many website owners share the same idea. A faster web is better for both users and businesses. A slow-loading landing page not only impacts your conversion rate, but can also impact AdWords Landing Page Quality and ranking in Google search.

To improve the performance of your pages, you first need to measure and diagnose the speed of a page, which can be a difficult task. Furthermore, even with page speed measurements, it’s critical to look at page speed in the context of other web analytics data.

Therefore, we are thrilled to announce the availability of the Site Speed report in Google Analytics. With the Site Speed report you can measure the page load time across your site right within your Google Analytics account.


Uses for the Site Speed report

With the Site Speed report, not only will you be able to monitor the speed of your pages, you can also analyze it along with other analytics data, such as:
  • Content: Which landing pages are slowest?
  • Traffic sources: Which campaigns correspond to faster page loads overall?
  • Visitor: How does page load time vary across geographies?
  • Technology: Does your site load faster or slower for different browsers?

Setting up the Site Speed report

For now, page speed measurement is turned off by default, so you’ll only see 0s in the Site Speed report until you’ve enabled it. To start measuring site speed, you need to make a small change to your Analytics tracking code. We have detailed instructions in the Site Speed article in the Analytics Help Center. Once you’ve updated your tracking code, a small sample of pageviews will be used to calculate the page load time.
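At the time of writing, the change amounts to one extra call pushed onto the asynchronous tracking queue, roughly as sketched below (placeholder web property ID; the Help Center article above has the authoritative snippet):

  // ga.js-era async snippet with the one extra line that enables page load
  // time sampling. Placeholder account ID; see the Help Center article for
  // the authoritative version.
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXX-X']);
  _gaq.push(['_trackPageview']);
  _gaq.push(['_trackPageLoadTime']);   // the new line that turns on Site Speed

  (function() {
    var ga = document.createElement('script');
    ga.type = 'text/javascript';
    ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') +
        '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
  })();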

Bringing the Site Speed report into Google Analytics is an important step of the Make the Web Faster effort, and we look forward to your feedback on Site Speed.


Zhiheng Wang spends most of his time at work building stuff so others can serve the web better. He spends the rest of his time at home fixing stuff so his family can surf the web better.

Phil Mui is the Group Product Manager of Google Analytics and has been leading its development since its early days. He has a Ph.D. from MIT and a M.Phil. from Oxford where he was a Marshall Scholar.

Posted by Scott Knaster, Editor

Thursday, March 31, 2011

Introducing Page Speed Online, with mobile support



At Google, we’re striving to make the whole web fast. As part of that effort, we’re launching a new web-based tool in Google Labs, Page Speed Online, which analyzes the performance of web pages and gives specific suggestions for making them faster. Page Speed Online is available from any browser, at any time. This allows website owners to get immediate access to Page Speed performance suggestions so they can make their pages faster.



In addition, we’ve added a new feature: the ability to get Page Speed suggestions customized for the mobile version of a page, specifically for smartphones. Because of the relatively limited CPU capabilities of mobile devices, the high round-trip times of mobile networks, and the rapid growth of mobile usage, understanding and optimizing for mobile performance is even more critical than it is for the desktop. Page Speed Online now lets you easily analyze and optimize your site for mobile performance. The mobile recommendations are tuned for the unique characteristics of mobile devices and include several best practices that go beyond the recommendations for desktop browsers, in order to create a faster mobile experience. New mobile-targeted best practices include eliminating uncacheable landing page redirects and reducing the amount of JavaScript parsed during the page load, two common issues that slow down mobile pages today.

Page Speed Online is powered by the same Page Speed SDK that powers the Chrome and Firefox extensions and webpagetest.org.

Please give Page Speed Online a try. We’re eager to hear your feedback on our mailing list and find out how you’re using it to optimize your site.

Tuesday, March 22, 2011

Page Speed for Chrome, and in 40 languages!

(Cross-posted from the Google Webmaster Central Blog.)


Today we’re launching the most requested feature for Page Speed: Page Speed for Chrome. Now Google Chrome users can get Page Speed performance suggestions to make their sites faster, right inside the Chrome browser. We would like to thank all our users for the great feedback and support since we launched. We’re humbled that 1.4 million unique users are using the Page Speed extension and finding it useful in their web performance diagnosis.

Google Chrome support has always been high on our priority list but we wanted to get it right. It was critical that the same engine that powers the Page Speed Add-On for Firefox be used here as well. So we first built the Page Speed SDK, which we then integrated into the Chrome extension.

Page Speed for Chrome retains the same core features as the Firefox add-on. In addition, two major improvements appear in this version first. We’ve improved scoring and suggestion ordering to help web developers focus on higher-potential optimizations first. And because making the web faster is a global initiative, Page Speed now supports displaying localized rule results in 40 languages! These improvements are part of the Page Speed SDK, so they will also appear in the next release of our Firefox add-on.

If your site serves different content based on the browser’s user agent, you now have a good method for page performance analysis as seen by different browsers, with Page Speed coverage for Firefox and Chrome through the extensions, and Internet Explorer via webpagetest.org, which integrates the Page Speed SDK.

We’d love to hear from you, as always. Please try Page Speed for Chrome, and give us feedback on our mailing list about additional functionality you’d like to see. Stay tuned for updates to Page Speed for Chrome that take advantage of exciting new technologies such as Native Client.

Thursday, March 17, 2011

Your Web, Half a Second Sooner

At Google we’re constantly trying to make the web faster — not just our corner of it, but the whole thing. Over the past few days we’ve been rolling out a new and improved version of show_ads.js, the piece of JavaScript used by more than two million publishers to put AdSense advertisements on their web pages. The new show_ads is small and fast, built so that your browser can turn its attention back to its main task — working on the rest of the web page — as soon as possible. This change is now making billions of web pages every day load faster by half a second or more.

The old show_ads did lots of work: loading additional scripts, gathering information about the web page it was running on, and building the ad request to send back to Google. The new show_ads has a different job. It creates a friendly (same-origin) iframe on the web page, and starts the old script with a new name, show_ads_impl, running inside that iframe. The _impl does all the heavy lifting, and in the end the ads look exactly the same. But there’s a substantial speed advantage: many things happening inside an iframe don’t block the web browser’s other work.
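The post doesn’t show code, but the friendly-iframe pattern itself is easy to sketch (a simplified illustration, not the actual show_ads.js implementation; the script URL is a placeholder): create a same-origin iframe and start a script inside it, so most of its work stays off the parent page’s critical path.

  // Simplified "friendly iframe" sketch: a same-origin iframe is created
  // dynamically and a script is started inside it, so its work doesn't block
  // the parent page. Real ad frames are sized and visible; this one is hidden
  // only to keep the example short.
  function loadScriptInFriendlyIframe(scriptUrl) {
    var iframe = document.createElement('iframe');
    iframe.style.display = 'none';
    document.body.appendChild(iframe);

    var doc = iframe.contentWindow.document;   // same-origin, fully scriptable
    doc.open();
    doc.write('<script src="' + scriptUrl + '"><\/script>');
    doc.close();
    return iframe;
  }

  loadScriptInFriendlyIframe('https://example.com/ads_impl.js');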

How much of an effect this has depends on context: a page with nothing but ads on it isn’t going to get any faster. But on the real-world sites we tested, the latency overhead from our ads is basically gone. Page load times with the new asynchronous AdSense implementation are statistically indistinguishable from load times for the same pages with no ads at all.

The new show_ads is a drop-in replacement for the old one: web site owners don’t need to do anything to get this speed-up. But these dynamically-populated friendly iframes are finicky beasts. For now, we’re only using this technique on Chrome, Firefox, and Internet Explorer 8, with more to come once we’re sure that it plays well with other browsers.

And what if you’ve built a page that loads AdSense ads and then manipulates them in exotic ways not compatible with friendly iframes? (This is the web, after all, land of “What do you mean that’s ‘not supported’? I tried it, and it worked!”) You can set “google_enable_async = false” for any individual ad slot to revert to the old blocking behavior. But if your site loads ads in some tortuous way because you were looking for latency benefits, consider giving the straightforward invocation of show_ads.js a whirl. Because now, we’re fast.

Monday, January 31, 2011

Go Daddy Makes The Web Faster by enabling mod_pagespeed

As part of our Make The Web Faster initiative, Google announced the availability of mod_pagespeed, an open-source module for Apache webservers to automatically accelerate the sites they serve. Go Daddy, the top web hosting provider and world's largest domain name registrar, announced that they would roll out the mod_pagespeed feature for their Linux Web hosting customers. The feature is now available and is in use by Go Daddy customers who have already started to report faster webpage load times.

“Who on the Internet wouldn't want a faster website?” asked Go Daddy CEO and Founder Bob Parsons. “The benefits of mod_pagespeed are really a slam dunk. It’s built to boost users’ web performance, and ultimately, the bottom line for their business.”

By using several filters that implement web performance best practices, mod_pagespeed rewrites web page resources to automatically optimize their content. The filters improve performance for JavaScript, HTML and CSS, as well as JPEG and PNG images.

Mike Bender, co-creator of photo blog AwkwardFamilyPhotos.com and a Go Daddy customer, saw a 48% decrease in load time, slicing the time for his image-rich website from 12.8 seconds to 6.6 seconds. mod_pagespeed speeds up the first visit to a site by reducing the payload size and improving compression. Repeat visits are accelerated by making caching more efficient and decreasing the size of resources such as CSS and HTML on the page, as demonstrated in this chart (where smaller is faster):



“From the moment we enabled mod_pagespeed, the difference was noticeable,” said Bender. “It was a simple ‘flick of the switch,’ and the site started loading faster.”


For Go Daddy customers currently using the Linux 4GH web hosting platform, find out how to enable mod_pagespeed for your own website here. Other webmasters can install mod_pagespeed binaries or build directly from source.


Monday, November 22, 2010

Edmunds partners with Google to make the web faster

Note: This is a guest post from Ismail Elshareef, Principal Architect at Edmunds.com. Thanks for the post and for making the web faster, Ismail!

In the fall of 2008, we embarked on a complete redesign of our car enthusiast site, insideline.com. One of the main redesign objectives was to deliver the fastest page load possible to our consumers. Leading up to that point, we had been closely following and implementing the performance best practices championed by Google's Make the Web Faster team and others, and we understood the impact performance has on user experience and the bottom line.

Some of the many performance-enhancing features that have been implemented on insideline.com (and now on our beta.edmunds.com) are:
  1. Reducing the number of HTTP requests: We combined CSS and JavaScript files as necessary and used sprites and data URIs when appropriate. We also reduced the number of blocking requests as much as possible to make the pages "feel" faster.
  2. Serving static content from different domains: This helped maximize the browser parallel download capacity and made the request payload faster since no cookies were sent over the wire to those domains
  3. Using Expires headers: Caching static files in the client's browser to eliminate unnecessary, redundant requests to our servers
  4. Lazy-loading Page Modules: Render the bare minimum page components first so that the user sees something on the page, then go through the modules and load them in order of priority. We developed a JavaScript Loader component to help us accomplish this, which you can read more about on the Edmunds technology blog.
  5. Managing 3rd-party components: iFrame components could be lazy-loaded without a problem. JavaScript components, on the other hand, need to be loaded onto the page before the onLoad event fires. That had the potential of slowing down our pages. The solution we devised was to delay the calling of those components until we initiate the lazy-loading of modules and right before the onLoad event fires
  6. Using non-blocking calls: With the browser being a single-threaded process, we optimized ways of including resources on the page without affecting page rendering, so that the page is perceived as fast by the user (a minimal sketch of this pattern follows the list).
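Here is a minimal sketch of the non-blocking pattern from item 6 (our illustration, not Edmunds' actual loader; the URL and callback are placeholders): a script element inserted via the DOM downloads in parallel and does not block HTML parsing the way a plain script tag in the markup does.

  // Non-blocking script include: a dynamically inserted <script> element
  // doesn't block parsing or rendering. Placeholder URL and callback.
  function loadScriptAsync(url, onLoad) {
    var script = document.createElement('script');
    script.src = url;
    script.async = true;                       // hint for browsers that support it
    if (onLoad) {
      script.onload = onLoad;                  // run dependent code once it arrives
    }
    var first = document.getElementsByTagName('script')[0];
    first.parentNode.insertBefore(script, first);
  }

  loadScriptAsync('/js/widgets.js', function() {
    // safe to use whatever the script defines here
  });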

The results on insideline.com have been incredible. Page load time went from 9 seconds on average on the old site to 1.5 seconds on average on the new one, and that's while loading much richer content onto the page (measured with WebPageTest). We have also seen a 3% increase in ad revenue. On beta.edmunds.com, which will replace our legacy site fully in December 2010, we have seen a 17% increase in page views and a 2% reduction in the bounce rate for our landing pages in a controlled experiment.

Although we have a long way to go in making our pages and services faster, we are very pleased with the progress we’ve made so far. Working with Google to make the web faster has been an exciting adventure that will continue with more improvements and innovations for both our sites and the web as a whole. Get more details on the Edmunds technology blog and try these enhancements on your site today.

Monday, November 15, 2010

Instant Previews: Under the hood

If you’ve used Google Search recently, you may have noticed a new feature that we’re calling Instant Previews. By clicking on the (sprited) magnifying glass icon next to a search result you see a preview of that page, often with the relevant content highlighted. Once activated, you can mouse over the rest of the results and quickly (instantly!) see previews of those search results, too.

Adding this feature to Google Search involved a lot of client-side Javascript. Being Google, we had to make sure we could deliver this feature without slowing down the page. We know our users want their results fast. So we thought we’d share some techniques involved in making this new feature fast.

JavaScript compilation

This is nothing new for Google Search: all our Javascript is compiled to make it as small as possible. We use the open-sourced Closure Compiler. In addition to minimizing the Javascript code, it also re-writes expressions, reuses variables, and prunes out code that is not being used. The Javascript on the search results page is deferred, and also cached very aggressively on the client side so that it’s not downloaded more than once per version.
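As a purely illustrative example of what that buys you (not literal compiler output), advanced optimizations can shrink verbose source like the snippet below by renaming symbols, folding constants, and pruning functions that are never called:

  // Before compilation: readable source, including a helper that is never used.
  function computeSearchBoxWidth(windowWidth) {
    var padding = 16;
    var maxWidth = 600;
    return Math.min(windowWidth - padding * 2, maxWidth);
  }

  function unusedHelper() {                    // dead code: pruned by the compiler
    return new Date().getTime();
  }

  window.onresize = function() {
    document.getElementById('q').style.width =
        computeSearchBoxWidth(window.innerWidth) + 'px';
  };

  // After advanced optimizations, the output is roughly along these lines
  // (illustrative, not literal Closure Compiler output):
  //   window.onresize=function(){document.getElementById("q").style.width=
  //       Math.min(window.innerWidth-32,600)+"px"};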

On-demand JSONP

When you activate Instant Previews, the result previews are requested by your web browser. There are several ways to fetch the data we need using JavaScript. The most popular techniques are XmlHttpRequest (XHR) and JSONP. XHR generally gives you better control and error-handling, but it has two drawbacks: browser caching tends to be less reliable, and only same-origin requests are permitted (this is starting to change with modern browsers and cross-origin resource sharing, though). With JSONP, on the other hand, the requested script returns the desired data as a JSON object wrapped in a JavaScript callback function, which in our case looks something like

google.vs.r({"dim":[302,585],"url":"http://example.com",ssegs:[...]}).

Although error handling with JSONP is a bit harder to do compared to XHR (not all browsers support onerror events), JSONP can be cached aggressively by the browser, and is not subject to same-origin restrictions. This last point is important for Instant Previews because web browsers restrict the number of concurrent requests that they send to any one host. Using a different host for the preview requests means that we don’t block other requests in the page.

There are a couple of tricks worth noting when using JSONP (a minimal sketch combining both appears after this list):

  • If you insert the script tag directly, e.g. using document.createElement, some browsers will show the page as still “loading” until all script requests are finished. To avoid that, make your DOM call to insert the script tag inside a window.setTimeout call.
  • After your requests come back and your callbacks are done, it’s a good idea to set your script src to null, and remove the tag. On some browsers, allowing too many script tags to accumulate over time may slow everything down.
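Putting those two tips together, a bare-bones JSONP helper might look like this (a simplified sketch, not the Instant Previews code; the URL is a placeholder and error handling is omitted):

  // Bare-bones JSONP: register a global callback, insert the <script> tag from
  // inside a setTimeout so the page doesn't appear to keep loading, and clean
  // the tag up once the response has arrived.
  var jsonpCounter = 0;

  function jsonpRequest(baseUrl, onData) {
    var callbackName = 'jsonp_cb_' + (jsonpCounter++);
    var script = document.createElement('script');

    window[callbackName] = function(data) {
      onData(data);                            // hand the object to the caller
      script.src = null;                       // as suggested above
      script.parentNode.removeChild(script);   // then remove the tag
      window[callbackName] = undefined;
    };

    // Insert the tag asynchronously so the browser doesn't look busy.
    window.setTimeout(function() {
      script.src = baseUrl + '&callback=' + callbackName;
      document.getElementsByTagName('head')[0].appendChild(script);
    }, 0);
  }

  jsonpRequest('https://previews.example.com/data?q=foo', function(preview) {
    // render the preview here
  });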

Data URIs

At this point you are probably curious as to what we’re returning in our JSONP calls, and in particular, why we are using JSON and not just plain images. Perhaps you even used Firebug or your browser’s Developer Tools to examine the Instant Previews requests. If so, you will have noticed that we send back the image data as sets of data URIs. Data URIs are base64 encodings of image data, that modern browsers (IE8+, Chrome, Safari, Firefox, Opera, etc) can use to display images, instead of loading them from a server as usual.

To show previews, we need the image, and the relevant content of the page for the particular query, with bounding boxes that we draw on top of the image to show where that content appears on the page. If we used static images, we’d need to make one request for the content and one request for the image; using JSONP with data URIs, we make just one request. Data URIs are limited to 32K on IE8, so we send “slices” that are all under that limit, and then use Javascript to generate the necessary image tags to display them. And even though base64 encoding adds about 33% to the size of the image, our tests showed that gzip-compressed data URIs are comparable in size to the original JPEGs.
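For example, the slices could be reassembled into a single visual preview by stacking one img element per slice (our sketch with hypothetical data; the real response format is more involved):

  // Reassemble an image delivered as several base64 data-URI slices (each kept
  // under IE8's 32K data-URI limit) by stacking one <img> per slice.
  // Hypothetical field names; illustrative sketch only.
  function renderSlicedPreview(container, slices) {
    for (var i = 0; i < slices.length; i++) {
      var img = document.createElement('img');
      img.src = 'data:image/jpeg;base64,' + slices[i];   // no extra request
      img.style.display = 'block';                       // stack vertically
      container.appendChild(img);
    }
  }

  renderSlicedPreview(document.getElementById('preview'),
      ['/9j/4AAQSkZJRg...', '/9j/4AAQSkZJRg...']);        // truncated sample data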

We use caching throughout our implementation, but it’s important to not forget about client-side caching as well. By using JSONP and data URIs, we limit the number of requests made, and also make sure that the browser will cache the data, so that if you refresh a page or redo a query, you should get the previews, well... instantly!

Wednesday, November 03, 2010

Make your websites run faster, automatically -- try mod_pagespeed for Apache

Last year, as part of Google’s initiative to make the web faster, we introduced Page Speed, a tool that gives developers suggestions to speed up web pages. It’s usually pretty straightforward for developers and webmasters to implement these suggestions by updating their web server configuration, HTML, JavaScript, CSS and images. But we thought we could make it even easier -- ideally these optimizations should happen with minimal developer and webmaster effort.

So today, we’re introducing a module for the Apache HTTP Server called mod_pagespeed to perform many speed optimizations automatically. We’re starting with more than 15 on-the-fly optimizations that address various aspects of web performance, including optimizing caching, minimizing client-server round trips and minimizing payload size. We’ve seen mod_pagespeed reduce page load times by up to 50% (an average across a rough sample of sites we tried) -- in other words, essentially speeding up websites by about 2x, and sometimes even faster.

(Video comparison of the AdSense blog site with and without mod_pagespeed)

Here are a few simple optimizations that are a pain to do manually, but that mod_pagespeed excels at:

  • Making changes to the pages built by the Content Management Systems (CMS) with no need to make changes to the CMS itself,
  • Recompressing an image when its HTML context changes to serve only the bytes required (typically tedious to optimize manually), and
  • Extending the cache lifetime of the logo and images of your website to a year, while still allowing you to update these at any time.

We’re working with Go Daddy to get mod_pagespeed running for many of its 8.5 million customers. Warren Adelman, President and COO of Go Daddy, says:

"Go Daddy is continually looking for ways to provide our customers the best user experience possible. That's the reason we partnered with Google on the 'Make the Web Faster' initiative. Go Daddy engineers are seeing a dramatic decrease in load times of customers' websites using mod_pagespeed and other technologies provided. We hope to provide the technology to our customers soon - not only for their benefit, but for their website visitors as well.”

We’re also working with Cotendo to integrate the core engine of mod_pagespeed as part of their Content Delivery Network (CDN) service.

mod_pagespeed integrates as a module for the Apache HTTP Server, and we’ve released it as open-source for Apache for many Linux distributions. Download mod_pagespeed for your platform and let us know what you think on the project’s mailing list. We hope to work with the hosting, developer and webmaster community to improve mod_pagespeed and make the web faster.

Thursday, September 30, 2010

WebP, a new image format for the Web

Cross-posted from the Chromium Blog

As part of Google’s initiative to make the web faster, over the past few months we have released a number of tools to help site owners speed up their websites. We launched the Page Speed Firefox extension to evaluate the performance of web pages and to get suggestions on how to improve them, we introduced the Speed Tracer Chrome extension to help identify and fix performance problems in web applications, and we released a set of closure tools to help build rich web applications with fully optimized JavaScript code. While these tools have been incredibly successful in helping developers optimize their sites, as we’ve evaluated our progress, we continue to notice a single component of web pages is consistently responsible for the majority of the latency on pages across the web: images.

Most of the common image formats on the web today were established over a decade ago and are based on technology from around that time. Some engineers at Google decided to figure out if there was a way to further compress lossy images like JPEG to make them load faster, while still preserving quality and resolution. As part of this effort, we are releasing a developer preview of a new image format, WebP, that promises to significantly reduce the byte size of photos on the web, allowing web sites to load faster than before.

Images and photos make up about 65% of the bytes transmitted per web page today. They can significantly slow down a user’s web experience, especially on bandwidth-constrained networks such as a mobile network. Images on the web consist primarily of lossy formats such as JPEG, and to a lesser extent lossless formats such as PNG and GIF. Our team focused on improving compression of the lossy images, which constitute the larger percentage of images on the web today.

To improve on the compression that JPEG provides, we used an image compressor based on the VP8 codec that Google open-sourced in May 2010. We applied the techniques from VP8 video intra frame coding to push the envelope in still image coding. We also adapted a very lightweight container based on RIFF. While this container format contributes a minimal overhead of only 20 bytes per image, it is extensible to allow authors to save meta-data they would like to store.

While the benefits of a VP8 based image format were clear in theory, we needed to test them in the real world. In order to gauge the effectiveness of our efforts, we randomly picked about 1,000,000 images from the web (mostly JPEGs and some PNGs and GIFs) and re-encoded them to WebP without perceptibly compromising visual quality. This resulted in an average 39% reduction in file size. We expect that developers will achieve in practice even better file size reduction with WebP when starting from an uncompressed image.

To help you assess WebP’s performance with other formats, we have shared a selection of open-source and classic images along with file sizes so you can visually compare them on this site. We are also releasing a conversion tool that you can use to convert images to the WebP format. We’re looking forward to working with the browser and web developer community on the WebP spec and on adding native support for WebP. While WebP images can’t be viewed until browsers support the format, we are developing a patch for WebKit to provide native support for WebP in an upcoming release of Google Chrome. We plan to add support for a transparency layer, also known as an alpha channel, in a future update.

We’re excited to hear feedback from the developer community on our discussion group, so download the conversion tool, try it out on your favorite set of images, and let us know what you think.

Thursday, September 02, 2010

Drupal 7 - faster than ever

This is a guest post by Owen Barton, partner and director of engineering at CivicActions. Owen has been working with Google's “Make the Web Faster” project team and the Drupal community to make improvements in Drupal 7 front-end performance. This is a condensed version of a more in-depth post over at the CivicActions blog.



Drupal is a popular free and open source publishing platform, powering high profile sites such as The White House, The New York Observer and Amnesty International. The Drupal community has long understood the importance of good front-end performance to successful web sites, being ahead of the game in many ways. This post highlights some of the improvements developed for the upcoming Drupal 7 release, several of which can save an additional second or more of page load times.



Drupal 7 has made its caching system more easily pluggable - to allow for easier memcache integration, for example. It also allows caching HTTP headers to be set so that logged-out users can cache entire pages locally, which also improves compatibility with reverse proxies and content distribution networks (CDNs). There is also a patch waiting which reduces both the response size and the time taken to generate 404 responses for inlined page assets. Depending on the type of 404 (CSS files have a larger effect than images, for example), the slower 404s were adding 0.5 to 1 second to the calling page load times.



Drupal currently has the ability to aggregate multiple CSS and JavaScript files by concatenating them into a smaller number of files to reduce the number of HTTP requests. There is a patch in the queue for Drupal 7 that could allow aggregation to be enabled by default, which is great because the large number of individual files can add anything from 0-1.5 seconds to page loads.



One issue that has become apparent with the Drupal 6 aggregation system is that users can end up downloading aggregate files that include a large amount of duplicate code. On one page the aggregate may contain files a, b and c, whilst on a second page the aggregate may contain files a, b and d - the “c” and “d” files being added conditionally on specific pages. This breaks the benefits of browser caching and slows down subsequent page loads. Benchmarking on core alone shows that avoiding duplicate aggregates can save over a second across 5 page loads. A patch has already been committed that requires files to be explicitly added to the aggregate, and fixes Drupal core to add the appropriate files to the aggregate unconditionally.



Drupal has supported gzip compression of HTML output for a long time, however for CSS and JavaScript, the files are delivered directly by the webserver, so Drupal has less control. There are webserver based compressors such as Apache’s mod_deflate, but these are not always available. A patch is in the queue that stores compressed versions of aggregated files on write and uses rewrite and header directives in .htaccess that allow these files to be served correctly. Benchmarks show that this patch can make initial page views 20-60% faster, saving anything from 0.3 to 3 seconds total.



The Drupal 7 release promises some real improvements from a front-end performance point of view. Other performance optimizations will no doubt continue to appear and be refined in contributed modules and themes, as well as in site building best practices and documentation. In Drupal 8 we will hopefully see further improvements in the CSS/JS file aggregation system, increased high-level caching effectiveness and hopefully more tools to help site builders reduce file sizes. If you have yet to try Drupal, download it now and give it a try and tell us in the comments if your site performance improves!