By Ilya Grigorik, Developer Advocate and Web Performance Engineer

Network performance is a critical factor in delivering a fast and responsive experience to the user. In fact, our goal is to make all pages load in under one second, and to get there we need to carefully measure and optimize each and every part of our application: how long the page took to load, how long each resource took to load, where the time was spent, and so on.

The good news is that the W3C Navigation Timing API gives us the tools to measure all of the critical milestones for the main HTML document: DNS, TCP, request and response, and even DOM-level timing metrics. However, what about all the other resources on the page: CSS, JavaScript, images, as well as dozens of third party components? Well, that’s where the new Resource Timing API can help!

[Diagram: Resource Timing API network timing milestones]

Resource Timing allows us to retrieve and analyze a detailed profile of all the critical network timing information for each resource on the page - each label in the diagram above corresponds to a high resolution timestamp provided by the Resource Timing API. Armed with this information, we can then track the performance of each resource and determine what we should optimize next. But enough hand-waving, let’s see it in action:
// getEntriesByName returns an array of matching entries; take the first.
var img = window.performance.getEntriesByName("http://mysite.com/mylogo.webp")[0];

var dns      = Math.round(img.domainLookupEnd - img.domainLookupStart),
    tcp      = Math.round(img.connectEnd - img.connectStart),
    ttfb     = Math.round(img.responseStart - img.startTime),
    transfer = Math.round(img.responseEnd - img.responseStart),
    total    = Math.round(img.responseEnd - img.startTime);

logPerformanceData("mylogo", dns, tcp, ttfb, transfer, total);

Replace the URL in the example above with any asset hosted on your own site, and you can now get detailed DNS, TCP, and other network timing data from browsers that support it - Chrome, Opera, and Internet Explorer 10+. Now we’re getting somewhere!

Measuring network performance of third party assets

Many applications rely on a wide variety of external assets such as social widgets, JavaScript libraries, CSS frameworks, web fonts, and so on. These assets are loaded from a third-party server, so their performance is outside of our direct control. However, that doesn’t mean we can’t or shouldn’t measure it.

Resources fetched from a third-party origin must provide an additional opt-in HTTP header (Timing-Allow-Origin) before the site can gather detailed network timing data. If the header is absent, the only available data is the total duration of the request. On that note, great news: we have been working with multiple teams, including those at Facebook and Disqus, to do exactly that! You can now use the Resource Timing API to track the performance of assets served by these and other partners.
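A minimal sketch of how you might detect the opt-in from the client side: when a cross-origin server does not send the Timing-Allow-Origin header, the browser zeroes out the detailed attributes of that entry, so a zero `requestStart` is a practical signal that only the total duration is available (the helper name below is our own, not part of the API).

```javascript
// True if the entry exposes full network timing; false if the resource's
// origin did not opt in and the browser zeroed out the detailed attributes.
function hasDetailedTiming(entry) {
  return entry.requestStart > 0;
}

// In the browser:
// performance.getEntriesByType("resource").forEach(function (entry) {
//   console.log(entry.name,
//               hasDetailedTiming(entry) ? "full timing" : "duration only");
// });
```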


Curious to know how long your favorite web font or the jQuery library hosted on the Google CDN takes to load, and where the time is spent? Easy: Resource Timing API to the rescue! For bonus points, you can then beacon this data to your analytics server (e.g. using Google Analytics’ User Timings) to get detailed performance reports, set up an SLA, and more, for each and every asset on your page.
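To make the beaconing idea concrete, here is one way to serialize a timing entry into a query string and ship it off with a fire-and-forget image request. This is a sketch, not a prescribed pattern: the helper name, metric names, and the `/beacon` endpoint are all placeholders.

```javascript
// Turn a resource timing entry into a query string of rounded metrics
// (milliseconds) so it can be beaconed to an analytics endpoint.
function timingToQuery(name, entry) {
  var metrics = {
    dns:   entry.domainLookupEnd - entry.domainLookupStart,
    tcp:   entry.connectEnd - entry.connectStart,
    ttfb:  entry.responseStart - entry.startTime,
    total: entry.responseEnd - entry.startTime
  };
  var parts = ["name=" + encodeURIComponent(name)];
  for (var key in metrics) {
    parts.push(key + "=" + Math.round(metrics[key]));
  }
  return parts.join("&");
}

// In the browser, fire-and-forget via an image beacon:
// var entry = performance.getEntriesByName("http://mysite.com/mylogo.webp")[0];
// new Image().src = "/beacon?" + timingToQuery("mylogo", entry);
```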

Third party performance is a critical component of the final experience delivered to the user and Resource Timing is a much needed and a very welcome addition to the web platform. What we can measure, we can optimize!

(Note: due to the long cache lifetime of some of the assets that are now Resource Timing enabled, some users may not get immediate access to timing data because they are still using a cached copy. This will resolve itself as the cached copies expire.)


Ilya Grigorik is Developer Advocate at Google, where he spends his days and nights on making the web fast and driving adoption of performance best practices.

Posted by Scott Knaster, Editor

By Yaniv Yaakubovich, Product Manager, Google+

Cross-posted from the Google+ Developers Blog

Today we’re launching three updates to Google+ Sign-In, making it easier and more effective to include Google authentication in your app:

1. Support for all Google account types 
Google+ Sign-In now supports all Google account types, including Google Apps users, and users without a Google+ profile.

2. Easy migration from other auth methods 
If you’re using OpenID v2 or OAuth 2.0 Login for authentication and want to upgrade to Google+ Sign-In, we’ve made it easy to do so; it’s entirely your choice. Google+ Sign-In can grow your audience in multiple ways — including over-the-air installs, interactive posts, and cross-device sign-in — and now it’s fully compatible with the OpenID Connect standard. For more details, see our sign-in migration guide.

3. Incremental auth
Incremental auth is a new way to ask users for the right permission scopes at the right time, versus all permissions at once.

For example:
  • If your app allows users to save music playlists to Google Drive, you can ask for basic profile info at startup, and only ask for Google Drive permissions when they’re ready to save their first mix. 
  • Likewise: you can ask for Google Calendar permissions only when users RSVP to an event, and so on.
Now that incremental auth is available for Google+ Sign-In, we recommend asking for the minimum set of permissions up front, then asking for further permissions only when they’re required. This approach not only helps users understand how their information will be used in your app, it can also reduce friction and increase app engagement.
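For a rough idea of what the incremental request looks like at the OAuth 2.0 level: Google's authorization endpoint accepts an `include_granted_scopes=true` parameter that merges a newly requested scope with the scopes the user has already granted. The sketch below builds such a URL; the helper name, client ID, and redirect URI are placeholders, and you should consult Google's OAuth 2.0 documentation for the full flow.

```javascript
// Build an authorization URL that asks for one additional scope while
// keeping previously granted scopes (incremental auth).
function buildIncrementalAuthUrl(clientId, redirectUri, newScope) {
  var params = [
    "client_id=" + encodeURIComponent(clientId),
    "redirect_uri=" + encodeURIComponent(redirectUri),
    "response_type=code",
    "scope=" + encodeURIComponent(newScope),
    // Ask Google to merge this grant with scopes the user already approved.
    "include_granted_scopes=true"
  ];
  return "https://accounts.google.com/o/oauth2/auth?" + params.join("&");
}

// Example: only ask for Drive access when the user saves their first mix.
// location.href = buildIncrementalAuthUrl(
//     "YOUR_CLIENT_ID", "https://example.com/callback",
//     "https://www.googleapis.com/auth/drive.file");
```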

8Tracks only asks for the necessary permissions to get users started in their app.


Once in the app, 8Tracks prompts users to connect their YouTube account to get mix recommendations.


When users click ‘Connect Your YouTube account’, 8Tracks asks users for the additional YouTube permission.

If you have any questions, join our Developing with Google+ community, or tag your Stack Overflow posts with ‘google-plus’.


+Yaniv Yaakubovich is a Product Manager on the Google+ Platform team, working on Google+ Sign-in. When he is not working he enjoys reading and exploring California with his wife and son.

Posted by +Scott Knaster, Editor

By Maile Ohye, Developer Programs Tech Lead

To help you capitalize on the huge opportunity to improve your mobile websites, we published a checklist for prioritizing development efforts. Several topics in the checklist reference relevant studies or business cases. Others contain videos and slides explaining how to use Google Analytics and Webmaster Tools to understand mobile visitors' experiences and intent. Copied below is an abridged version of the full checklist. And speaking of improvements… we'd love your feedback on how to enhance our checklist as well!

Checklist for mobile website improvements

Step 1: Stop frustrating your customers
  • Remove cumbersome extra windows from all mobile user-agents | Google recommendation, Article
    • JavaScript pop-ups that can be difficult to close
    • Overlays, especially to download apps (instead consider a banner such as iOS 6+ Smart App Banners or equivalent, side navigation, email marketing, etc.)
    • Survey requests prior to task completion
  • Provide device-appropriate functionality
    • Remove features that require plugins or videos not available on a user’s device (e.g., Adobe Flash isn’t playable on an iPhone or on Android versions 4.1 and higher) | Business case
    • Serve tablet users the desktop version (or if available, the tablet version) | Study
    • Check that the full desktop experience is accessible on mobile phones and, if selected, persists for the duration of the session (i.e., the user isn’t required to select “desktop version” after every page load) | Study
  • Correct high traffic, poor user-experience mobile pages


How to improve high-traffic, poor user-experience mobile pages with data from Google Analytics bounce rate and events (slides)

For all topics in the category “Stop frustrating your customers”, please see the full Checklist for mobile website improvement.

Step 2: Facilitate task completion
  • Optimize search engine processing and the searcher experience | Business case
    • Unblock resources (CSS, JavaScript) that are robots.txt disallowed
    • For RWD: Be sure to include CSS @media query
    • For a separate m. site: add rel=alternate media and rel=canonical annotations, as well as the Vary: User-Agent HTTP header, which helps Google implement Skip Redirect
    • For dynamic serving: send the Vary: User-Agent HTTP header
  • Optimize popular mobile persona workflows for your site

How to use Google Webmaster Tools and Google Analytics to optimize the top mobile tasks on your website (slides)

For all topics in the category “Facilitate task completion”, please see the full Checklist for mobile website improvement.
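As a concrete illustration of the separate m. site annotations above, the sketch below generates the tags and header for a desktop/mobile URL pair. The helper name is our own, and the max-width value in the media query is one commonly used example breakpoint, not a requirement.

```javascript
// Generate the markup annotations for a separate mobile site:
// the desktop page links to its mobile alternate, the mobile page
// points back to the canonical desktop URL, and both responses
// should carry a Vary: User-Agent header.
function mobileAnnotations(desktopUrl, mobileUrl) {
  return {
    // On the desktop page:
    desktopLinkTag: '<link rel="alternate" media="only screen and (max-width: 640px)" href="' + mobileUrl + '">',
    // On the mobile page:
    mobileLinkTag: '<link rel="canonical" href="' + desktopUrl + '">',
    // HTTP header telling caches and crawlers the response varies by device:
    header: "Vary: User-Agent"
  };
}
```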

Step 3: Turn customers into fans!
  • Consider search integration points with mobile apps | Background, Information
  • Investigate and/or attempt to track cross-device workflow | Business case
    • Logged in behavior on different devices
    • “Add to cart” or “add to wish list” re-visits
  • Brainstorm new ways to provide value
    • Build for mobile behavior, such as the in-store shopper | Business case
    • Leverage smartphone GPS, camera, accelerometer
    • Improve sharing or social behavior | Business case
    • Consider intuitive/fun tactile functionality with swiping, shaking, tapping


Maile Ohye is a Developer Advocate on Google's Webmaster Central Team. She very much enjoys chatting with friends and helping companies build a strategic online presence.

Posted by Scott Knaster, Editor

By Gregory Yakushev, Software Engineer

Today we are introducing new behavior for “all-following” changes to recurring events. Previously, we cut a recurring event at the point an “all-following” change was made and created a new recurring event starting at that point. Now, in most cases we keep the recurring event intact, while still applying the relevant changes to all following instances.

This means that users can now perform operations on the entire recurring series even after an “all-following” change has been made. They can modify, reply to, delete, or apply additional “all-following” changes. Also, in many cases, changes to specific instances of a recurring event will still be preserved after an “all-following” change.

To preserve backward compatibility, API clients will still see a separate recurring event after each “all-following” change. A separate post will announce API support for making these “all-following” changes and accessing whole recurring events with multiple “all-following” changes in them.

For example: suppose I have a recurring event, “Daily Meeting”, for my team. Paul knows that he will be on vacation, so he declines a few instances next month. I know that we will get a new intern in a month, so I invite him to “all following” instances starting next month. I also want to move the meeting to a different room starting next week, so I change the location and apply it to “all following” instances.

After all these operations, Paul’s responses are still preserved: I can see that he will not attend a few meetings next month. I can also see that on instances two months ahead, both of my “all-following” changes are reflected correctly: the room is changed and the intern is invited. And all attendees still see all “Daily Meeting” instances as one recurrence: they can accept, decline, or remove all of them with one click.


Grisha Yakushev ensures Calendar servers keep your data consistent and safe. He enjoys travelling the world, preferably by hitchhiking.

Posted by Scott Knaster, Editor

By Ari Balogh, Vice President, Cloud Platform

Cross-posted from the Google Cloud Platform Blog

Google Cloud Platform gives developers the flexibility to architect applications with both managed and unmanaged services that run on Google’s infrastructure. We’ve been working to improve the developer experience across our services to meet the standards our own engineers would expect here at Google.

Today, Google Compute Engine is Generally Available (GA), offering virtual machines that are performant, scalable, and reliable, with industry-leading security features such as encryption of data at rest. Compute Engine is available with 24/7 support and a 99.95% monthly SLA for your mission-critical workloads. We are also introducing several new features and lower prices for persistent disks and popular compute instances.

Expanded operating system support
During Preview, Compute Engine supported two of the most popular Linux distributions, Debian and CentOS, customized with a Google-built kernel. This gave developers a familiar environment to build on, but some software that required specific kernels or loadable modules (e.g. some file systems) was not supported. Now you can run any out-of-the-box Linux distribution (including SELinux and CoreOS) as well as any kernel or software you like, including Docker, FOG, xfs and aufs. We’re also announcing support for SUSE and Red Hat Enterprise Linux (in Limited Preview) and FreeBSD.

Transparent maintenance with live migration and automatic restart
At Google, we have found that regular maintenance of hardware and software infrastructure is critical to operating with a high level of reliability, security and performance. We’re introducing transparent maintenance that combines software and data center innovations with live migration technology to perform proactive maintenance while your virtual machines keep running. You now get all the benefits of regular updates and proactive maintenance without the downtime and reboots typically required. Furthermore, in the event of a failure, we automatically restart your VMs and get them back online in minutes. We’ve already rolled out this feature to our US zones, with others to follow in the coming months.

New 16-core instances
Developers have asked for instances with even greater computational power and memory for applications that range from silicon simulation to running high-scale NoSQL databases. To serve their needs, we’re launching three new instance types in Limited Preview with up to 16 cores and 104 gigabytes of RAM. They are available in the familiar standard, high-memory and high-CPU shapes.

Faster, cheaper Persistent Disks
Building highly scalable and reliable applications starts with using the right storage. Our Persistent Disk service offers you strong, consistent performance along with much higher durability than local disks. Today we’re lowering the price of Persistent Disk by 60% per Gigabyte and dropping I/O charges so that you get a predictable, low price for your block storage device. I/O available to a volume scales linearly with size, and the largest Persistent Disk volumes have up to 700% higher peak I/O capability. You can read more about the improvements to Persistent Disk in our previous blog post.

10% Lower Prices for Standard Instances
We’re also lowering prices on our most popular standard Compute Engine instances by 10% in all regions.

Customers and partners using Compute Engine
In the past few months, customers like Snapchat, Cooladata, Mendelics, Evite and Wix have built complex systems on Compute Engine and partners like SaltStack, Wowza, Rightscale, Qubole, Red Hat, SUSE, and Scalr have joined our Cloud Platform Partner Program, with new integrations with Compute Engine.
“We find that Compute Engine scales quickly, allowing us to easily meet the flow of new sequencing requests… Compute Engine has helped us scale with our demands and has been a key component to helping our physicians diagnose and cure genetic diseases in Brazil and around the world.”
- David Schlesinger, CEO of Mendelics
"Google Cloud Platform provides the most consistent performance we’ve ever seen. Every VM, every disk, performs exactly as we expect it to and gave us the ability to build fast, low-latency applications."
- Sebastian Stadil, CEO of Scalr
We’re looking forward to this next step for Google Cloud Platform as we continue to help developers and businesses everywhere benefit from Google’s technical and operational expertise.


Ari Balogh is the Vice President, Cloud Platform at Google and manages the teams responsible for building Google Cloud Platform and other parts of Google’s internal infrastructure.

Posted by Scott Knaster, Editor