Posted by Philip Walton, Developer Programs Engineer
The web has changed a lot since the early days of Google Analytics. Back then, most websites consisted of individual pages, and moving from one page to the next involved clicking a link and making a full-page request. With sites like this, it was possible to track the majority of relevant user interactions with a single, one-size-fits-all JavaScript tracking snippet.
But the web of today is much more complex and varied than it used to be. In addition to traditional, static websites, we have full-featured web applications. User interactions aren't limited to clicking links and submitting forms, and a "pageview" doesn't always mean a full-page load.
The web has changed, but analytics implementations have stayed pretty much the same. Most Google Analytics users copy and paste the default tracking snippet and that's it. They know there's more they can do with Google Analytics, but taking the time to learn is often not a priority.
Autotrack for analytics.js is a new solution to this problem. It attempts to leverage as many Google Analytics features as possible while requiring minimal manual implementation. It gives developers a foundation for tracking data relevant to today's modern web.
The autotrack library is built as a collection of analytics.js plugins, making it easy to use the entire library as-is or to pick and choose just the plugins you need. The next few sections describe some of the features autotrack enables.
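For instance, a typical installation looks something like the following sketch. Here 'UA-XXXXX-Y' is a placeholder for your own tracking ID, the autotrack.js path depends on where you host the file, and you would require only the plugins you actually need:

```html
<script>
// Standard analytics.js command queue, plus autotrack plugin requires.
window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)};ga.l=+new Date;
ga('create', 'UA-XXXXX-Y', 'auto');
ga('require', 'eventTracker');        // declarative event tracking
ga('require', 'outboundLinkTracker'); // outbound link tracking
ga('require', 'urlChangeTracker');    // History API pageview tracking
ga('send', 'pageview');
</script>
<script async src="https://www.google-analytics.com/analytics.js"></script>
<script async src="path/to/autotrack.js"></script>
```

Each ga('require', ...) line activates one plugin; dropping a line simply disables that feature.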
When a user clicks a link that points to another page on a site, that other page typically sends a pageview hit once the user arrives. Because there's a series of pageviews, Google Analytics can figure out on the back end where the user navigated to (and from). But if a user clicks a link or submits a form to an external domain, that action is not captured unless you specifically tell Google Analytics what happened.
Historically, outbound link and form tracking has been tricky to implement because most browsers stop executing JavaScript on the current page once a new page starts to load. Autotrack handles these complications for you, so you get outbound link and form tracking for free.
If you're building a single-page application that dynamically loads content and updates the URL using the History API, the default tracking snippet will not suffice: it only tracks the initial page load. Even if you're sending additional pageviews after successfully loading new content, there can still be complications.
Autotrack automatically detects URL changes made via the History API and tracks those as pageviews. It also keeps the tracker in sync with the updated URL so all subsequent hits (events, social interactions, etc.) are associated with the correct URL.
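For comparison, here is a rough sketch of what you would otherwise have to wire up by hand; the urlChangeTracker plugin handles this (plus popstate and other edge cases) for you. The ga() function is assumed to be the one defined by the standard analytics.js snippet:

```javascript
// Hand-rolled History API tracking (illustrative only).
var originalPushState = history.pushState;
history.pushState = function(state, title, url) {
  originalPushState.apply(history, arguments);
  // Keep the tracker's page field in sync so later hits
  // (events, social interactions, etc.) use the new URL.
  ga('set', 'page', location.pathname + location.search);
  ga('send', 'pageview');
};
```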
Sometimes it's easier to declaratively add an event to the HTML than to manually write an event listener in JavaScript. Tracking simple click events is a prime example of this. To track click events with autotrack, you just add data attributes to your markup.
<button data-event-category="Video" data-event-action="play">Play</button>
When a user clicks on the above button, an event with the corresponding category and action (and, optionally, label and value) is sent to Google Analytics.
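Conceptually, a click on the button above amounts to the equivalent imperative analytics.js call, which the eventTracker plugin constructs from the data attributes (the label/value attribute names below are an assumption; check the plugin documentation for the exact names your version supports):

```javascript
// Equivalent hand-written tracking for the button above:
ga('send', 'event', 'Video', 'play');

// With optional label and value attributes on the element,
// the hit would carry those fields too, e.g.:
// ga('send', 'event', 'Video', 'play', 'Homepage hero', 1);
```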
Most sites today use responsive design to update the page layout based on the screen size or capabilities of the user's device. If media queries are used to alter the look or functionality of a page, it's important to capture that information to better understand how usage differs when different media queries are active.
Autotrack allows you to register the set of media query values you're using, and those values are automatically tracked via custom dimensions. It also tracks when those values change. (Note that media query tracking requires you to set up custom dimensions in Google Analytics. The process only takes a few minutes, and the instructions are explained in the mediaQueryTracker plugin documentation.)
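A media query registration might look like the following sketch. The dimension index and breakpoint definitions are placeholders for your own setup; the option names follow the mediaQueryTracker plugin documentation:

```javascript
ga('require', 'mediaQueryTracker', {
  mediaQueryDefinitions: [{
    name: 'Breakpoint',
    dimensionIndex: 1,  // index of the custom dimension you created in GA
    items: [
      {name: 'sm', media: 'all'},
      {name: 'md', media: '(min-width: 30em)'},
      {name: 'lg', media: '(min-width: 48em)'}
    ]
  }]
});
```

With this in place, the matching breakpoint name ('sm', 'md', or 'lg') is sent as the custom dimension's value, and changes are tracked as the viewport crosses a breakpoint.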
These are just a few of the features you can enable with autotrack. For a complete list of plugins and instructions on how to use them, refer to the autotrack documentation on GitHub.
While anyone could use and benefit from autotrack, the library is primarily geared toward sites that do not customize their current analytics implementation and would like to take advantage of the features described in this article.
If you're just using the default tracking snippet today, you should consider using autotrack. If you already have a custom implementation of Google Analytics, you should first check the documentation to make sure none of the autotrack features will conflict and no data will be double-counted.
To get started using autotrack, check out the usage section of the documentation. If you're curious to see what the data captured by autotrack looks like, the Google Analytics Demos & Tools site uses autotrack and has a page with charts showing the site's own Google Analytics data.
If you want to go deeper, the autotrack library is open source and can be a great learning resource. Have a read through the plugin source code to get a better understanding of how many of the advanced analytics.js features work.
Lastly, if you have feedback or suggestions, please let us know. You can report bugs or open issues on GitHub.
Posted by Siddartha Janga, Google iOS Developers
After brewing for quite some time, EarlGrey is finally ready: we are excited to announce this functional UI testing framework for iOS. Several Google apps, including YouTube, Google Calendar, Google Photos, Google Translate, and Google Play Music, have successfully adopted the framework for their functional testing needs.
The key features offered by EarlGrey include:
- Synchronization: tests automatically wait for the UI, network requests, and various queues to become idle before interacting with the app, so you don't need to sprinkle waits or sleeps through your tests.
- Visibility checks: all interactions are performed only on UI elements that a user could actually see.
- User-like interaction: taps and swipes are performed using app-level touch events rather than element-level programmatic calls, closely mimicking a real user.
Are you in need of a cup of refreshing EarlGrey? EarlGrey has been open sourced under the Apache license. Check out the getting started guide and add EarlGrey to your project using CocoaPods, or add it manually to your Xcode project.
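With CocoaPods, the setup amounts to a short Podfile entry; this is a sketch, and 'MyAppTests' is a placeholder for your own test target's name:

```ruby
# Podfile (sketch): EarlGrey belongs in your test target, not the app target.
target 'MyAppTests' do
  pod 'EarlGrey'
end
```

After running pod install, see the getting started guide for the remaining Xcode configuration steps.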
We’re delighted to announce the availability of the People API. With it, you can retrieve data about an authenticated user’s connections from their Contacts. Previously, developers had to make multiple calls to the Google+ API for user profiles and the Contacts API for contacts. The new People API uses the newest protocols and technologies and will eventually replace the Contacts API which uses the GData protocol.
For example, if your user has contacts in her private contact list, a call to the API (if she provides consent to do so) will retrieve a list containing the contacts merged with any linked profiles. If the user grants the relevant scopes, the results are returned as a people.connections.list object. Each person object in this list will have a resourceName property, which can be used to get additional data about that person with a call to people.get.
The API is built on HTTP and JSON, so any standard HTTP client can send requests to it and parse the response. However, applications need to be authorized to access the API, so you will need to create a project on the Google Developers Console to get the credentials required to access the service. All the steps to do so are here. If you're new to the Google APIs and/or the Developers Console, check out the first in a series of videos to help you get up to speed.
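For example, once you have an OAuth 2.0 access token with a contacts scope, any HTTP client can list the user's connections directly. This is a sketch; ACCESS_TOKEN is a placeholder for a real token:

```shell
# List the authenticated user's connections over plain HTTP.
curl -H "Authorization: Bearer ACCESS_TOKEN" \
  "https://people.googleapis.com/v1/people/me/connections"
```

The response is a JSON object whose connections array contains one person object per contact.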
Once you’re connected and authorized, you can then get the user’s connections like this (using the Google APIs Client Library for Java):
ListConnectionsResponse response = peopleService.people().connections()
    .list("people/me").execute();
List<Person> connections = response.getConnections();
Full documentation on the people.connections.list method is available here.
The list of connections will have details on all the user’s social connections if the required scopes have been granted. Contacts will only be returned if the user granted a contacts scope.
Each Person item will have a resourceName associated with it, so additional data for that person is accessible via a simple call (here, connection is one of the Person objects from the list above):
Person person = peopleService.people().get(connection.getResourceName()).execute();
Details on this API call can be found here.
In addition to merging data from multiple sources and APIs into a single cohesive data source, the new People API also exposes additional data that was not available before, such as private addresses, phone numbers, email addresses, and birthdays for a user who has given permission.
We hope that these new features and data along with simplified access to existing data inspires you to create the next generation of cool web and mobile apps that delight your users and those in their circles of influence. To learn more about the People API, check out the official documentation here.
Originally posted on Google Apps Developers blog
Posted by Henry Wang, Associate Product Marketing Manager
Originally posted on Google Research Blog
Posted by Peter Lubbers, Senior Program Manager, Google Developer Training
Almost three years ago we shipped our very first Udacity course about HTML5 Game Development. Today marks a milestone that we proudly want to share with the world. The 1 millionth person has enrolled in our Google Developer Training courses. Was it you?
This milestone is more than just a number. Thanks to our partnership with Udacity, this training gives developers access to skills that empower families, communities, and the future.
One million developers around the world have made a commitment to learn a new language, expand their craft, start a new business, completely shift careers and more. So, here's to the next million people who are excited about using technology to solve the world’s most pressing challenges.
Keep learning!
Posted by Vijay Subramani, Technical Program Manager, Google Cloud Platform
Back in 2011, we announced the deprecation of the following APIs: Google Patent Search API, Google News Search API, Google Blog Search API, Google Video Search API, and Google Image Search API. We supported these APIs for a three-year period (and beyond), but as all things come to an end, so has the deprecation window for these APIs.
We are now announcing the turndown of the above APIs. These APIs will cease operations on February 15, 2016.
You may wish to look at our Custom Search API as an alternative to these APIs.
Posted by Nathan Martz, Product Manager, Google Cardboard
Human beings experience sound in all directions—like when a fire truck zooms by, or when an airplane is overhead. Starting today, the Cardboard SDKs for Unity and Android support spatial audio, so you can create equally immersive audio experiences in your virtual reality (VR) apps. All your users need is their smartphone, a regular pair of headphones, and a Google Cardboard viewer.
Many apps create simple versions of spatial audio by playing sounds from the left and right in separate speakers. But with today's SDK updates, your app can produce sound the same way humans actually hear it. For example, a sound behind and above the user reaches each ear at slightly different times and intensities, letting the user pinpoint where it's coming from.
We built today’s updates with performance in mind, so adding spatial audio to your app has minimal impact on the primary CPU (where your app does most of its work), chiefly by processing the audio separately from your app's main workload.
It’s really easy to get started with the SDK’s new audio features. Unity developers will find a comprehensive set of components for creating soundscapes on Android, iOS, Windows and OS X. And native Android developers will now have a simple Java API for simulating virtual sounds and environments.
Check out our Android sample app (for developer reference only), browse the documentation on the Cardboard developers site, and start experimenting with spatial audio today. We’re excited to see (and hear) the new experiences you’ll create!