By Felipe Hoffa, Cloud Platform team

Cross-posted from the Google Cloud Platform Blog
Editor's note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.

Last Tuesday we announced an exciting set of changes to Google BigQuery, making your experience easier, faster and more powerful. In addition to new features and improvements like table wildcard functions, views, and parallel exports, BigQuery now features increased streaming capacity, lower pricing, and more.

1000x increase in streaming capacity

Last September we announced the ability to stream data into BigQuery for instant analysis, with an ingestion limit of 100 rows per second. While developers have enjoyed and exploited this capability, they've asked for more capacity. You can now stream up to 100,000 rows per second per table into BigQuery, 1,000 times more than before.

For a great demonstration of the power of streaming data into BigQuery, check out the live demo from the keynote at Cloud Platform Live.

Table wildcard functions

Users often partition their big tables into smaller units for data lifecycle and optimization purposes. For example, instead of having yearly tables, they can be split into monthly or even daily sets. BigQuery now offers table wildcard functions to help easily query tables that match common parameters.

The downside of partitioning tables is that queries need to reference multiple tables. This would be easier if there were a way to tell BigQuery "process all the tables between March 3rd and March 25th" or "read every table whose name starts with an 'a'". With this release, you can do exactly that.

TABLE_DATE_RANGE() queries all tables that overlap with a time range (based on the table names), while TABLE_QUERY() accepts regular expressions to select the tables to analyze.

For more information, see the documentation and syntax for table wildcard functions.
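To make this concrete, here is a sketch of both functions (the dataset name and table prefix are made up for illustration):

```sql
-- All daily tables named like mydataset.applogs_YYYYMMDD whose dates
-- fall between March 3rd and March 25th
SELECT COUNT(*) AS rows_in_range
FROM TABLE_DATE_RANGE(mydataset.applogs_,
                      TIMESTAMP('2014-03-03'),
                      TIMESTAMP('2014-03-25'));

-- Every table in the dataset whose name starts with an "a"
SELECT COUNT(*) AS matching_rows
FROM TABLE_QUERY(mydataset, 'REGEXP_MATCH(table_id, r"^a")');
```

In both cases BigQuery expands the function into the matching set of tables before running the query, so the rest of the statement is ordinary SQL.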

Improved SQL support and table views

BigQuery has adopted SQL as its query language because it's one of the most well-known, simple, and powerful ways to analyze data. Nevertheless, BigQuery used to impose some restrictions relative to traditional SQL-92, such as having to write multiple sub-queries instead of a single multi-join. Not anymore: BigQuery now supports multi-join and CROSS JOIN, and improves its SQL capabilities with more flexible alias support, fewer ORDER BY restrictions, more window functions, smarter PARTITION BY, and more.

A notable new feature is the ability to save queries as views, and use them as building blocks for more complex queries. To define a view, you can use the browser tool to save a query, the API, or the newest version of the BigQuery command-line tool (by downloading the Google Cloud SDK).

User-defined metadata

Now you can annotate each dataset, table, and field with descriptions that are displayed within BigQuery. This way people you share your datasets with will have an easier time identifying them.

JSON parsing functions

BigQuery is optimized for structured data: before loading data into BigQuery, you should first define a table with the right columns. This is not always easy, as JSON schemas might be flexible and in constant flux. BigQuery now lets you store JSON encoded objects into string fields, and you can use the JSON_EXTRACT and JSON_EXTRACT_SCALAR functions to easily parse them later using JSONPath-like expressions.

For example:
SELECT json_extract_scalar(
   "{'book': {'title': 'Harry Potter'}}",
   "$.book.title")

Fast parallel exports

BigQuery is a great place to store all your data and have it ready for instant analysis using SQL queries. But sometimes SQL is not enough, and you might want to analyze your data with external tools. That's why we developed fast parallel exports: you can define how many workers will be consuming the data, and BigQuery exports the data into multiple files optimized for that number of workers.
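As a sketch of what this looks like through the jobs API (the project, dataset, table, and bucket names are hypothetical), an extract job can pass several wildcard URI patterns, one per worker, and BigQuery partitions the output across them:

```json
{
  "configuration": {
    "extract": {
      "sourceTable": {
        "projectId": "my-project",
        "datasetId": "mydataset",
        "tableId": "big_table"
      },
      "destinationUris": [
        "gs://my-bucket/worker-1/part-*.csv",
        "gs://my-bucket/worker-2/part-*.csv",
        "gs://my-bucket/worker-3/part-*.csv"
      ]
    }
  }
}
```

Each worker can then read only the files under its own prefix.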

Check the exporting data documentation, or stay tuned for the upcoming Hadoop connector to BigQuery documentation.

Massive price reductions

At Cloud Platform Live, we announced a massive price reduction: storage costs are going down 68%, from 8 cents per gigabyte per month to only 2.6 cents, while querying costs are going down 85%, from 3.5 cents per gigabyte to only 0.5 cents. Previously announced streaming costs are now reduced by 90%. And finally, we announced the ability to purchase reserved processing capacity, for even lower prices and the ability to precisely predict costs. And you always have the option to burst using on-demand capacity.

I want to take this space to celebrate the latest open source community contributions to the BigQuery ecosystem. R has its own connector to BigQuery (and a tutorial), and so does Python pandas (check out the video we made with Pearson). Ruby developers can now use BigQuery with an ActiveRecord connector, and send all their logs with fluentd. Thanks all, and keep surprising us!

Felipe Hoffa is part of the Cloud Platform Team. He'd love to see the world's data accessible for everyone in BigQuery.

Posted by Louis Gray, Googler

By Igor Clark, Google Creative Lab

With the release of the Google Cast SDK, making interactive experiences for the TV is now as easy as making interactive stuff for the web.

Google Creative Lab and Hook Studios took the SDK for a spin to make Photowall for Chromecast: a new Chrome Experiment that lets people collaborate with images on the TV.

Anyone with a Chromecast can set up a Photowall on their TV and have friends start adding photos to it from their phones and tablets in real time.

So how does it work?

The wall-hosting apps communicate with the Chromecast using the Google Cast SDK’s sender and receiver APIs. A simple call to the requestSession method using the Chrome API or launchApplication on the iOS/Android APIs is all it takes to get started. From there, communication with the Chromecast is helped along using the Receiver API’s getCastMessageBus method and a sendMessage call from the Chrome, iOS or Android APIs.
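In rough outline (the namespace and the message payload here are hypothetical, and the calls are abbreviated from the Cast SDK reference), the round trip looks something like this:

```javascript
// Receiver (runs on the Chromecast): listen on a custom message namespace.
var manager = cast.receiver.CastReceiverManager.getInstance();
var bus = manager.getCastMessageBus('urn:x-cast:com.example.photowall');
bus.onMessage = function(event) {
  // e.g. parse event.data and add the described photo to the wall
};
manager.start();

// Sender (runs in Chrome): start a session, then push a message to the TV.
chrome.cast.requestSession(function(session) {
  session.sendMessage('urn:x-cast:com.example.photowall',
                      {type: 'photo', url: '...'},
                      function() { /* message delivered */ },
                      function(err) { /* handle send error */ });
}, function(err) { /* session could not be created */ });
```

The iOS and Android sender APIs follow the same shape, with launchApplication taking the place of requestSession.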

Using the Google Cast SDK makes it easy to launch a session on a Chromecast device. While a host is creating their new Photowall, they simply select which Chromecast they would like to use for displaying the photos. After a few simple steps, a unique five-digit code is generated that allows guests to connect to the wall from their mobile devices.

The Chromecast device then loads the Photowall application and begins waiting for setup to complete. Once ready, the Chromecast displays the newly-generated wall code and waits for photos to start rolling in. If at any point the Chromecast loses power or internet connection, the device can be relaunched with an existing Photowall right from the administration dashboard.

Tying it all together: The mesh

A mesh network connects the Photowall’s host, the photo-sharing guests, and the Chromecast. The devices communicate with each other via websockets managed by a Google Compute Engine-powered node.js server application. A Google App Engine app coordinates wall creation, authentication and photo storage on the server side, using the App Engine Datastore.

After a unique code has been generated during the Photowall creation process, the App Engine app looks for a Compute Engine instance to use for websocket communication. The instance is then told to route websocket traffic flagged with the new wall’s unique code to all devices that are members of the Photowall with that code.

The instance’s address and the wall code are returned to the App Engine app. When a guest enters the wall code into the photo-sharing app on their browser, the App Engine app returns the address of the Compute Engine websocket server associated with that code. The app then connects to that server and joins the appropriate websocket/mesh network, allowing for two-way communication between the host and guests.

Why is this necessary? If a guest uploads a photo that the host decides to delete for whatever reason, the guest needs to be notified immediately so that they don’t try to take further action on it themselves.

A workaround for websockets

Using websockets this way proved to be challenging on iOS devices. When a device is locked or goes to sleep, the websocket connection should be terminated. However, on iOS it seems that JavaScript execution can be halted before the websocket close event is fired, so the app is never notified of the disconnection; when the phone is unlocked again, it still believes the connection is alive.

To get around this inconsistent websocket disconnection issue, we implemented a check approximately every 5 seconds to examine the ready state of the socket. If it has disconnected we reconnect and continue monitoring. Messages are buffered in the event of a disconnection and sent in order when a connection is reestablished.
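In outline, the pattern looks like the following sketch (the connect_fn factory and its .ready/.send interface are hypothetical stand-ins for a real websocket client):

```python
class ReconnectingSocket:
    """Sketch of the polling workaround: check the socket's ready state on a
    timer, reconnect if it has silently dropped, and flush messages that were
    buffered while offline. `connect_fn` is a hypothetical factory returning
    an object with a .ready flag and a .send(msg) method."""

    def __init__(self, connect_fn, check_interval=5.0):
        self.connect_fn = connect_fn
        self.check_interval = check_interval  # seconds between checks
        self.sock = connect_fn()
        self.buffer = []

    def send(self, msg):
        if self.sock.ready:
            self.sock.send(msg)
        else:
            self.buffer.append(msg)  # hold the message until we reconnect

    def check(self):
        """Call roughly every `check_interval` seconds (e.g. from a timer)."""
        if not self.sock.ready:
            self.sock = self.connect_fn()  # reconnect
            for msg in self.buffer:        # replay buffered messages in order
                self.sock.send(msg)
            self.buffer.clear()
```

The key point is that the app never trusts the close event alone; it treats the ready state, observed on a timer, as the source of truth.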

Custom photo editing

The heart of the Photowall mobile web application is photo uploading. We created a custom photo editing experience for guests wanting to add their photos to a Photowall. They can upload photos directly from their device's camera or choose one from its gallery. Then comes the fun stuff: cropping, doodling and captioning.

Photowall for Chromecast has been a fun opportunity to throw out everything we know about what a photo slideshow could be. And it’s just one example of what the Chromecast is capable of beyond content streaming. We barely scratched the surface of what the Google Cast SDK can do. We’re excited to see what’s next for Chromecast apps, and to build another.

For more on what’s under the hood of Photowall for Chromecast, you can tune in to our Google Developers Live event for an in-depth discussion on Thursday, April 3rd, 2014 at 2pm PDT.

Igor Clark is Creative Tech Lead at Google Creative Lab. The Creative Lab is a small team of makers and thinkers whose mission is to remind the world what it is they love about Google.

Posted by Louis Gray, Googler

By Navneet Joneja, Cloud Platform Team

Cross-posted from the Google Cloud Platform blog

Editor’s note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.

For many developers, building a cloud-native application begins with a fundamental decision. Are you going to build it on Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Will you build large pieces of plumbing yourself so that you have complete flexibility and control, or will you cede control over the environment to get high productivity?

You shouldn’t have to choose between the openness, flexibility and control of IaaS, and the productivity and auto-management of PaaS. Describing solutions declaratively and taking advantage of intelligent management systems that understand and manage deployments leads to higher availability and quality of service. This frees engineers up to focus on writing code and significantly reduces the need to carry a pager.

Today, we’re introducing Managed Virtual Machines and Deployment Manager. These are our first steps towards enabling developers to have the best of both worlds.

Managed Virtual Machines

At Google Cloud Platform Live we took the first step towards ending the PaaS/IaaS dichotomy by introducing Managed Virtual Machines. With Managed VMs, you can build your application (or components of it) using virtual machines running in Google Compute Engine while benefiting from the auto-management and services that Google App Engine provides. This allows you to easily use technology that isn’t built into one of our managed runtimes, whether that is a different programming language, native code, or direct access to the file system or network stack. Further, if you find you need to ssh into a VM in order to debug a particularly thorny issue, it’s easy to “break glass” and do just that.

Moving from an App Engine runtime to a managed VM can be as easy as adding one line to your app.yaml file:
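As a sketch of that one line (the rest of the app.yaml, such as the module and runtime settings, is omitted here):

```yaml
# app.yaml: opt this application into running on a managed VM
vm: true
```

With that flag set, the next deployment runs the app on Compute Engine VMs that App Engine manages for you.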


At Cloud Platform Live, we also demonstrated how the next stage in the evolution of Managed VMs will allow you to bring your own runtime to App Engine, so you won’t be limited to the runtimes we support out of the box.

Managed Virtual Machines will soon launch in Limited Preview, and you can request access here starting today.

Introducing Google Cloud Deployment Manager

A key part of deploying software at scale is ensuring that configuration happens automatically from a single source of truth. This is because accumulated manual configuration often results in “snowflakes” - components that are unique and almost impossible to replicate - which in turn makes services harder to maintain, scale and troubleshoot.

These best practices are baked into the App Engine and Managed VM toolchains. Now, we’d like to make it easy for developers who are using unmanaged VMs to also take advantage of declarative configuration and foundational management capabilities like health-checking and auto-scaling. So, we’re launching Google Cloud Deployment Manager - a new service that allows you to create declarative deployments of Cloud Platform resources that can then be created, actively health monitored, and auto-scaled as needed.

Deployment Manager gives you a simple YAML syntax to create parameterizable templates that describe your Cloud Platform projects, including:
  • The attributes of any Compute Engine virtual machines (e.g. instance type, network settings, persistent disk, VM metadata).
  • Health checks and auto-scaling.
  • Startup scripts that can be used to launch applications or other configuration management software (like Puppet, Chef or SaltStack).
Templates can be re-used across multiple deployments. Over time, we expect to extend Deployment Manager to cover additional Cloud Platform resources.
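The exact schema is described in the Deployment Manager documentation; as a purely illustrative sketch (every name and field below is hypothetical), a template covering the items above might look like:

```yaml
# Hypothetical template sketch: a pool of web-server VMs with a health
# check, autoscaling bounds, and a startup script.
name: frontend-template
instances:
  machineType: n1-standard-1
  zone: us-central1-a
  metadata:
    startup-script: |
      #!/bin/bash
      /opt/myapp/bin/start-server
healthCheck:
  path: /healthz
  port: 8080
autoscaling:
  minReplicas: 2
  maxReplicas: 10
  targetUtilization: 0.8
```

The value of the declarative form is that the same template can be deployed repeatedly and each deployment comes out identical, with no snowflakes.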

Deployment Manager enables you to think in terms of logical infrastructure, where you describe your service declaratively and let Google’s management systems deploy and manage their health on your behalf. Please see the Deployment Manager documentation to learn more and to sign up for the Limited Preview.

We believe that combining flexibility and openness with the ease and productivity of auto-management and a simple tool-chain is the foundation of the next-generation cloud. Managed VMs and Deployment Manager are the first steps we’re taking towards delivering that vision.

Update on operating systems

We introduced support for SUSE and Red Hat Enterprise Linux on Compute Engine in December. Today, SUSE is Generally Available, and we announced Open Preview for Red Hat Enterprise Linux last week. We're also announcing the Limited Preview for Windows Server 2008 R2, and you can sign up for access now. Windows Server will be offered at $0.02 per hour for the f1-micro and g1-small instances and $0.04 per core per hour for all other instances (Windows prices are in addition to normal VM charges).

Simple, lower prices

As we mentioned on Tuesday, we think pricing should be simpler and more closely track cost reductions as a result of Moore’s law. So we’re making several changes to our pricing effective April 1, 2014.

First, we’ve cut virtual machine prices by up to 53%:
  • We’ve dropped prices by 32% across the board.
  • We’re introducing sustained-use discounts, which lower your effective price as your usage goes up. Discounts start when you use a VM for over 25% of the month and increase with usage. When you use a VM for an entire month, this amounts to an additional 30% discount.
What’s more, you don’t need to sign up for anything, make any financial commitments or pay any upfront fees. We automatically give you the best price for every VM you run, and you still only pay for the minutes that you use.
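As a sketch of the arithmetic (the per-quartile rates below are an assumption chosen to reproduce the numbers in the post, not published tariff details): billing each successive quarter of the month at 100%, 80%, 60%, and 40% of the base rate yields exactly the stated 30% discount for a full month of use.

```python
def sustained_use_cost(base_hourly, hours_used, hours_in_month=730):
    """Hypothetical quartile discount schedule consistent with the post:
    the first quarter of the month is billed at 100% of the base rate,
    the next at 80%, then 60%, then 40%. A VM running the whole month
    therefore pays (1.0 + 0.8 + 0.6 + 0.4) / 4 = 70% of the base price,
    i.e. the stated 30% discount."""
    rates = [1.0, 0.8, 0.6, 0.4]
    quarter = hours_in_month / 4
    cost, remaining = 0.0, float(hours_used)
    for rate in rates:
        chunk = min(remaining, quarter)   # hours billed in this quartile
        cost += chunk * base_hourly * rate
        remaining -= chunk
        if remaining <= 0:
            break
    return cost
```

For example, a VM used for exactly half the month pays an effective 90% of the base rate, since only its second quarter of usage is discounted.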

Here’s what that looks like for our standard 1-core (n1-standard-1) instance.

Finally, we’ve drastically simplified pricing for App Engine, and lowered pricing for instance-hours by 37.5%, dedicated memcache by 50% and Datastore writes by 30%. In addition, many services, including SNI SSL and PageSpeed, are now offered to all applications at no extra cost.

We hope you find these new capabilities useful, and look forward to hearing from you! If you haven’t yet done so, you can sign up for Google Cloud Platform here.

Navneet Joneja, Senior Product Manager

Posted by Louis Gray, Googler

By Seth Ladd, Developer Advocate

In celebration of Dart 1.0, the global developer community organized over 120 Dart Flight School events, and the response was overwhelming. Throughout February, 8500 developers learned how to build modern web (and server!) apps with Dart and AngularDart. Attendees got their Dart wings in Laos, France, Uganda, San Francisco, New Delhi, Bolivia and everywhere in between.

If you missed out, you can watch this Introduction to AngularDart video, build your first Dart app with the Darrrt Pirate Badge code lab, and try the AngularDart code lab.

Here are some of our favorite photos -- some events really embraced the theme!

+Kasper Lund, co-founder of Dart, speaking inside a decommissioned 747 at a Flight School hosted by GDG Netherlands.

GDG Seattle hosted their Flight School in the Museum of Flight.


Eight cities in China held simultaneous events over Hangouts on Air (on Air, get it?). GDGs in Beijing, Hangzhou, Lanzhou, Shanghai, Suzhou, Xiamen, Xi’an, and Zhangjiakou participated.

Check out more photo highlights from around the world.

Thank you to the amazing community organizers, speakers, volunteers, and attendees that made this possible.

Next time, space!

Seth Ladd is a Developer Advocate on Dart. He's a web engineer, book author, conference organizer, and loves a game of badminton.

Posted by Louis Gray, Googler

By Billy Rutledge, Director of Developer Relations

Today we launched the Google I/O 2014 website. Play with the experiment, get a preview of this year's conference, and continue to follow the Google Developers blog for updates on the event.


Now, on to what I know you're waiting to hear about most. A month ago, we mentioned that this year’s registration process would be different. You won't need to scramble the second registration opens, as we will not be implementing a first-come-first-served model this year. Instead, registration will remain open from April 8 - 10 and you can apply any time during this window. We'll randomly select applicants after the window closes on April 10, and send ticket purchase confirmation emails shortly thereafter.

So sit back, relax, sleep in, and visit the Google I/O website from April 8-10 when the registration window is open. For full details, visit our Help page.

I/O Extended & Live:

If you can't make it to San Francisco, or would just rather experience I/O on your own schedule, we'll be bringing I/O to you in two ways. Watch the livestream of the keynote and sessions from the comfort of your home or office. Or, attend an I/O Extended event in your area. More details on these programs will be available soon.

We're working hard to make sure Google I/O 2014 is the best I/O yet. We hope to see you in June!

Billy Rutledge, Director of Developer Relations

Posted by Louis Gray, Googler

By Cody Bratt, Google Cloud Platform team

Cross-posted from the Google Cloud Platform blog

Editor's note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.

Yesterday, we unveiled a new set of developer experiences for Google Cloud Platform, inspired by the work we've done inside Google to improve our own developers' productivity. We want to walk you through these experiences in more detail and explain how we think they can help you focus on developing your applications and growing your business.

Isolating production problems faster

Understanding how applications are running in production can be challenging and sometimes it's unavoidable that errors will make it into production. We've started adding new capabilities to Cloud Platform this week that make it easier to isolate and fix production problems in your application running on Google App Engine.

We are adding a 'Diff' link to the Releases page (shown below) which brings you to a rollup of all the commits that happened between deployments and the source code changes they introduced. This is invaluable when you are trying to isolate a production issue.
You can see here where Brad introduced an error into production.
But looking at source changes can still be like looking for a needle in a haystack. This is why we're working to combine data from your application running in production and link it with your source code.
In the new and improved logs viewer, we aggregate logs from all your instances in a single view in near real time. Of course, with high traffic levels, just browsing is unlikely to be helpful. You can filter based on a number of different criteria including regular expressions and the time when you saw an issue. We've also improved the overall usability by letting you scroll continuously rather than viewing 20 logs per page at a time.

Ordinarily, debugging investigations from the logs viewer would require you to find the code, track down the right file and line, and ensure it's the same version that was deployed in production. This can cause you to completely lose context. To address this, we've added links from stack traces to the associated code for push-to-deploy users. In one click you are brought to the source viewer at the revision causing the problem, with the associated line highlighted.

Shortening your time to fix

You may have noticed the 'Edit' buttons in the source viewer. We're introducing this because we think that finding the error in production is only part of the effort required when things go awry. We continue to ask ourselves how we can make it even faster for you to fix problems. One of the ways we do this inside Google today is to make the fix immediately from the browser.

So, we're bringing this to you by making it possible to edit a file in place directly from the source viewer in the Console. You can simply click the 'Edit' button, make your changes to that file, and click the 'Commit' button. No local setup required. We also know that sometimes fixes aren't that simple and we're investigating ways we can make it easy to seamlessly open your files in a desktop editor or IDE directly from the Console.
Fix simple problems directly in the browser
Trigger your push-to-deploy setup instantly
Since we've integrated this with your push-to-deploy configuration, the commit triggers any build and test steps you have, ensuring your code is fully vetted before it reaches production. Further, since we've built this on top of Git, each change is fully attributed and traceable in your Git repository. This is not SSHing into a machine and hacking on code in production.

An integrated ecosystem of tools you know

Speaking of using other editors, we believe that you are most productive when you have access to all the tools you know and love. That's why we're committed to creating a well integrated set of experiences across those tools, rather than asking you to switch. With the Git push-to-deploy feature we launched last year, we wanted to make it easy for you to deploy your application using standard Git commands while giving you free private hosted Git repositories. In addition, we understand that many of you host your source code on GitHub, and in fact so does the Google Cloud Platform team. As you can see from the new 'Releases' page we showed this week, we're introducing the ability to connect your GitHub repository to push-to-deploy. We will register a post-commit hook with GitHub to give you all the deployment benefits without moving your source code. Just push to the master branch of your GitHub repository and your code will be deployed!
Push-to-deploy now supports Java builds
Next, we're excited to introduce new release types for you to use with push-to-deploy. As you can see above, this project is set up to trigger a build, run all your unit tests, and, if they all pass, deploy. Taking a peek under the covers, we use a standard Google Compute Engine virtual machine that you own, running in your project, to perform the build and test. In this case, Google has automatically provisioned the machine with Maven, Jenkins, and everything you need to build and run your tests. Your builds and tests can be as complex as they need to be, and they can use all the resources available on that machine. What's more, all builds are done on a clean machine, ensuring reliable, repeatable builds on every release. We're starting with Maven-based Java builds, but are working to release support for other languages, test frameworks and build systems in the future.
The release history showing build, test and deployment status
Tying this together, we've simplified getting started on your projects by introducing the new 'gcloud' command in the Google Cloud SDK. The Google Cloud SDK contains all the tools and libraries you need to create and manage resources on the Cloud Platform. We recently posted some Cloud SDK tips and tricks. Now, using the 'gcloud init' command, we make setting up a local copy of your project fast by finding your project's associated Git repository and downloading the source code, so all you have to do is change directories and open your favorite editor.
Once you are initialized, all you need to do is start writing code. After you're done, running 'git push' will push your changes back to the remote repository and trigger your push-to-deploy flow. Then, all your changes will also be available to view on the 'Source' page in the Console.

We'll be rolling out these features to you in stages over the coming weeks, but if you are interested in being a trusted tester for any of them, please contact us here. We're very excited to hear how you're using these tools and what features we can build on top of them to make your developer experience even more seamless and fast. We're just getting started.

Cody Bratt is a product manager on the Google Cloud Platform team

Posted by Louis Gray, Googler

By Urs Hölzle, Senior Vice President

Editor's note: Tune in to Google Cloud Platform Live for more information about our announcements. And join us during our 27-city Google Cloud Platform Roadshow, which kicks off in Paris on April 7.

Today, at Google Cloud Platform Live we're introducing the next set of improvements to Cloud Platform: lower and simpler pricing, cloud-based DevOps tooling, Managed Virtual Machines (VM) for App Engine, real-time Big Data analytics with Google BigQuery, and more.

Industry-leading, simplified pricing

The original promise of cloud computing was simple: virtualize hardware, pay only for what you use, with no upfront capital expenditures and lower prices than on-premise solutions. But pricing hasn't followed Moore's Law: over the past five years, hardware costs improved by 20-30% annually but public cloud prices fell at just 8% per year.

We think cloud pricing should track Moore's Law, so we're simplifying and reducing prices for our various on-demand, pay-as-you-go services by 30-85%:
  • Compute Engine reduced by 32% across all sizes, regions, and classes.
  • App Engine pricing simplified, with significant reductions in database operations and front-end compute instances.
  • Cloud Storage is now priced at a consistent 2.6 cents per GB. That's roughly 68% less for most customers.
  • Google BigQuery on-demand prices reduced by 85%.

Sustained-Use discounts

In addition to lower on-demand prices, you'll save even more money with Sustained-Use Discounts for steady-state workloads. Discounts start automatically when you use a VM for over 25% of the month. When you use a VM for an entire month, you save an additional 30% over the new on-demand prices, for a total reduction of 53% over our original prices.
Sustained-Use Discounts automatically reward users who run VMs for over 25% of any calendar month

With our new pricing and sustained use discounts, you get the best performance at the lowest price in the industry. No upfront payments, no lock-in, and no need to predict future use.

Making developers more productive in the cloud

We're also introducing features that make development more productive:
  • Build, test, and release in the cloud, with minimal setup or changes to your workflow. Simply commit a change with git and we'll run a clean build and all unit tests.
  • Aggregated logs across all your instances, with filtering and search tools.
  • Detailed stack traces for bugs, with one-click access to the exact version of the code that caused the issue. You can even make small code changes right in the browser.
We're working on even more features to ensure that our platform is the most productive place for developers. Stay tuned.

Introducing Managed Virtual Machines

You shouldn't have to choose between the flexibility of VMs and the auto-management and scaling provided by App Engine. Managed VMs let you run any binary inside a VM and turn it into a part of your App Engine app with just a few lines of code. App Engine will automatically manage these VMs for you.

Expanded Compute Engine operating system support

We now support Windows Server 2008 R2 on Compute Engine in limited preview, and Red Hat Enterprise Linux and SUSE Linux Enterprise Server are now available to everyone.

Real-Time Big Data

BigQuery lets you run interactive SQL queries against datasets of any size in seconds using a fully managed service, with no setup and no configuration. Starting today, with BigQuery Streaming, you can ingest 100,000 records per second per table with near-instant updates, so you can analyze massive data streams in real time. Yet, BigQuery is very affordable: on-demand queries now only cost $5 per TB and 5 GB/sec reserved query capacity starts at $20,000/month, 75% lower than other providers.


This is an exciting time to be a developer and build apps for a global audience. Today we've focused a lot on productivity, making it easier to build and test in the cloud, using the tools you're already familiar with. Managed VMs give you the freedom to combine flexible VMs with the auto-management of App Engine. BigQuery allows big data analysis to just work, at any scale.

And on top of all of that, we're making it more affordable than it's ever been before, reintroducing Moore's Law to the cloud: the cost of virtualized hardware should fall in line with the cost of the underlying real hardware. And you automatically get discounts for sustained use with no long-term contracts, no lock-in, and no upfront costs, so you get the best price and the best performance without needing a PhD in Finance.

We've made a lot of progress this first quarter and you'll hear even more at Google I/O in June.

Posted by Louis Gray, Googler

By Lucia Fedorova, Google Calendar API Team

Choose your own event IDs

Imagine you work at a package delivery company and are developing a system for automatically assigning deliveries to your employees. Each of your employees has a work calendar in Google Calendar and needs to know when and where to be for the next delivery. Previously, you had to store an ID of an event in the internal deliveries database; otherwise you would not be able to find and update calendar events when work assignments changed. But now you can simply use the delivery ID as the ID of a corresponding event in Google Calendar. The complexity of figuring out which calendar event matches which delivery entirely disappears. This opens up a whole new set of integration options -- for example, when an employee declines the calendar event, Google Calendar can notify you so that you can automatically reschedule the delivery to someone else.

Try it out for yourself: just set the ID field when creating a new single or recurring event via the Calendar API v3 and observe that it sticks! The IDs must still follow a certain format, but don’t worry -- it’s possible to represent almost any content in the ID by using base32hex encoding.
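For example, here is one way to turn an internal delivery ID into an ID in the required alphabet (the helper below is our own; it remaps Python's standard base32 output to the base32hex alphabet, which keeps it compatible with older Python versions that lack b32hexencode):

```python
import base64

_STD = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"  # standard base32 alphabet
_HEX = b"0123456789ABCDEFGHIJKLMNOPQRSTUV"  # base32hex alphabet (RFC 4648)

def to_event_id(s):
    """Encode an arbitrary string using only lowercase base32hex
    characters (a-v and 0-9), the alphabet Calendar event IDs allow."""
    encoded = base64.b32encode(s.encode("utf-8"))
    hexed = encoded.translate(bytes.maketrans(_STD, _HEX))
    return hexed.decode("ascii").rstrip("=").lower()  # drop padding

event_id = to_event_id("delivery-1234")
# The resulting string can then be passed as the 'id' field of the event
# body in an events.insert call, e.g. (hypothetical client usage):
#   service.events().insert(calendarId='primary',
#                           body={'id': event_id, ...}).execute()
```

Because the mapping is deterministic, the same delivery always produces the same event ID, so no lookup table is needed.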

Set up notifications for changes in your calendar

It’s also now possible to use the Calendar API to specify how you want to receive notifications each time an event is added to a calendar or a guest responds to an invitation. The different types of change notifications can be toggled separately, which means you can set up different notification types for new events, changed events, canceled events, response updates and daily agendas. These settings are available for each calendar in the CalendarList collection.
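A minimal sketch of such a request body follows (the particular mix of types toggled here is illustrative; the valid types are eventCreation, eventChange, eventCancellation, eventResponse and agenda, delivered by email):

```python
# Request body for calendarList.patch, toggling a few notification
# types separately for one calendar.
notification_settings = {
    "notificationSettings": {
        "notifications": [
            {"type": "eventCreation", "method": "email"},
            {"type": "eventResponse", "method": "email"},
            {"type": "agenda", "method": "email"},
        ]
    }
}

# With the google-api-python-client you would apply it roughly like:
#   service.calendarList().patch(
#       calendarId="primary", body=notification_settings).execute()
```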

If you are interested in using these new features, check out the Google Calendar API v3 documentation for Events and CalendarList to get started.

Lucia Fedorova is a Tech Lead of the Google Calendar API team. The team focuses on providing a great experience to Google Calendar developers and enabling new and exciting integrations.

Posted by Louis Gray, Googler

By Jen Kovnats, Google Maps API Team

Cross-posted from the Google Geo Developers blog

Maps give us an easy way to visualize all types of information, from patterns in health expenditure across the world, to oceans with the highest concentration of coral reefs at risk. The tools used to create these maps should be just as easy to use. That’s why, starting today, the JavaScript Maps API will support GeoJSON, making it simpler for developers to visualize richer data, with even cleaner code.

GeoJSON has emerged as a popular format for sharing location-based information on the web, and the JavaScript Maps API is embracing this open standard. This means, as a developer, you can now pull raw data from multiple data sources, such as the US Geological Survey or Google Maps Engine, and easily display it on your website.

How does it work? The new Data layer allows you to treat a dataset like… well, a set of data, rather than individual and unrelated features. If you have a GeoJSON file, you can now load it on the map simply by adding a single line of code to your JavaScript:'earthquakes.json');
Earthquakes over the past week loaded on the map

Tada! And what’s more, most places have attributes beyond just location: stores have opening times, rivers have current speed, and each Girl Guide troop has cookie selling turf. The Data layer allows you to represent all attributes in GeoJSON right on the map and make decisions about what data to display more easily.

You can use this information to create a styling function that says, “show the earthquakes as circles, scaled to their magnitude” and as the data or rules are updated, the styling will automatically be applied to every feature. This beats having to manually update each feature or rule as more information is added to the map.
Earthquakes over the past week, with a styling function applied
Earthquakes over the past week, with additional color and basemap styling applied

Get started by checking out our developer docs, the code for these earthquake maps, this cool demo showing data from different sources, and this Google Developers Live video. This is a new feature, so if you run into problems or think of any additions you’d love to see, get help on StackOverflow and check our support page for the right tags to use.

We’re looking forward to seeing what you build with this new tool and, as always, we’re eager for your feedback. Please comment on this post or on our Google+ Page.

Jen Kovnats is a Product Manager on the Maps API team bent on making mapping easy.

Posted by Louis Gray, Googler

By Austin Robison, Android Wear team

Cross-posted from the Android Developers blog

Android Wear extends the Android platform to wearables. These small, powerful devices give users useful information just when they need it. Watches powered by Android Wear respond to spoken questions and commands to provide info and get stuff done. These new devices can help users reach their fitness goals and be their key to a multiscreen world.
We designed Android Wear to bring a common user experience and a consistent developer platform to this new generation of devices. We can’t wait to see what you will build.

Getting started

Your app’s notifications will already appear on Android wearables, and starting today you can sign up for the Android Wear Developer Preview. You can use the provided emulator to preview how your notifications will appear on both square and round Android wearables. The Developer Preview also includes new Android Wear APIs which let you customize and extend your notifications to accept voice replies, feature additional pages, and stack with similar notifications. Head on over to the Android Wear Developer Preview site to sign up and learn more.
For a brief introduction to the developer features of Android Wear, check out these DevBytes videos. They include demos and a discussion about the code snippets driving them.

What’s next?

We’re just getting started with the Android Wear Developer Preview. In the coming months we’ll be launching new APIs and features for Android Wear devices to create even more unique experiences for the wrist.
Join the Android Wear Developers community on Google+ to discuss the Preview and ask questions.
We’re excited to see what you build!
Posted by Louis Gray, Googler

By Dan Ciruli, Google Cloud Platform Team

We strive to make our APIs accessible to anyone on any platform: REST, HTTP and JSON mean that from nearly any language on nearly any hardware, you can call any of our APIs. However, to be truly useful on many platforms, it helps to have a client library -- one that packs in a lot of functionality, like handling auth and streaming media uploads and downloads, and gives you native language idioms.

Today, we are announcing General Availability of the Google APIs Client Library for .NET.

This library is an open-source effort, hosted at NuGet, that lets developers building on the Microsoft® .NET Framework integrate their desktop or Windows Phone applications with Google’s services. It handles OAuth 2.0 integration, streaming uploads and downloads of media, and batching requests. For more than fifty Google APIs, it is the easiest way to get access for any Windows developer. Whether you are plugging Google Calendar into your .NET Framework-based application, translating text in a Windows Phone app or writing a PowerShell script to start Google Compute Engine instances, the Google APIs Client Library for .NET can save you tons of time.

Want to try it out? Visit the Getting Started tutorial. Want to hear more about using Google’s services from .NET? Follow the Google APIs Client Library for .NET blog.

Dan Ciruli is a Product Manager in the Cloud Platform Team intent on making developers' lives easier.

Posted by Louis Gray, Googler

By Dan Lazin, Google Apps Team

Cross-posted from the Google Apps Developer blog

We've just announced Google Docs and Sheets add-ons — new tools created by developers like you that give Google users even more features in their documents and spreadsheets. Joining the launch are more than 50 add-ons that partners have built using Apps Script. Now, we're opening up the platform in a developer-preview phase. If you have a cool idea for Docs and Sheets users, we'd love to publish your code in the add-on store and get it in front of millions of users.

To browse through add-ons for Docs and Sheets, select Get add-ons in the Add-ons menu of any document or spreadsheet. (Add-ons for spreadsheets are only available in the new Google Sheets).

Under the hood

Docs and Sheets add-ons are powered by Google Apps Script, a server-side JavaScript platform that requires zero setup. Even though add-ons are in developer preview right now, the tools and APIs are available to everyone. The only restriction is on final publication to the store.

Once you have a great working prototype in Docs or Sheets, please apply to publish. Scripts that are distributed as add-ons gain a host of benefits:

  • Better discovery: Apps Script has long been popular among programmers and other power users, but difficult for non-technical users to find and install. Add-ons let you distribute your code through a polished storefront—as well as direct links and even Google search results.
  • Sharing: When two people collaborate on a document and one of them uses an add-on, it appears in the Add-ons menu for both to see. Similarly, once you get an add-on from the store, it appears in the menu in every document you create or open, although your collaborators will only see it in documents where you use it. For more info on this sharing model, see the guide to the add-on authorization lifecycle.
  • Automatic updates: When you republish an add-on, the update pushes out automatically to all your users. There's no more hounding people to switch to the latest version.
  • Share functionality without sharing code: Unlike regular Apps Script projects, add-ons don't expose your source code for all to see. That's reassuring both to less-technical users and to the keepers of your codebase's secrets.
  • Enterprise features: If your company has its own Google Apps domain, you can publish add-ons restricted just to your employees. This private distribution channel is a great way for organizations that run on Google Apps to solve their own unique problems.

Beautiful, professional appearance

Thanks to hard work from our developer partners, the add-ons in the store look and feel just like native features of Google Docs and Sheets. We're providing a couple of new resources to help all developers achieve the same visual quality: a CSS package that applies standard Google styling to typography, buttons, and other form elements, and a UI style guide that provides great guidance on designing a Googley user experience.

A replacement for the script gallery

Add-ons are available in the new version of Google Sheets as a replacement for the older version's script gallery. If you have a popular script in the old gallery, now's a great time to upgrade it to newer technology.

We can't wait to see the new uses you'll dream up for add-ons, and we're looking forward to your feedback on Google+ and questions on Stack Overflow. Better yet, if you're free at noon Eastern time this Friday, join us live on YouTube for a special add-on-centric episode of Apps Unscripted.

Dan is a technical writer on the Developer Relations team for Google Apps Script. Before joining Google, he worked as a video-game designer and a newspaper reporter. He has bicycled through 17 countries.

Posted by Louis Gray, Googler

By Chary Chen, Chrome Web Store team

Cross-posted from the Chromium blog

As a developer, you should spend as much of your development time as possible creating great content and services — not managing overhead. Today we're announcing new tools and services in the Chrome Web Store that make it easier to automate the publish process and monetize all of your Chrome Web Store items.

Table 1: Chrome Web Store (CWS) monetization methods by item type

                          Hosted Apps                       Packaged Apps           Extensions & Themes
  Free trial              -                                 ✓  new!                 ✓  new!
  Paid up-front           ✓  new!                           ✓  new!                 ✓  new!
  In-app payments (IAP)   Google Wallet for Digital Goods   CWS Managed IAP  new!   CWS Managed IAP  new!

The Managed In-App Payments feature simplifies the developer experience of our previous solution and expands it to extensions and themes. You can now create and manage all of your in-app products directly in the developer dashboard, instead of having to embed or dynamically generate and serve a payment token for each sale. You can enable or disable products, provide localized descriptions, and set prices for different regions, while the Chrome Web Store manages the licensing.

The Free Trial feature, which is now available for Chrome Packaged Apps and Extensions, allows a developer to specify that an item can be used for a limited time before it must be purchased. This gives users the flexibility to try paid items before deciding to buy them.

In addition to making it easier to monetize your Web Store items, we have now made it easier to publish them. Our Chrome Web Store API has been expanded to allow developers to programmatically create, update and publish items in the Web Store. If you have an automated build and deployment process, we hope you will be able to use this API to integrate the Web Store publishing flow into your existing process.

We’re excited to release these new features, so please give them a try and send your feedback via Stack Overflow, our G+ Developers page, or our developer forum.

Chary Chen is a Software Engineer & developer delighter on the Chrome Web Store team, where she connects great developers with Google-scale users.

Posted by Louis Gray, Googler

Author Picture By Greg DeMichillie, Google Cloud Platform team

Cross-posted from the Google Cloud Platform blog

Google’s global Cloud Platform Developer Roadshow is coming to a city near you. As many of you know, on March 25, we will be making major product announcements at Google Cloud Platform Live. The Roadshow will kick off immediately following this event and will visit 26 cities around the world. If you’d like to attend, register here.

In the roadshow, we will be talking about new approaches to computing that enable you to move beyond traditional divisions of PaaS and IaaS. We will also show how we are creating a developer experience that enables you to work more efficiently as you build, test, and deploy your code.

This is a great opportunity to see behind the scenes of the world's biggest cloud and engage with the international Google Cloud Platform team.

The Roadshow will be visiting Europe, Asia, and North America. We hope you can join us.

Greg DeMichillie has spent his entire career working on developer platforms for web, mobile, and the cloud. He started as a software engineer before making the jump to Product Management. When not coding, he's an avid photographer and gadget geek.

Posted by Louis Gray, Googler

Author PhotoBy Felipe Hoffa, Cloud Platform team

Cross-posted from the Google Cloud Platform Blog

Aggregating numbers by geolocation is a powerful way to analyze your data, but it's not an easy task when you have millions of IP addresses to analyze. In this post, we'll look at how we can use Google BigQuery to quickly solve this use case, using a publicly available dataset.

We take the developer community seriously, and forums like Stack Overflow are a great way for us to see what your use cases are. That's where I found a very interesting question: "user2881671" on Stack Overflow had created a way to transform IP addresses into geographical locations in BigQuery and asked for help optimizing their query. We worked out an optimized solution there, and today I'm happy to present an even better solution.

For example, if you want to see which cities contribute the most modifications to Wikipedia, you can run this query:
SELECT COUNT(*) c, city, countryLabel, NTH(1, latitude) lat, NTH(1, longitude) lng
FROM (
  SELECT contributor_ip,
    INTEGER(PARSE_IP(contributor_ip)) AS clientIpNum,
    INTEGER(PARSE_IP(contributor_ip)/(256*256)) AS classB
  FROM [publicdata:samples.wikipedia]
  WHERE contributor_ip IS NOT NULL
  ) AS a
JOIN EACH [fh-bigquery:geocode.geolite_city_bq_b2b] AS b
ON a.classB = b.classB
WHERE a.clientIpNum BETWEEN b.startIpNum AND b.endIpNum
AND city != ''
GROUP BY city, countryLabel
ORDER BY c DESC
We can visualize the results on a map:

You can do the same operation with your own tables containing IPv4 addresses: just take the previous query and replace [publicdata:samples.wikipedia] with your own table, and contributor_ip with the name of the column containing the IPv4 addresses.

Technical details

First, I downloaded the Creative Commons licensed GeoLite City IPv4 database made available by MaxMind in .csv format. There is a newer database available too, but I didn't work with it, as it's only available in binary form for now. I uploaded its two tables into BigQuery: blocks and locations.

To get better performance later, some processing was needed: for each rule, I extracted its class B prefix (the 192.168.x.x part) into a new column and generated duplicate rules for segments that spanned more than one class B block. I also joined both original tables, to skip that step when processing data. In the Stack Overflow question, "user2881671" went even further, generating additional rules for segments without a location mapping (cleverly using the LAG() window function), but I skipped that step here, so addresses without a location will be skipped rather than counted. In total, only 32,702 new rows were needed.

The final query JOINs the class B prefix from your IP addresses with the lookup table, to prevent the performance hit of doing a full cross join.

You can explore the new table with the BigQuery web UI, or use the REST-based API to integrate these queries and datasets with your own software.

To get started with BigQuery, you can check out our site and the "What is BigQuery" introduction. You can post questions and get quick answers about BigQuery usage and development on Stack Overflow. Follow the latest BigQuery news, and share your feedback and comments: join the discussion on +Google Cloud Platform using the hashtag #BigQuery.

This post includes GeoLite data created by MaxMind, distributed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.

Felipe Hoffa is part of the Cloud Platform Team. He'd love to see the world's data accessible for everyone in BigQuery.

Posted by Scott Knaster, Editor