
Posted by Ido Green, Developer Advocate

There is no higher form of user validation than having customers support your product with their wallets. However, the path to a profitable business is not necessarily an easy one. There are many strategies to pick from and a lot of little things that impact the bottom line. If you are starting a new business (or thinking about how to improve the financial situation of your current startup), we recommend this course we've been working on with Udacity!

This course blends instruction with real-life examples to help you effectively develop, implement, and measure your monetization strategy. By the end of this course you will be able to:

  • Choose & implement a monetization strategy relevant to your service or product.
  • Set performance metrics & monitor the success of a strategy.
  • Know when it might be time to change methods.

Go try it at: udacity.com/course/app-monetization--ud518

We hope you will enjoy and earn from it!


Posted by Thomas Park, Senior Software Engineer, Google BigQuery

Many types of computations can be difficult or impossible to express in SQL. Loops, complex conditionals, and non-trivial string parsing or transformations are all common examples. What can you do when you need to perform these operations but your data lives in a SQL-based big data tool? Is it possible to retain the convenience and speed of keeping your data in a single system, when portions of your logic are a poor fit for SQL?

Google BigQuery is a fully managed, petabyte-scale data analytics service that uses SQL as its query interface. As part of our latest BigQuery release, we are announcing support for executing user-defined functions (UDFs) over your BigQuery data. This gives you the ability to combine the convenience and accessibility of SQL with the option to use a familiar programming language, JavaScript, when SQL isn’t the right tool for the job.

How does it work?

BigQuery UDFs are similar to map functions in MapReduce. They take one row of input and produce zero or more rows of output, potentially with a different schema.
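
For example, a UDF can act as a filter by emitting nothing for some input rows, or fan a single row out into several. Here is a minimal sketch; the function and column names are hypothetical:


// Hypothetical UDF: emits zero output rows for inputs without tags,
// and one output row per tag otherwise. Note that the output schema
// (url, tag) differs from the input schema (url, tags).
function splitTags(r, emit) {
  if (!r.tags) {
    return;  // Zero output rows for this input row.
  }
  var tags = r.tags.split(',');
  for (var i = 0; i < tags.length; i++) {
    emit({url: r.url, tag: tags[i].trim()});
  }
}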

Below is a simple example that performs URL decoding. Although BigQuery provides a number of built-in functions, it does not have a built-in function for decoding URL-encoded strings. However, this functionality is available in JavaScript, so we can extend BigQuery with a simple user-defined function to decode this type of data:


function decodeHelper(s) {
  try {
    return decodeURI(s);
  } catch (ex) {
    return s;
  }
}

// The UDF.
function urlDecode(r, emit) {
  emit({title: decodeHelper(r.title),
        requests: r.num_requests});
}

BigQuery UDFs are functions with two formal parameters. The first parameter is a variable to which each input row will be bound. The second parameter is an “emitter” function. Each time the emitter is invoked with a JavaScript object, that object will be returned as a row to the query.

In the above example, urlDecode is the UDF that will be invoked from BigQuery. It calls a helper function decodeHelper that uses JavaScript’s built-in decodeURI function to transform URL-encoded data into UTF-8.

Note the use of try / catch in decodeHelper. Data is sometimes dirty! If we encounter an error decoding a particular string for any reason, the helper returns the original, un-decoded string.

To make this function visible to BigQuery, it is necessary to include a registration call in your code that describes the function, including its input columns and output schema, and a name that you’ll use to reference the function in your SQL:


bigquery.defineFunction(
   'urlDecode',  // Name used to call the function from SQL.

   ['title', 'num_requests'],  // Input column names.

   // JSON representation of output schema.
   [{name: 'title', type: 'string'},
    {name: 'requests', type: 'integer'}],

   urlDecode  // The UDF reference.
);

The UDF can then be invoked by the name “urlDecode” in the SQL query, with a source table or subquery as an argument. The following query looks for the most-visited French Wikipedia articles from April 2015 that contain a cédille character (ç) in the title:


SELECT requests, title
FROM
  urlDecode(
    SELECT
      title, sum(requests) AS num_requests
    FROM 
      [fh-bigquery:wikipedia.pagecounts_201504]
    WHERE language = 'fr'
    GROUP EACH BY title
  )
WHERE title LIKE '%ç%'
ORDER BY requests DESC
LIMIT 100

This query processes data from a 5.6 billion row / 380 GB dataset and generally runs in less than two minutes. The cost? About $1.37, at the time of this writing.

To use a UDF in a query, it must be described via UserDefinedFunctionResource elements in your JobConfiguration request. UserDefinedFunctionResource elements can either contain inline JavaScript code or pointers to code files stored in Google Cloud Storage.
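
As a rough sketch, the relevant portion of a query job configuration might look like the following; the Cloud Storage URI and query text are placeholders, and you would typically supply either inlineCode or a resourceUri for each code resource:


{
  "configuration": {
    "query": {
      "query": "SELECT requests, title FROM urlDecode(...)",
      "userDefinedFunctionResources": [
        {"resourceUri": "gs://my-bucket/udfs/url_decode.js"},
        {"inlineCode": "/* JavaScript UDF code and bigquery.defineFunction(...) registration call */"}
      ]
    }
  }
}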

Under the hood

JavaScript UDFs are executed on instances of Google V8 running on Google servers. Your code runs close to your data in order to minimize added latency.

You don’t have to worry about provisioning hardware or managing pipelines to deal with data import / export. BigQuery automatically scales with the size of the data being queried in order to provide good query performance.

In addition, you only pay for what you use - there is no need to forecast usage or pre-purchase resources.

Developing your function

Interested in developing your JavaScript UDF without running up your BigQuery bill? Here is a simple browser-based widget that allows you to test and debug UDFs.

Note that not all JavaScript functionality supported in the browser is available in BigQuery. For example, anything related to the browser DOM is unsupported, including Window and Document objects, and any functions that require them, such as atob() / btoa().
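
Because UDFs are ordinary JavaScript, one workaround is to include a pure JavaScript replacement in your own code. As an illustrative sketch, here is a minimal byte-oriented base64 decoder standing in for atob(); it does not handle invalid input or multi-byte UTF-8:


var B64_CHARS =
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

// Decodes a base64 string into a string of single-byte characters.
function base64Decode(s) {
  s = s.replace(/=+$/, '');  // Strip padding.
  var out = '';
  var bits = 0;
  var buffer = 0;
  for (var i = 0; i < s.length; i++) {
    buffer = (buffer << 6) | B64_CHARS.indexOf(s.charAt(i));
    bits += 6;
    if (bits >= 8) {
      bits -= 8;
      out += String.fromCharCode((buffer >> bits) & 0xFF);
    }
  }
  return out;
}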

Tips and tricks

Pre-filter input

In our URL-decoding example, we are passing a subquery as the input to urlDecode rather than the full table. Why?

There are about 5.6 billion rows in [fh-bigquery:wikipedia.pagecounts_201504]. However, one of the query predicates will filter the input data down to the rows where language is “fr” (French) - this is about 262 million rows. If we ran the UDF over the entire table and did the language and cédille filtering in a single WHERE clause, that would cause the JavaScript framework to process over 21 times more rows than it would with the filtered subquery. This equates to a lot of CPU cycles wasted doing unnecessary data conversion and marshalling.

If your input can easily be filtered down before invoking a UDF by using native SQL predicates, doing so will usually lead to a faster (and potentially cheaper) query.

Avoid persistent mutable state

You must not store and access mutable state across UDF executions for different rows. The following contrived example illustrates this error:


// myCode.js
var numRows = 0;

function dontDoThis(r, emit) {
  emit({rowCount: ++numRows});
}

// The query.
SELECT max(rowCount) FROM dontDoThis(myTable);

This is a problem because BigQuery will shard your query across multiple nodes, each of which has independent V8 instances and will therefore accumulate separate values for numRows.
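
If you need a cross-row aggregate, a safer sketch keeps the UDF stateless: each invocation emits a value derived only from its own row, and the aggregation happens in SQL, which BigQuery can shard correctly:


// myCode.js
function doThisInstead(r, emit) {
  // Depends only on the current row; no state shared across rows.
  emit({present: 1});
}

// The query.
SELECT SUM(present) FROM doThisInstead(myTable);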

Expand select *

You cannot execute SELECT * FROM urlDecode(...) at this time; you must explicitly list the columns being selected from the UDF: SELECT requests, title FROM urlDecode(...)

For more information about BigQuery User-Defined Functions, see the full feature documentation.


Posted by Laurence Moroney, Developer Advocate

In this episode of Coffee With a Googler, Laurence meets with Developer Advocate Timothy Jordan to talk about all things Ubiquitous Computing at Google. Learn about the platforms and services that help developers reach their users wherever it makes sense.

We discuss Brillo, which extends the Android Platform to 'Internet of Things' embedded devices, as well as Weave, which is a services layer that helps all those devices work together seamlessly.

We also chat about beacons and how they can give context to the world around you, making the user experience simpler. Traditionally, users need to tell you about their location, and other types of context. But with beacons, the environment can speak to you. When it comes to developing for beacons, Timothy introduces us to Eddystone, a protocol specification for Bluetooth Low Energy (BLE) beacons, the Proximity Beacon API that allows developers to register a beacon and associate data with it, and the Nearby Messages API which helps your app 'sight' and retrieve data about nearby beacons.

Timothy and his team have produced a new Udacity series on ubiquitous computing that you can access for free! Take the course to learn more about ubiquitous computing, the design paradigms involved, and the technical specifics for extending to Android Wear, Google Cast, Android TV, and Android Auto.

Also, don't forget to join us for a ubiquitous computing summit on November 9th & 10th in San Francisco. Sign up here and we'll keep you updated.


Posted by Larry Yang, Lead Product Manager, Project Tango

At Google I/O, we showed the world many of the cool things you can do with Project Tango. Now you can experience it yourself by downloading these apps from Google Play onto your Project Tango Tablet Development Kit.

A few examples of creative experiences include:

MeasureIt is a sample application that shows how easy it is to measure general distances. Just point a Project Tango device at two or more points. No tape measures or step ladders required.

Constructor is a sample 3D content creation tool where you can scan a room and save the scan for further use.

Tangosaurs lets you walk around and dig up hidden fossils that unlock a portal into a virtual dinosaur world.

Tango Village and Multiplayer VR are simple apps that demonstrate how Project Tango’s motion tracking enables you to walk around VR worlds without requiring an input device.

Tango Blaster lets you blast swarms of robots in a virtual world, and can even work with the Tango device mounted on a toy gun.

We also showed a few partner apps that are now available on Google Play. Break A Leg is a fun VR experience where you’re a magician performing tricks on stage.

SideKick’s Castle Defender uses Project Tango’s depth perception capability to place a virtual world onto a physical playing surface.

Defective Studio’s VRMT is a world-building sandbox designed to let anyone create, collaborate on, and share their own virtual worlds and experiences. VRMT gives you libraries of props and intuitive tools to make the virtual creation process as streamlined as possible.

We hope these applications inspire you to use Project Tango’s motion tracking, area learning and depth perception technologies to create 3D experiences that explore the physical space around the user, including precise navigation without GPS, windows into virtual 3D worlds, measurement of spaces, and games that know where they are in the room and what’s around them.

As we mentioned in our previous post, Project Tango Tablet Development Kits will go on sale in the Google Store in Denmark, Finland, France, Germany, Ireland, Italy, Norway, Sweden, Switzerland and the United Kingdom starting August 26.

We have a lot more to share over the coming months! Sign up for our monthly newsletter to keep up with the latest news. Connect with the 5,000 other developers in our Google+ community. Get help from other developers by using the Project Tango tag in Stack Overflow. See what others are creating on our YouTube channel. And share your story on Twitter with #ProjectTango.

Join us on our journey.


Posted by Taylor Savage, Product Manager

We’re excited to announce that the full speaker list and talk schedule have been released for the first ever Polymer Summit! Find the latest details on our newly launched site here. Look forward to talks about topics like building full apps with Polymer, Polymer and ES6, adaptive UI with Material Design, and performance patterns in Polymer.

The Polymer Summit will start on Monday, September 14th with an evening of Code Labs, followed by a full day of talks on Tuesday, September 15th. All of this will be happening at the Muziekgebouw aan ‘t IJ, right on the IJ river in downtown Amsterdam. All tickets to the summit were claimed on the first day, but you can sign up for the waitlist to be notified, should any more tickets become available.

Can’t make it to the summit? Sign up here if you’d like to receive updates on the livestream and tune in live on September 15th on polymer-project.org/summit. We’ll also be publishing all of the talks as videos on the Google Developers YouTube Channel.


Posted by Hoi Lam, Developer Advocate

If your users’ devices know where they are in the world – the place that they’re at, or the objects they’re near – then your app can adapt or deliver helpful information when it matters most. Beacons are a great way to explicitly label real-world locations and contexts, but how does your app get the message that it’s at platform 9 rather than the shopping mall, or that the user is standing in front of a food truck rather than just hanging out in the parking lot?

With the Google beacon platform, you can associate information with registered beacons by using attachments in Proximity Beacon API, and serve those attachments back to users’ devices as messages via the Nearby Messages API. In this blog post, we will focus on how we can use attachments and messages most effectively, making our apps more context-aware.

Think per message, not per beacon

Suppose you are creating an app for a large train station. You’ll want to provide different information to the user who just arrived and is looking for the ticket machine, as opposed to the user who just wants to know where to stand to be the closest to her reserved seat. In this instance, you’ll want more than one beacon to label important places, such as the platform, entrance hall and waiting area. Some of the attachments for each beacon will be the same (e.g. the station name), others will be different (e.g. platform number). This is where the design of Proximity Beacon API, and the Nearby Messages API in Android and iOS helps you out.

When your app retrieves the beacon attachments via the Nearby Messages API, each attachment will appear as an individual message, not grouped by beacon. In addition, Nearby Messages will automatically de-duplicate any attachments (even if they come from different beacons).

This design has several advantages:

  • It abstracts the API away from the implementation (a beacon, in this case), so if other kinds of devices send out messages in the future, they can be adopted easily.
  • Built-in deduplication means you do not need to write your own logic to avoid reacting twice to the same message, such as the station name in the above example.
  • You can add finer-grained context messages later on, without re-deploying.

In designing your beacon user experience, think about the context of your user and the places and objects that are important for your app, and then label those places. The Proximity Beacon API makes beacon management easy, and the Nearby Messages API abstracts the hardware away, allowing you to focus on creating relevant and timely experiences. The beacons themselves should be transparent to the user.

Using beacon attachments with external resources

In most cases, the data you store in attachments will be self-contained and will not need to refer to an external database. However, there are several exceptions where you might want to keep some data separately:

  • Large data items such as pictures and videos.
  • Where the data resides on a third party database system that you do not control.
  • Confidential or sensitive data that should not be stored in beacon attachments.
  • If you run a proprietary authentication system that relies on your own database.

In these cases, you might need to use a more generic identifier in the beacon attachment to look up the relevant data from your infrastructure.
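
For illustration only, such an attachment could carry nothing but a lookup key; the namespace, type, and payload below are hypothetical, and the Proximity Beacon API expects the data field to be base64-encoded:


{
  "namespacedType": "com.example-project/food-truck",
  "data": "eyJ0cnVja0lkIjogInRydWNrLTQyIn0="
}


Here the data field is the base64 encoding of {"truckId": "truck-42"}; the app reads the identifier out of the Nearby message and fetches menus, photos, or other large or sensitive details from its own backend.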

Combining the virtual and the real worlds

With beacons, we have an opportunity to delight users by connecting the virtual world of personalization and contextual awareness with real world places and things that matter most. Through attachments, the Google beacon platform delivers a much richer context for your app that goes beyond the beacon identifier and enables your apps to better serve your users. Let’s build the apps that connect the two worlds!


Posted by Laurence Moroney, Developer Advocate

In this week’s Coffee with a Googler, we’re joined by Heidi Dohse from the Google Cloud team to talk about how she saved her own life through technology. At Google, she works on the Cloud Platform team that supports our genomics work, and she has a passion for the future of health care delivery.

When she was a child, Heidi Dohse had an erratic heartbeat, but, not knowing anything about it, she simply ignored it. As a teen she became a competitive windsurfer and skier, and during a knee surgery between seasons, an EKG revealed that her heart was beating irregularly at 270 bpm.

She had experimental surgery, was immediately given a pacemaker, and became a young heart patient not expecting to live much longer. Years later, Heidi is now on her 7th pacemaker, but it doesn’t stop her from staying active as an athlete. She’s a competitive cyclist, often racing hundreds of miles and climbing tens of thousands of feet as she races.

At the core of all this is her use of technology. She has carefully managed her health by collecting and analyzing the data from her pacemaker. The data goes beyond just heartbeat and includes readings such as gyroscope data, oxygen utilization, muscle stimulation and electrical impulses, all of which help her proactively manage her health.

It’s the future of health care -- instead of seeing a doctor for an EKG every few months, one can use this technology and other wearables to check health data continuously, know ahead of time about potential health issues, and pre-empt them before they become serious.

Learn more in the video.


Posted by Laurence Moroney, Developer Advocate

Over the past few months, we’ve been bringing you 'Coffee with a Googler', giving you a peek at people working on cool stuff that you might find inspirational and useful. Starting with this week’s episode, we’re going to accompany each video with a short post for more details, while also helping you make the most of the tech highlighted in the video.

This week we meet with MacDuff Hughes from the Google Translate team. Google Translate uses statistics-based translation: by finding very large numbers of example translations from one language to another, it uses statistics to see how various phrases are treated, so it can make reasonable estimates of correct, natural-sounding phrases in the target language. For common phrases, there are many candidate translations, so the engine chooses among them within the context of the passage that the phrase is in. Images can also be translated: point your mobile device at printed text and it will be translated into your preferred language for you.

Translate is adding languages all the time, and part of its mission is to serve languages that are less frequently used, such as Gaelic, Welsh or Maori, in the same manner as the more commonly used ones, such as English, Spanish and French. To this end, Translate supports more than 90 languages. In the quest to constantly improve translation, the technology provides a way for the community to validate translations; this is particularly useful for less commonly used languages, effectively helping them to grow and thrive. Having people involved also enhances the machine translation.

You can learn more about Google Translate and the Translate app here.

Developers have a couple of options for using Translate:

  • The Free Website Translate plugin that you can add to your site and have translations available immediately.
  • The Cloud-based Translate API that you can use to build apps or sites that have translation in them; a minimal usage sketch follows below.
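
As a sketch of the second option, the snippet below calls the Translate API (v2) from JavaScript; YOUR_API_KEY is a placeholder for a key created in the Google Developers Console:


// Translate 'Hello world' into French with the Translate API v2.
var url = 'https://www.googleapis.com/language/translate/v2' +
    '?key=YOUR_API_KEY' +
    '&target=fr' +
    '&q=' + encodeURIComponent('Hello world');

var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.onload = function() {
  var result = JSON.parse(xhr.responseText);
  // e.g. 'Bonjour le monde'
  console.log(result.data.translations[0].translatedText);
};
xhr.send();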

Watch the episode here:



Posted by Ori Weinroth, Google Cloud Platform Marketing

As a CTO, VP of R&D, or CIO at a technology startup, you typically need to maneuver and make the most out of limited budgets. Chances are, you’ve never had your CEO walk in and tell you, “We’ve just closed our Series A round. You now have unlimited funding to launch version 1.5.”

So how do you extract real value from what you’re given to work with? We’re gathering four startup technology leaders for a free webinar discussion on exactly that: their strategies and tactics for operating lean. They will cover key challenges and share tips and tricks for:

  • Reducing burn rate by making smart tech choices
  • Getting the most out of a critical but finite resource - your dev team
  • Avoiding vendor lock-in so as to maximize cost efficiencies

We’ve invited technology leaders from some of today’s most dynamic startups.

Sign up for our Lean Ops Webinar in your timezone to hear their take:

Americas
Wednesday, 13 August 2015
11:00 AM PT
[Click here to register]

Europe, Middle East and Africa
Wednesday, 13 August 2015
10:00 AM (UK), 11:00 AM (France), 12:00 PM (Israel)
[Click here to register]

Asia Pacific
Wednesday, 13 August 2015
10:30 AM (India), 1:00 PM (Singapore/Hong Kong), 3:00 PM (Sydney, AEST)
[Click here to register]

Our moderator will be Amir Shevat, senior program manager at Google Developer Relations. We look forward to an insightful and open discussion and hope you can attend.


Posted by Larry Yang, Product Manager, Project Tango

Project Tango Tablet Development Kits are available in South Korea and Canada starting today, and on August 26 they will be available in Denmark, Finland, France, Germany, Ireland, Italy, Norway, Sweden, Switzerland, and the United Kingdom. The dev kit is intended for software developers only. To order a device, visit the Google Store.

Project Tango is a mobile platform that uses computer vision to give devices the ability to sense the world around them. The Project Tango Tablet Development Kit is a standard Android device plus a wide-angle camera, a depth sensing camera, accurate sensor timestamping, and a software stack that exposes this new computer vision technology to application developers. Learn more on our website.

The Project Tango community is growing. We’ve shipped more than 3,000 developer devices so far. Developers have already created hundreds of applications that enable users to explore the physical space around them, including precise navigation without GPS, windows into virtual 3D worlds, measurement of physical spaces, and games that know where they are in the room and what’s around them. And we have an app development contest in progress right now.

We’ve released 13 software updates that make it easier to create Area Learning experiences with new capabilities such as high-accuracy and building-scale ADFs, more accurate re-localization, indoor navigation, and GPS/maps alignment. Depth Perception improvements include the addition of real-time textured and Unity meshing. Unity developers can take advantage of an improved Unity lifecycle. The updates have also included improvements in IMU characterization, performance, thermal management and drift reduction. Visit our developer site for details.

We have a lot more to share over the coming months. Sign up for our monthly newsletter to keep up with the latest news. Join the conversation in our Google+ community. Get help from other developers by using the Project Tango tag in Stack Overflow. See what others are saying on our YouTube channel. And share your story on Twitter with #ProjectTango.

Join us on our journey.