<h1>Google for Developers Blog - News about Web, Mobile, AI and Cloud</h1>
<p><em>Explore the Google for Developers Blog, which provides insights and the latest news about our AI, Cloud, Mobile, and Web app development announcements.</em></p>
<h2>Tune in for Google I/O on May 14</h2>
<p><em>March 14, 2024</em></p>
<em>Posted by Jeanine Banks – VP & General Manager, Developer X, and Head of Developer Relations</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilIyz8XnRK4TS2rI70Dol6xTJBWXj-O0XTj1JskV_6uJzS8QMJ3qMeERLU4F8VmwxxsOBkzzS85TPPnyzUeMpVH29gzO_WfaDg7eU6QXAd4_8D94kk9wQjR9Wr0UeCOrcVlufIrEkCkeYZRK4C12WlFzTv0VokFbuxH1h7a6oNNzVxDjNk51JH9rrxEpU/s1600/IO24_SVD%20Blog%20Banner%2002_4209%E2%80%8A%C3%97%E2%80%8A1253%20%281%29.png" imageanchor="1" ><img style="width: 100%;" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilIyz8XnRK4TS2rI70Dol6xTJBWXj-O0XTj1JskV_6uJzS8QMJ3qMeERLU4F8VmwxxsOBkzzS85TPPnyzUeMpVH29gzO_WfaDg7eU6QXAd4_8D94kk9wQjR9Wr0UeCOrcVlufIrEkCkeYZRK4C12WlFzTv0VokFbuxH1h7a6oNNzVxDjNk51JH9rrxEpU/s1600/IO24_SVD%20Blog%20Banner%2002_4209%E2%80%8A%C3%97%E2%80%8A1253%20%281%29.png" data-original-width="1058" data-original-height="800" /></a>
<p><a href="https://io.google/2024/?utm_source=blogpost&utm_medium=pr&utm_campaign=svd-gfd&utm_content=" target="_blank">Google I/O</a> is arriving this year on May 14th and you’re invited to join us online! I/O offers something for everyone, whether you are developing a new application, modernizing an existing one, or transforming it into a business.</p>
<p>The Gemini era unlocks new possibilities for developers to build creative and productive AI-enabled applications. I/O is where you’ll hear how you can get from idea to production AI applications faster. We’re excited to share what’s new for mobile, web, and multiplatform development, and how to scale your applications in the cloud. You will be able to dive deeper into topics that interest you with over 100 sessions, workshops, codelabs, and demos.</p>
<p>Visit the <a href="https://io.google/2024/?utm_source=blogpost&utm_medium=pr&utm_campaign=svd-gfd&utm_content=" target="_blank">Google I/O</a> site and register to stay informed about I/O and other related events coming soon. The livestreamed keynotes start May 14 at 10am PT, so mark your calendar.</p>
<p>If you haven’t already, go try out our newest <a href="https://io.google/2024/puzzle/?utm_source=blogpost&utm_medium=pr&utm_campaign=svd-gfd&utm_content=" target="_blank">Google I/O puzzle</a> and head to <a href="https://www.instagram.com/googlefordevs/" target="_blank">@googlefordevs on Instagram</a> if you need a hint.</p>
<h2>GDE Women’s History Month Feature: Gema Parreño Piqueras, AI/ML GDE</h2>
<p><em>March 13, 2024</em></p>
<em>Posted by Justyna Politanska-Pyszko – Program Manager, Google Developer Experts </em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfJyZsKv_jGEhHLLJK6bddMbY28IffgQs49R1iX9KPzge82Gwe3L5SavLQ4xJqeVQiX1vvsN86fcEgOdMMgsCXdW5l_lUnbeuS8AVY6Prsrqv11XNB49ni7mis-i-Yt3_Z_jGONSdEEWy4GL5Ca78aRLP8me4RXtxMRsjVBs5PKTQeewpoVASrAuXOWDA/s1600/GDE-IWD-Profile-Banner%20%289%29.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfJyZsKv_jGEhHLLJK6bddMbY28IffgQs49R1iX9KPzge82Gwe3L5SavLQ4xJqeVQiX1vvsN86fcEgOdMMgsCXdW5l_lUnbeuS8AVY6Prsrqv11XNB49ni7mis-i-Yt3_Z_jGONSdEEWy4GL5Ca78aRLP8me4RXtxMRsjVBs5PKTQeewpoVASrAuXOWDA/s1600/GDE-IWD-Profile-Banner%20%289%29.png" /></a>
<h5><span style="font-size: large;">For Women's History Month, we're shining a spotlight on Gema Parreño Piqueras, an AI/ML Google Developer Expert (GDE) from Madrid, Spain. GDEs are recognized by Google for their outstanding technical expertise and passion for sharing knowledge.</span></h5><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Gema Parreño Piqueras, AI/ML GDE, Madrid, Spain" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcJ1OCXDBw0I-B1vSWLpbpku0bHeG7F24QqdoLIJZrlCE5Bba5SW7hZ9qv7XL9oKnSdDmZKWAaOPv4EmlLC0Zk6SbY007OzyRfAqFZQ9SGI-kbSgmmnRUyzXCCnbJHNmnJwQqvcNps5O7DizZdSxtzdy45PTl6Mj012_4y8yhB6pbnhgsvrbYYLlQzlT0/s1600/image2.jpg" style="margin: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Gema Parreño Piqueras, AI/ML GDE, Madrid, Spain </em></td></tr></tbody></table>
<p>Gema's dedication to the GDE program makes her a true leader within the Google Developers community, and her work in Artificial Intelligence and Machine Learning pushes the boundaries of Google's technological capabilities.</p>
<p>Gema is a force to be reckoned with in the world of data science. As a data scientist at Izertis and a GDE, she's not only making significant contributions to the field of AI/ML but also blazing a trail for women in tech. Her unique background in architecture and her passion for problem-solving led her to an impressive career in AI/ML and to an extraordinary project: helping NASA track asteroids! Learn more about her projects incorporating AI:</p>
<h4>NASA Project: <a href="https://2016.spaceappschallenge.org/challenges/solar-system/near-earth-objects-machine-learning/projects/deep-asteriod" target="_blank">Deep Asteroid</a></h4>
<p>Gema's architectural skills proved invaluable when she turned her <a href="https://github.com/SoyGema/Deep_Asteroid" target="_blank">attention to AI</a>. In 2016, she created the program Deep Asteroid for NASA's International Space Apps Challenge. This innovative program assists scientists in detecting, tracking, and classifying asteroids, potentially protecting our planet from future threats.</p>
<h4>Journey to AI/ML</h4>
<p>Intrigued by the potential of AI, Gema embarked on a journey that merged her architectural background with cutting-edge technology. Her experience with 3D modeling translated seamlessly into the world of machine learning, giving her a fresh perspective. Over the past seven years, she's overcome challenges and established herself as a true expert.</p>
<p>As a Google Developer Expert, Gema has found a vibrant community that has fueled her growth. She has attended numerous GDE events throughout Europe and had the opportunity to collaborate with Google teams. This experience was instrumental in the development of Deep Asteroid, demonstrating the power of community and access to advanced technology.</p>
<p>Gema’s advice for women aspiring to enter the field is simple and powerful: "Don't be afraid to experiment, fail, and learn from those failures. Persistence and a willingness to dive into the unknown are what will set you apart." Gema encourages women to find supportive communities, like the GDE program, where they can network, learn, and grow.</p>
<p>You can find Gema on <a href="https://www.linkedin.com/in/gemaparreno/" target="_blank">LinkedIn</a>, <a href="https://github.com/SoyGema" target="_blank">GitHub</a>, and <a href="https://www.twitter.com/SoyGema" target="_blank">X (formerly known as Twitter)</a>.</p><br />
<p><em>The Google Developer Experts (GDE) program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.</em></p>
<h2>Google for Games is coming to GDC 2024</h2>
<p><em>March 12, 2024</em></p>
<em>Posted by Aurash Mahbod – General Manager, Games on Google Play</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju8hT5PNGAGnezRnKQ_W9TVWFKTiPq9nFU46UZA_tIqf0uoNSkiGODGNbDw5vxuQedh_5XqDXx6VdsLUTNlUv6CruDvpeTatf5KOxFutGMy85lzt76sVCNomV6zlpZCo22C9Qq6KoKopwDmcCL6_RD0_Fg0A0IvHiKO3QgDYNp9lheeMefazoQUZVJEJA/s1600/GGDS-blog-banner-3%20%281%29.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju8hT5PNGAGnezRnKQ_W9TVWFKTiPq9nFU46UZA_tIqf0uoNSkiGODGNbDw5vxuQedh_5XqDXx6VdsLUTNlUv6CruDvpeTatf5KOxFutGMy85lzt76sVCNomV6zlpZCo22C9Qq6KoKopwDmcCL6_RD0_Fg0A0IvHiKO3QgDYNp9lheeMefazoQUZVJEJA/s1600/GGDS-blog-banner-3%20%281%29.png" /></a>
<p>Google for Games is coming to GDC in San Francisco! Join us on <b>March 19</b> for the Game Developers Conference (GDC) at the Moscone Center, where game developers from across the world will gather to learn, network, problem-solve, and help shape the future of the industry. <b>From March 18 to March 22</b>, experience our comprehensive suite of multi-platform game development tools and explore the new features from Play Pass at the West Hall, Level 2 Lobby.</p>
<p>This year, <a href="https://gamedevsummit.withgoogle.com/#gdc" target="_blank">we’re proud to host eight sessions</a> for developers, designers, business and marketing teams, and everyone else in the gaming community with an interest in growing their game business. Take a look at this year’s sessions below, and if you’re interested in learning more about topics from Google Play and Android, check out <a href="https://android-developers.googleblog.com/2024/03/key-product-updates-from-2024-google-for-games-developer-summit.html" target="_blank">key product updates</a> from the Google for Games Developer Summit.</p><br/>
<h3>Scaling your game development</h3>
<p>We’re hosting three sessions designed to help scale your game development using tools from Firebase, Android, and Google Cloud. Learn more about building high quality games with case studies from industry experts.</p><br/>
<h4>Beyond "Set and Forget": Advanced Debugging with Firebase Crashlytics</h4>
<p><b>Tuesday, March 19, 9:30 am - 10:00 am</b> </p>
<div style="text-align: left;"><b>Speaker:</b> Joe Spiro (Developer Relations Engineer, Google) </div><div style="text-align: left;">
<p>Crashlytics has added a number of features that make detecting, tracking, and understanding bugs even easier, from high-level to native code. <a href="https://schedule.gdconf.com/session/google-developer-summit-beyond-set-and-forget-advanced-debugging-with-firebase-crashlytics-presented-by-google/903102" target="_blank">Take your fixes to another level</a> with native stack traces, memory debugging, issue annotation, and the ability to log uncaught exceptions as fatal.</p><br/>
<h4>Enhancing Game Performance: Vulkan and Android Adaptability Technology</h4>
<p><b>Tuesday, March 19, 10:50 am - 11:50 am</b> </p>
<div style="text-align: left;"><b>Speakers:</b> Dohyun Kim (Developer Relations Engineer, Android Games, Google), Hak Matsuda (Developer Relations Engineer, Android Games, Google), Jungwoo Kim (Principal Engineer, Samsung), Syed Farhan Hassan (Software Engineer, ARM) </div><div style="text-align: left;">
<p>Learn how to <a href="https://schedule.gdconf.com/session/google-developer-summit-enhancing-game-performance-vulkan-and-android-adaptability-technology-presented-by-google/903176" target="_blank">leverage the Vulkan graphics API</a> to improve your graphics quality or performance, including performance tuning with dynamic upscaling. Find out how the Android Dynamic Performance Framework (ADPF) can enhance game performance and power in Unity and native C++, with easy integration through the Unreal Engine plugin. We're also sharing how NCSoft Lineage W improved thermal status and performance using ADPF.</p><br/>
<h4>Creating a global-scale game with Google Cloud</h4>
<p><b>Tuesday, March 19, 4:40 pm - 5:10 pm</b> </p>
<div style="text-align: left;"><b>Speaker:</b> Mark Mandel (Developer Advocate, Google) </div><div style="text-align: left;">
<p>This session will cover the best of Google Cloud's open source projects (Agones, Open Match, and more) and products (GKE, Spanner, Anthos Service Mesh, Cloud Build, Cloud Deploy, and more) to teach you how to build, deploy, and scale world-scale multiplayer games with Google Cloud.</p><br/>
<h3>Increasing user engagement</h3>
<p>We’re hosting two sessions designed to help you increase engagement by creating dynamic gameplay experiences using generative AI and expanding opportunities on Google Play to grow your community of players with exclusive rewards.</p>
<h4>Reimagine the Future of Gaming with Google AI</h4>
<p><b>Tuesday, March 19, 10:50 am - 11:50 am</b> </p>
<div style="text-align: left;"><b>Speakers:</b> Gus Martins (Developer Advocate, Google), Dan Zaratsian (AI/ML Solutions Architect, Google), Lei Zhang (Director, Play Partnerships, Global GenAI & Greater China Play Partnerships, Google), Jack Buser (Director, Game Industry Solutions), Simon Tokumine (Director of Product Management, Google AI), Giovane Moura Jr. (App Modernization Specialist, Google), Moonlit Beshinov (Head of Google for Games Partnerships and Industry Strategy, Google) </div><div style="text-align: left;">
<p>In <a href="https://schedule.gdconf.com/session/next-gen-tech-forum-reimagine-the-future-of-gaming-with-google-ai-presented-by-google/903410" target="_blank">our keynote session</a>, senior executives from Google Cloud, Google Play, and Labs will share their unique perspectives on generative AI in the gaming landscape. Learn more about cutting-edge AI solutions from Google Cloud, Android, Google Play, and Labs designed to simplify game development, publishing, and business operations, plus actionable strategies to leverage AI for faster development, better player experiences, and sustainable growth.</p>
<h4>Grow your community of loyal gamers with Google Play</h4>
<p><b>Tuesday, March 19, 1:20 pm - 1:50 pm</b> </p>
<div style="text-align: left;"><b>Speaker:</b> Tom Grinsted (Group Product Manager, Google Play Games, Google) </div><div style="text-align: left;">
<p>In <a href="https://schedule.gdconf.com/session/google-developer-summit-growing-your-community-of-loyal-gamers-with-google-play-presented-by-google/903293" target="_blank">this session</a>, we’ll cover new features and insights from Google Play to create rewarding experiences for gamers using Play Pass, Play Points, and Play Games Services. Get a behind-the-scenes look at how Google Play rewards a growing community of passionate gamers, and how to use this to super-charge your business.</p><br/>
<h3>Maximizing reach across screens</h3>
<p>These sessions, from Google Play, Android, and Flutter, introduce ways to expand your mobile games to PC. Learn about the latest tools that will help you accelerate growth across large screens.</p>
<h4>Bringing more users to your Google Play Games on PC game</h4>
<p><b>Tuesday, March 19, 2:10 pm - 2:40 pm</b> </p>
<div style="text-align: left;"><b>Speakers:</b> Aly Hung (Developer Relations Engineer, Android and Google Play, Google), Dara Monasch (Product Manager, Google), Justin Gardner (Partner Program Manager, App Attribution, Google) </div><div style="text-align: left;">
<p><a href="https://schedule.gdconf.com/session/google-developer-summit-bringing-more-users-to-your-google-play-games-on-pc-game-presented-by-google/903178" target="_blank">Join us for an overview</a> of Google Play Games on PC, how it has grown in the past year, and a walkthrough of how to optimize and attribute your PC advertisements for your Google Play Games on PC titles. Learn how to use Google Play Games to increase your reach and acquisition of PC users for your mobile game, as well as how to effectively use the Google Play Install Referrer API to attribute and optimize your ads across mobile and PC.</p>
<h4>Android input on desktop: How to delight your users</h4>
<p><b>Tuesday, March 19, 3:00 pm - 3:30 pm</b> </p>
<div style="text-align: left;"><b>Speakers:</b> Shenshen Cui (Staff Developer Relations Engineer, Google), Patrick Martin (Developer Relations Engineer, Google) </div><div style="text-align: left;">
<p>Give your players a first-class gaming experience with <a href="https://schedule.gdconf.com/session/google-developer-summit-android-input-on-desktop-how-to-delight-your-users-presented-by-google/903177" target="_blank">our best practices</a> for handling input between mobile and PC games, including technical details on how to implement these best practices across mobile, tablets, Chromebooks and Windows PCs<sup>1</sup>. Learn how Android handles keyboard, mouse, and controller input across different form factors, with case studies for designing for both touch and hardware input.</p>
<h4>Building Multiplatform Games with Flutter</h4>
<p><b>Tuesday, March 19, 3:50 pm - 4:20 pm</b> </p>
<div style="text-align: left;"><b>Speakers:</b> Zoey Fan (Senior Product Manager, Flutter, Google), Brett Morgan (Developer Relations Engineer, Google) </div><div style="text-align: left;">
<p>Learn why <a href="https://schedule.gdconf.com/session/google-developer-summit-building-multiplatform-games-with-flutter-presented-by-google/903077" target="_blank">game developers are choosing Flutter</a> to build casual games on mobile, desktop, and web browsers. We’ll cover the Casual Games Toolkit, a collection of free, open-source tools, templates, and resources that make game development more productive with Flutter.</p>
<p><a href="https://gamedevsummit.withgoogle.com/#gdc" target="_blank">Learn more about all of our sessions</a> coming to you on March 19 at GDC in San Francisco.</p><br/>
________________
<b></b><p><em><small><sup>1</sup>Windows is a trademark of the Microsoft group of companies.</small></em></p>
<h2>Large Language Models On-Device with MediaPipe and TensorFlow Lite</h2>
<p><em>March 7, 2024</em></p>
<em>Posted by Mark Sherwood – Senior Product Manager and Juhyun Lee – Staff Software Engineer</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzLcDVUnZ2wwpFNcZdXmk-TeyMX17QTryhHZVoxXFsK9_tpZyjCfysAoBUinOy6npUOQSmtzGxeLd6jKX3Bvcm1JLSJbbNBg1-Gt5WQZlLCrjRVZgC10eKhWwyK5mWKCuN0XxNuH6oyuhfOl9OwQjDeMmSj3fUiSnOEMFop5ukG38JJUhMb_3Ud82c87U/s1600/HEADER-running-large-models-on-device-with-media-pipe-and-tensorflow-lite.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzLcDVUnZ2wwpFNcZdXmk-TeyMX17QTryhHZVoxXFsK9_tpZyjCfysAoBUinOy6npUOQSmtzGxeLd6jKX3Bvcm1JLSJbbNBg1-Gt5WQZlLCrjRVZgC10eKhWwyK5mWKCuN0XxNuH6oyuhfOl9OwQjDeMmSj3fUiSnOEMFop5ukG38JJUhMb_3Ud82c87U/s1600/HEADER-running-large-models-on-device-with-media-pipe-and-tensorflow-lite.png" /></a>
<p><a href="https://www.tensorflow.org/lite" target="_blank">TensorFlow Lite</a> has been a powerful tool for on-device machine learning since its <a href="https://developers.googleblog.com/2017/11/announcing-tensorflow-lite.html" target="_blank">release in 2017</a>, and <a href="https://developers.google.com/mediapipe" target="_blank">MediaPipe</a> further extended that power in <a href="https://blog.research.google/2019/08/on-device-real-time-hand-tracking-with.html" target="_blank">2019</a> by supporting complete ML pipelines. While these tools initially focused on smaller on-device models, today marks a dramatic shift with the <a href="https://developers.google.com/mediapipe/solutions/genai/llm_inference" target="_blank">experimental MediaPipe LLM Inference API</a>.</p>
<p>This new release enables Large Language Models (LLMs) to run fully on-device across platforms. This new capability is particularly transformative considering the memory and compute demands of LLMs, which are over a hundred times larger than traditional on-device models. Optimizations across the on-device stack make this possible, including new ops, quantization, caching, and weight sharing.</p>
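<p>Quantization is one illustrative piece of that stack: storing weights as 8-bit integers instead of 32-bit floats cuts memory use roughly 4x. The following is a minimal, self-contained sketch of symmetric int8 quantization for intuition only; the schemes MediaPipe actually applies are more sophisticated than this.</p>

```python
# Illustrative sketch of symmetric int8 weight quantization.
# Not MediaPipe's actual implementation; shown only to convey the idea.

def quantize_int8(weights):
    """Map float weights to int8 values in [-127, 127] plus one float scale.

    Assumes at least one weight is nonzero.
    """
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize_int8(quantized, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [q * scale for q in quantized]

weights = [0.02, -1.27, 0.635, 0.0]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
# Each weight now needs 1 byte instead of 4 (plus one shared scale),
# at the cost of a small rounding error in the recovered values.
```

The same trade-off (smaller, faster weights versus a bounded approximation error) is what makes multi-billion-parameter models tractable on phones and in browsers.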
<p>The experimental cross-platform MediaPipe LLM Inference API, designed to streamline on-device LLM integration for developers, supports Web, Android, and iOS with initial support for four openly available LLMs: <a href="http://ai.google.dev/gemma" target="_blank">Gemma</a>, <a href="https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/" target="_blank">Phi 2</a>, <a href="https://falconllm.tii.ae/falcon.html" target="_blank">Falcon</a>, and <a href="https://stability.ai/stable-lm" target="_blank">Stable LM</a>. It gives researchers and developers the flexibility to prototype and test popular openly available LLM models on-device.</p>
<p>On Android, the MediaPipe LLM Inference API is intended for experimental and research use only. Production applications with LLMs can use the <a href="https://ai.google.dev/" target="_blank">Gemini API</a> or <a href="https://android-developers.googleblog.com/2023/12/a-new-foundation-for-ai-on-android.html" target="_blank">Gemini Nano on-device through Android AICore</a>. AICore is the new system-level capability introduced in Android 14 to provide Gemini-powered solutions for high-end devices, including integrations with the latest ML accelerators, use-case optimized LoRA adapters, and safety filters. To start using Gemini Nano on-device with your app, apply to the <a href="https://docs.google.com/forms/d/e/1FAIpQLSdDvg0eEzcUY_-CmtiMZLd68KD3F0usCnRzKKzWb4sAYwhFJg/viewform?usp=header_link" target="_blank">Early Access Preview</a>.</p><br />
<h3>LLM Inference API</h3>
<p>Starting today, you can test out the MediaPipe LLM Inference API via our <a href="https://mediapipe-studio.webapps.google.com/studio/demo/llm_inference" target="_blank">web demo</a> or by building our <a href="https://github.com/googlesamples/mediapipe/tree/main/examples/llm_inference/js" target="_blank">sample demo</a> apps. You can experiment and integrate it into your projects via our <a href="https://developers.google.com/mediapipe/solutions/genai/llm_inference/web_js" target="_blank">Web</a>, <a href="https://developers.google.com/mediapipe/solutions/genai/llm_inference/android" target="_blank">Android</a>, or <a href="https://developers.google.com/mediapipe/solutions/genai/llm_inference/ios" target="_blank">iOS SDKs</a>.</p>
<p>Using the LLM Inference API allows you to bring LLMs on-device in just a few steps. These steps apply across web, iOS, and Android, though the SDK and native API are platform-specific. The following code samples show the web SDK.</p>
<blockquote>
<p>1. Pick model weights compatible with one of our supported model architectures </p></blockquote><blockquote><div style="text-align: left;"> </div></blockquote><blockquote>
<p>2. Convert the model weights into a TensorFlow Lite Flatbuffer using the MediaPipe Python Package</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;"><span style="color: #780088;">from</span> mediapipe<span style="color: #666666;">.</span>tasks<span style="color: #666666;">.</span>python.genai <span style="color: #780088;">import</span> converter
config <span style="color: #666666;">=</span> converter<span style="color: #666666;">.</span>ConversionConfig<span style="color: #2b00fe;">(</span><span style="color: #666666;">...</span><span style="color: #2b00fe;">)</span>
converter<span style="color: #666666;">.</span>convert_checkpoint<span style="color: #2b00fe;">(</span>config<span style="color: #2b00fe;">)</span></span></pre></div></blockquote><div style="text-align: left;"> </div><blockquote>
<p>3. Include the LLM Inference SDK in your application</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;"><span style="color: #780088;">import</span> <span style="color: #2b00fe;">{ FilesetResolver, LlmInference }</span> <span style="color: #780088;">from</span>
<span style="color: #0d904f;">"https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai"</span></span>
</pre></div></blockquote><div style="text-align: left;"> </div><blockquote>
<p>4. Host the TensorFlow Lite Flatbuffer along with your application.</p></blockquote><blockquote><div style="text-align: left;"> </div></blockquote><blockquote>
<p>5. Use the LLM Inference API to take a text prompt and get a text response from your model.</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;"><span style="color: #780088;">const</span> <span style="color: #2b00fe;">fileset</span> <span style="color: #666666;">=</span> <span style="color: #780088;">await</span>
<span style="color: #2b00fe;">FilesetResolver<span>.</span>forGenAiTasks(<span>"</span></span><span style="color: #0d904f;">https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm</span><span style="color: #2b00fe;"><span>"</span>)</span><span style="font-style: italic;">;</span>
<span style="color: #780088;">const</span> <span style="color: #2b00fe;">llmInference</span> <span style="color: #666666;">=</span> <span style="color: #780088;">await</span> <span style="color: #2b00fe;">LlmInference<span>.</span>createFromModelPath(fileset,</span> <span style="color: #2b00fe;">"</span><span style="color: #0d904f;">model<span>.</span>bin</span><span style="color: #2b00fe;">")</span><span style="font-style: italic;">;</span>
<span style="color: #780088;">const</span> <span style="color: #2b00fe;">responseText</span> <span style="color: #666666;">=</span> <span style="color: #780088;">await</span> <span style="color: #2b00fe;">llmInference<span>.</span>generateResponse(<span>"</span></span><span style="color: #0d904f;">Hello, nice to meet you</span><span style="color: #2b00fe;"><span>"</span>)</span><span style="font-style: italic;">;</span>
<span style="color: #2b00fe;">document<span>.</span>getElementById('</span><span style="color: #780088;">output</span>')<span style="color: #666666;">.</span><span style="color: #2b00fe;">textContent</span> <span style="color: #2b00fe;"><span>=</span> responseText</span><span style="font-style: italic;">;</span></span>
</pre></div></blockquote>
<p><br /></p><p>Please see our <a href="https://developers.google.com/mediapipe/solutions/genai/llm_inference" target="_blank">documentation</a> and <a href="https://github.com/googlesamples/mediapipe/tree/main/examples/llm_inference" target="_blank">code examples</a> for a detailed walk through of each of these steps.</p>
<p>Here are GIFs of Gemma 2B running in real time via the MediaPipe LLM Inference API.</p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="moving image of Gemma 2B running on-device in browser via the MediaPipe LLM Inference API" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgODpVrLQCUnPA2JaDVbPt_ml232bjvkChyphenhyphenadKXPvi7WdlvI2t0naA5AwHr9_zm-1DPEUfwpdVIKlq71gVxenM4LUHmtq624NhSWoMzndi9ySXR6-4GovW2oWP8exD9QJ5fmOttaeD0fjBnzuWUIa1J45bpxeJZPdyoTFF6HnQEwWv2JvPWTgQwbCa67Ao/s1600/image4.gif" style="margin: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Gemma 2B running on-device in browser via the MediaPipe LLM Inference API</em></td></tr></tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="moving image of Gemma 2B running on-device on iOS (left) and Android (right) via the MediaPipe LLM Inference API" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNdnAGkNYv3ClOg5x3AXHA5cFCLZlMTknA8Nqo_oVdDGvUKYy-zxHYPFwS0n5fw77N2YDokYdxK7l0GWyOoSZ4YHl6gSSxbErg7Ek-R3Jp1twPlZ18L3eOYAQkyX7ke2VKy_7xM86ZHjrTw0nks78OPMAZXDETFFCsHKspBZINmLnlaHJJmw5Db8yDXGc/s1600/LLM-On-Device-Demo-Inline.gif" style="margin: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Gemma 2B running on-device on iOS (left) and Android (right) via the MediaPipe LLM Inference API</em></td></tr></tbody></table><br />
<h3>Models</h3>
<p>Our initial release supports the following four model architectures. Any model weights compatible with these architectures will work with the LLM Inference API. You can use the base model weights, a community fine-tuned version of the weights, or weights fine-tuned with your own data.</p>
<div align="left">
<table style="border-collapse: collapse; border: none; margin-right: calc(0%); width: 100%;">
<tbody>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Model</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Parameter Size</span></strong></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><span style="color: #202124; font-size: 12pt;"> Falcon 1B</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><span style="color: #202124; font-size: 12pt;"> 1.3 Billion</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><span style="color: #202124; font-size: 12pt;"> Gemma 2B</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><span style="color: #202124; font-size: 12pt;"> 2.5 Billion</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><span style="color: #202124; font-size: 12pt;"> Phi 2</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><span style="color: #202124; font-size: 12pt;"> 2.7 Billion</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><span style="color: #202124; font-size: 12pt;"> Stable LM 3B</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 50%;">
<p><span style="color: #202124; font-size: 12pt;"> 2.8 Billion</span></p>
</td>
</tr>
</tbody>
</table>
</div><br /><br />
<h3>Model Performance</h3>
<p>Through significant optimizations, some of which are detailed below, the MediaPipe LLM Inference API delivers state-of-the-art on-device latency, focusing on CPU and GPU to support multiple platforms. For sustained performance in a production setting on select premium phones, <a href="https://developer.android.com/ml/aicore" target="_blank">Android AICore</a> can take advantage of hardware-specific neural accelerators.</p>
<p>When measuring latency for an LLM, there are a few terms and measurements to consider. Time to First Token and Decode Speed are the two most meaningful, as they measure how quickly you receive the start of a response and how quickly the response is generated once it starts.</p>
<div align="left">
<table style="border-collapse: collapse; border: none; margin-right: calc(0%); width: 100%;">
<tbody>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2338%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Term</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 31.7695%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Significance</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 39.8742%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Measurement</span></strong></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2338%;">
<p><span style="color: #202124; font-size: 12pt;"> Token</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 31.7695%;">
<p>LLMs use tokens rather than words as inputs and outputs. Each model used with the LLM Inference API has a tokenizer built in which converts between words and tokens.</p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 39.8742%;">
<p>100 English words ≈ 130 tokens. However, the conversion depends on the specific LLM and the language.</p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2338%;">
<p><span style="color: #202124; font-size: 12pt;"> Max Tokens</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 31.7695%;">
<p>The maximum total tokens for the LLM prompt + response.</p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 39.8742%;">
<p>Configured in the LLM Inference API at runtime.</p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2338%;">
<p><span style="color: #202124; font-size: 12pt;"> Time to First Token</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 31.7695%;">
<p>Time between calling the LLM Inference API and receiving the first token of the response.</p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 39.8742%;">
<p>Max Tokens / Prefill Speed</p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2338%;">
<p><span style="color: #202124; font-size: 12pt;"> Prefill Speed</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 31.7695%;">
<p>How quickly a prompt is processed by an LLM.</p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 39.8742%;">
<p>Model and device specific. Benchmark numbers below.</p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2338%;">
<p><span style="color: #202124; font-size: 12pt;"> Decode Speed</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 31.7695%;">
<p>How quickly a response is generated by an LLM.</p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 39.8742%;">
<p>Model and device specific. Benchmark numbers below.</p>
</td>
</tr>
</tbody>
</table>
</div><br />
<p>The <i>Prefill Speed</i> and <i>Decode Speed</i> depend on the model, hardware, and max tokens. They can also vary with the current load of the device.</p>
<p>The following speeds were measured on high-end devices with max tokens set to 1280, an input prompt of 1024 tokens, and int8 weight quantization. The exception is Gemma 2B (int4), found <a href="https://www.kaggle.com/models/google/gemma/frameworks/tfLite/variations/gemma-2b-it-gpu-int4" target="_blank">here on Kaggle</a>, which uses mixed 4/8-bit weight quantization.</p><br />
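To make these terms concrete, here is a rough back-of-the-envelope sketch using the measurements from the table above. The speeds in it are hypothetical placeholders, not the benchmark numbers:

```python
# Hypothetical speeds for illustration only; see the benchmark graphs
# below for real numbers. Uses the measurement from the table above:
# Time to First Token = Max Tokens / Prefill Speed.

def estimate_latency(max_tokens, response_tokens,
                     prefill_tok_per_s, decode_tok_per_s):
    """Return (time_to_first_token, total_time) in seconds."""
    ttft = max_tokens / prefill_tok_per_s
    total = ttft + response_tokens / decode_tok_per_s
    return ttft, total

# Benchmark setup: max tokens 1280, a 1024-token prompt, 256-token response.
ttft, total = estimate_latency(max_tokens=1280, response_tokens=256,
                               prefill_tok_per_s=500.0,  # hypothetical
                               decode_tok_per_s=20.0)    # hypothetical
print(f"time to first token: {ttft:.2f}s, total: {total:.2f}s")
```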
<h4>Benchmarks</h4>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Graph showing prefill performance in tokens per second across WebGPU, iOS (GPU), Android (GPU), and Android (CPU)" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLPR59fUpRg07NlszPY1B7z877s1DGPEbAX9ywGVzIochgoRm5LTF53s5u3-LimY_wEbCcInyYTi9RHmB0gsB0OxCYpuh1GxsCdSFiIhfc3RVYveSRVkfBsfyCaUI4fkajo496QRw8bZgOcXcS-JPOrH9Yha48BKasp1GVerQjKlG5AcWV_SgHX3kZKbU/s1600/image3.png" style="margins: auto; width: auto;" /></td></tr></tbody></table></image>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Graph showing decode performance in tokens per second across WebGPU, iOS (GPU), Android (GPU), and Android (CPU)" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-SygtHJ4obn2Jp3EM0RxghmEcCYG47TfOKNR3Kv7xCHoQXIqonk4J_dEum6HxbifQ67YHseFk1Fmcnx3F3IU40sUJlN-VWcziflPeAortfvix2ASjwgYASkyzIoRuhO1s3QqLLBp4CCKResDHwW-TcoxtnF8EC5I9CcqhpO0c-FbyGCi7DiTTxXHThqQ/s1600/image6.png" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>On the GPU, Falcon 1B and Phi 2 use fp32 activations, while Gemma and StableLM 3B use fp16 activations as the latter models showed greater robustness to precision loss according to our quality eval studies. The lowest bit activation data type that maintained model quality was chosen for each. Note that Gemma 2B (int4) was the only model we could run on iOS due to its memory constraints, and we are working on enabling other models on iOS as well.</em></td></tr></tbody></table></image><br />
<h3>Performance Optimizations</h3>
<p>To achieve the performance numbers above, countless optimizations were made across MediaPipe, TensorFlow Lite, XNNPack (our CPU neural network operator library), and our GPU-accelerated runtime. The following are a select few that resulted in meaningful performance improvements.</p>
<p><b><i>Weights Sharing:</i></b> The LLM inference process comprises two phases: a prefill phase and a decode phase. Traditionally, this setup would require two separate inference contexts, each independently managing resources for its corresponding ML model. Given the memory demands of LLMs, we've added a feature that allows sharing the weights and the KV cache across inference contexts. Although sharing weights might seem straightforward, it has significant performance implications when sharing between compute-bound and memory-bound operations. In typical ML inference scenarios, where weights are not shared with other operators, they are meticulously configured for each fully connected operator separately to ensure optimal performance. Sharing weights with another operator forgoes that per-operator optimization and requires new kernel implementations that can run efficiently even on sub-optimal weights.</p>
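The idea can be sketched as follows. This is a conceptual Python illustration, not MediaPipe's actual implementation: both inference contexts hold references to one weight buffer and one KV cache rather than private copies.

```python
# Conceptual sketch only, not MediaPipe's actual implementation: the
# prefill and decode contexts reference a single weight buffer and a
# single KV cache, so the memory cost of the LLM is paid once.

shared_weights = {"ffw_kernel": bytearray(8 * 8)}  # stand-in for int8 weights
shared_kv_cache = {"keys": [], "values": []}

class InferenceContext:
    def __init__(self, weights, kv_cache):
        self.weights = weights    # a reference, not a copy
        self.kv_cache = kv_cache  # shared between phases

prefill_ctx = InferenceContext(shared_weights, shared_kv_cache)
decode_ctx = InferenceContext(shared_weights, shared_kv_cache)

# Both contexts observe the same storage.
assert prefill_ctx.weights["ffw_kernel"] is decode_ctx.weights["ffw_kernel"]
assert prefill_ctx.kv_cache is decode_ctx.kv_cache
```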
<p><b><i>Optimized Fully Connected Ops:</i></b> XNNPack’s FULLY_CONNECTED operation has undergone two significant optimizations for LLM inference. First, dynamic range quantization seamlessly merges the computational and memory benefits of full integer quantization with the precision advantages of floating-point inference. The utilization of int8/int4 weights not only enhances memory throughput but also achieves remarkable performance, especially with the efficient, in-register decoding of 4-bit weights requiring only one additional instruction. Second, we actively leverage the I8MM instructions in ARM v9 CPUs which enable the multiplication of a 2x8 int8 matrix by an 8x2 int8 matrix in a single instruction, resulting in twice the speed of the NEON dot product-based implementation.</p>
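As a rough arithmetic illustration of the dynamic range quantization idea (a pure-Python sketch; XNNPack's actual kernels work per channel, in registers, with packed int4/int8 weights), float weights are reduced to int8 values plus a single float scale:

```python
# Illustrative sketch only, not XNNPack's implementation: symmetric
# dynamic-range quantization stores float weights as int8 values plus
# one float scale, and dequantizes back to float at inference time.

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of float weights."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(w * 127.0 / max_abs))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

w = [0.5, -1.0, 0.25, 0.75]
q, scale = quantize_int8(w)   # 4 bytes of weights + one float scale
w_hat = dequantize(q, scale)

# The reconstruction error is bounded by one quantization step.
assert all(abs(a - b) <= scale for a, b in zip(w, w_hat))
```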
<p><b><i>Balancing Compute and Memory:</i></b> Upon profiling the LLM inference, we identified distinct limitations for both phases: the prefill phase faces restrictions imposed by the compute capacity, while the decode phase is constrained by memory bandwidth. Consequently, each phase employs different strategies for dequantization of the shared int8/int4 weights. In the prefill phase, each convolution operator first dequantizes the weights into floating-point values before the primary computation, ensuring optimal performance for computationally intensive convolutions. Conversely, the decode phase minimizes memory bandwidth by adding the dequantization computation to the main mathematical convolution operations.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Flowchart showing compute-intensive prefill phase and memory-intensive decode phase, highlighting difference in performance bottlenecks" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5paKnljdTmU6vRkLLqffBBzG_kz0_SooqXx-d4TdIg_65xY1Q7gI5-W12ZwYPZ-tDBXv7gLxsZqBoLYUO8oxEbu2z69P-J72u2iZT7zHcw993Fa_HFlygUjRQqgCT4xUluDIb2L_sOEb-KUGW24T_ccxy-jROtSSig9STBkue8G8tulHS88I46hUmtNg/s1600/LLM-On-Device-prefill-decode-flowchart.png" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>During the compute-intensive prefill phase, the int4 weights are dequantized a priori for optimal CONV_2D computation. In the memory-intensive decode phase, dequantization is performed on the fly, along with CONV_2D computation, to minimize the memory bandwidth usage.</em></td></tr></tbody></table></image><br />
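The two strategies above can be sketched in pure Python. This is a conceptual illustration only; the real runtime performs this inside its CONV_2D GPU kernels:

```python
# Conceptual sketch, not the actual GPU kernels: both strategies compute
# the same dot product over int8 weights, but the decode-style path folds
# dequantization into the accumulation, so no float weight buffer is
# materialized.

q_weights = [64, -127, 32, 95]  # int8 weights
scale = 1.0 / 127.0             # shared quantization scale
x = [1.0, 2.0, 3.0, 4.0]        # activations

# Prefill-style (compute-bound phase): dequantize the weights up front,
# then run the heavy computation on float values.
w_f32 = [q * scale for q in q_weights]
y_prefill = sum(w * xi for w, xi in zip(w_f32, x))

# Decode-style (memory-bound phase): dequantize on the fly inside the
# dot product, trading a little extra compute for less memory traffic.
y_decode = sum((q * scale) * xi for q, xi in zip(q_weights, x))

assert abs(y_prefill - y_decode) < 1e-9
```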
<p><b><i>Custom Operators:</i></b> For GPU-accelerated LLM inference on-device, we rely extensively on custom operations to mitigate the inefficiency caused by numerous small shaders. These custom ops allow for special operator fusions, and let various LLM parameters, such as the token ID, sequence patch size, and sampling parameters, be packed into a specialized custom tensor used mostly within these specialized operations.</p>
<p><b><i>Pseudo-Dynamism:</i></b> In the attention block, we encounter dynamic operations that increase over time as the context grows. Since our GPU runtime lacks support for dynamic ops/tensors, we opt for fixed operations with a predefined maximum cache size. To reduce computational complexity, we introduce a parameter that enables skipping certain value calculations or processing reduced data.</p>
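A hedged sketch of this idea in Python (conceptual only; the real runtime implements it as static GPU ops):

```python
# Conceptual sketch only: the cache is allocated once at a predefined
# maximum size, and a step counter lets the otherwise-static attention
# computation skip slots that are not yet filled.

MAX_CACHE = 8
kv_cache = [0.0] * MAX_CACHE  # fixed allocation, never resized
steps = 0                     # how many slots are actually in use

def append_token(value):
    global steps
    kv_cache[steps] = value
    steps += 1

def attend(query):
    # Only the first `steps` entries participate; the tail is skipped.
    return sum(query * kv_cache[i] for i in range(steps))

for v in [1.0, 2.0, 3.0]:
    append_token(v)

assert attend(2.0) == 12.0          # 2*(1 + 2 + 3); slots 3..7 untouched
assert len(kv_cache) == MAX_CACHE   # the allocation never grows
```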
<p><b><i>Optimized KV Cache Layout:</i></b> Since the entries in the KV cache ultimately serve as weights for convolutions, employed in lieu of matrix multiplications, we store these in a specialized layout tailored for convolution weights. This strategic adjustment eliminates the necessity for extra conversions or reliance on unoptimized layouts, and therefore contributes to a more efficient and streamlined process.</p><br />
<h3>What’s Next </h3>
<p>We are thrilled with the optimizations and the performance in today’s experimental release of the MediaPipe LLM Inference API. This is just the start. Over 2024, we will expand to more platforms and models, and offer broader conversion tools, complementary on-device components, high-level tasks, and more. </p>
<p>You can check out the official sample on <a href="https://github.com/googlesamples/mediapipe/tree/main/examples/llm_inference" target="_blank">GitHub</a> demonstrating everything you’ve just learned about and read through our official <a href="https://developers.google.com/mediapipe/solutions/genai/llm_inference" target="_blank">documentation</a> for even more details. Keep an eye on the <a href="https://www.youtube.com/channel/UC_x5XG1OV2P6uZZ5FSM9Ttw" target="_blank">Google for Developers</a> YouTube channel for updates and tutorials.</p><br />
<h4>Acknowledgements</h4>
<p>We’d like to thank all team members who contributed to this work: T.J. Alumbaugh, Alek Andreev, Frank Ban, Jeanine Banks, Frank Barchard, Pulkit Bhuwalka, Buck Bourdon, Maxime Brénon, Chuo-Ling Chang, Lin Chen, Linkun Chen, Yu-hui Chen, Nikolai Chinaev, Clark Duvall, Rosário Fernandes, Mig Gerard, Matthias Grundmann, Ayush Gupta, Mohammadreza Heydary, Ekaterina Ignasheva, Ram Iyengar, Grant Jensen, Alex Kanaukou, Prianka Liz Kariat, Alan Kelly, Kathleen Kenealy, Ho Ko, Sachin Kotwani, Andrei Kulik, Yi-Chun Kuo, Khanh LeViet, Yang Lu, Lalit Singh Manral, Tyler Mullen, Karthik Raveendran, Raman Sarokin, Sebastian Schmidt, Kris Tonthat, Lu Wang, Zoe Wang, Tris Warkentin, Geng Yan, Tenghui Zhu, and the Gemma team.</p>
Google Developers, 2024-02-26: Google Cloud Next '24 session library is now available
<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGzsYFvJ5ExANnYtIkIQBOLE9ZZO6u8QincU2yNIThu8PNxMXakRkudv5q6He_soj63fdOdmxl8GDgi7kGmpYTaDVccXHjbM8XDjZEgPuNi203idXHZkMmTkKqBtvLspbv0cfU-Fj_NQbm2w676afqG8X7zJGPfCPHahooIp8zkFHlIelhxF5Kxfe6XQ8/s1600/GfD-Cloud-Next-24-Social.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGzsYFvJ5ExANnYtIkIQBOLE9ZZO6u8QincU2yNIThu8PNxMXakRkudv5q6He_soj63fdOdmxl8GDgi7kGmpYTaDVccXHjbM8XDjZEgPuNi203idXHZkMmTkKqBtvLspbv0cfU-Fj_NQbm2w676afqG8X7zJGPfCPHahooIp8zkFHlIelhxF5Kxfe6XQ8/s1600/GfD-Cloud-Next-24-Social.png" style="display: none;" />
<em>Posted by Max Saltonstall – Developer Relations Engineer</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGzsYFvJ5ExANnYtIkIQBOLE9ZZO6u8QincU2yNIThu8PNxMXakRkudv5q6He_soj63fdOdmxl8GDgi7kGmpYTaDVccXHjbM8XDjZEgPuNi203idXHZkMmTkKqBtvLspbv0cfU-Fj_NQbm2w676afqG8X7zJGPfCPHahooIp8zkFHlIelhxF5Kxfe6XQ8/s1600/GfD-Cloud-Next-24-Social.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGzsYFvJ5ExANnYtIkIQBOLE9ZZO6u8QincU2yNIThu8PNxMXakRkudv5q6He_soj63fdOdmxl8GDgi7kGmpYTaDVccXHjbM8XDjZEgPuNi203idXHZkMmTkKqBtvLspbv0cfU-Fj_NQbm2w676afqG8X7zJGPfCPHahooIp8zkFHlIelhxF5Kxfe6XQ8/s1600/GfD-Cloud-Next-24-Social.png" /></a>
<a name='more'></a><p></p>
<p><a href="https://cloud.withgoogle.com/next?utm_source=google&utm_medium=cpc&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=-&utm_term=-&gad_source=1&gclid=CjwKCAiA_tuuBhAUEiwAvxkgThQ1sKYoRmPVp2Ry0pPhxzQGK3n7B4cWNldOte5jpT3Fws-nYDF6mBoCrdwQAvD_BwE&gclsrc=aw.ds" target="_blank">Google Cloud Next 2024</a> is coming soon, and our <a href="https://cloud.withgoogle.com/next/session-library?utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">session library</a> is live! </p>
<p>Next ‘24 covers a ton of ground, so choose your adventure. There's something on the menu for everyone, not just <a href="https://cloud.withgoogle.com/next/session-library?filters=interest-ai&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">AI</a>.</p>
<h3>Developer-focused</h3>
<p>Developers, this is your time. We have got a huge collection of edutainment for you in store for Next, including:</p>
<ul><blockquote>
<li>Thousands of Googlers on-site to connect and chat</li>
<li>Demos you can play with, try out, poke and see inside of (rather than just watching)</li>
<li>Talks from Champion Innovators about how they put cloud to use</li>
<li>Gathering spots for classes, interest groups, trainings and hanging out</li>
</blockquote></ul>
<p>This year we have more than double the number of <a href="https://cloud.withgoogle.com/next/session-library?filters=level-advanced-technical&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">advanced technical sessions</a>, and recommendations for <a href="https://cloud.withgoogle.com/next/session-library?filters=interest-startup&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">startups</a>, <a href="https://cloud.withgoogle.com/next/session-library?filters=interest-small-to-medium-size-business&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">small and medium businesses</a>, and <a href="https://cloud.withgoogle.com/next/session-library?filters=interest-sustainability&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">sustainability</a> for all. Data scientists and data engineers can shard themselves out into 60+ <a href="https://cloud.withgoogle.com/next/session-library?filters=track-data-analysts-data-scientists-data-engineers&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">big data</a> sessions, including going to the cutting edge with BigQuery <a href="https://cloud.withgoogle.com/next/session-library?session=ANA302" target="_blank">multi-modal data</a>.</p><br/>
<h3>Artificial intelligence</h3>
<p>If you want to build your own AI model, <a href="https://cloud.withgoogle.com/next/session-library?session=DEV100&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">LLM</a> or <a href="https://cloud.withgoogle.com/next/session-library?session=OPS219&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">chatbot</a> we've got sessions for that, covering ways to <a href="https://cloud.withgoogle.com/next/session-library?session=AIML204&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">use Vertex AI to spin up your own large-language models on cloud</a>, to <a href="https://cloud.withgoogle.com/next/session-library?utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-&session=AIML101" target="_blank">search your multimedia library</a> and to <a href="https://cloud.withgoogle.com/next/session-library?session=DEI112&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">maintain equity in your data</a> used for training.</p><br/>
<h3>Diversity, equity, and inclusion</h3>
<p>Equity and inclusion go way past AI, and we’re really excited to have talks this year addressing <a href="https://cloud.withgoogle.com/next/session-library?session=DEI117&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">allyship for your Muslim colleagues</a>, <a href="https://cloud.withgoogle.com/next/session-library?session=DEI101&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">growing inclusion in your org</a>, and <a href="https://cloud.withgoogle.com/next/session-library?session=DEI103&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">dialogues for change</a>.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="A cupped hand with a lock floating in a bed of clouds above it against a nebulous blue background. A faint ray of sunshine is shining through from the top left corner." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgteSpZ7LpmKKe8aFxn0AndU0nN1i_0uxcjRrzE04GQ1LIPh2b6M9dFrxBXfaEgFHlstuC9LM52g6KnrOdzpRDRvKBePSFz4dfZpoDN1WxcW5vfZpBhTbFpFhyphenhyphenfegSpo8ISaT4WtHccaO808HC5oFJRUOLCthceqWRU2T0VEYVXtB0DxLjfzxUuuH9ytCw/s1600/Google-Cloud-Security-Inline.png" style= "margins: auto; width: auto;" /></td></tr></tbody></table></image><br/>
<h3>Security and data privacy</h3>
<p>Don't forget <a href="https://cloud.withgoogle.com/next/session-library?filters=track-security-professionals&utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">security</a> (really, who does?). Whether you are tackling security at the infrastructure, platform, machine or workload level, we've got sessions for you. Even if you're on multiple clouds, with multiple teams, you still need to get insight into the security and compliance of it all.</p>
<p>Speaking of all these fun chips, what about the salsa? We've got supply chain security with talks on <a href="https://slsa.dev/" target="_blank">SLSA</a> and <a href="https://github.com/guacsec/guac" target="_blank">GUAC</a>, plus numerous options for serverless workload security and ML data privacy.</p><br/>
<h3>Come join us</h3>
<p>So, still on the fence?</p>
<p>Come for the magnificent shows in Vegas.</p>
<p>Come for the chance to sit down with expert developers and engineers.</p>
<p>Come for the amazing technical talks and tutorials.</p>
<p>Or just come for the spectacle. We've got it all at <a href="https://cloud.withgoogle.com/next?utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY24-Q2-global-ENDM33-physicalevent-er-next-2024-mc&utm_content=gds-session-blog-feb&utm_term=-" target="_blank">Google Cloud Next ‘24</a>.</p>
<p><a href="https://cloud.withgoogle.com/next/session-library" target="_blank">Check out sessions</a> and secure your spot for three days of learning, community-building, and cloud tech with experts and peers at <b>Mandalay Bay Convention Center in Las Vegas, April 9–11</b>.</p>
Google Developers, 2024-02-21: Introducing Gemma models in Keras
<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdWSrhXoP4nHcjFcmbTNjhKcd3LRLcIiL2uEp4_8ilX7h_zsMu_muQlLP52eEgxvq4ejAQKy0TQKNaFC07O4o9imxqDDKF8hgaLU-iYfwmcPYGpm64psp1WHyaJOZQPImAhCDpYtc4nWEvbM3hSERTA50n08rIhftkP0rK1ai9uB-o3nWx0TQMRWt1leQ/s1600/Keras-Gemma-GfD.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdWSrhXoP4nHcjFcmbTNjhKcd3LRLcIiL2uEp4_8ilX7h_zsMu_muQlLP52eEgxvq4ejAQKy0TQKNaFC07O4o9imxqDDKF8hgaLU-iYfwmcPYGpm64psp1WHyaJOZQPImAhCDpYtc4nWEvbM3hSERTA50n08rIhftkP0rK1ai9uB-o3nWx0TQMRWt1leQ/s1600/Keras-Gemma-GfD.png" style="display: none;" />
<em>Posted by <a href="https://twitter.com/martin_gorner" target="_blank">Martin Görner</a> – Product Manager, Keras</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxJm_iBwcIXwrrywrm5hX6VtcXe64G75-HKc4EavPtKqnKvZsvtSOeD2_ti5secAQuzkdukK8HojvoLA-F6mXoxOfbBH_kSXTk9QNZJZwt_qeXP21_-uYc4wUahMqrGFzvajERUpEYlSh5KJ4QIDEIO9GNRp3KbTyNGanRYIyhJCVDDwygo6AXDXthbI8/s1600/Header-Gemma-in-Keras!-.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxJm_iBwcIXwrrywrm5hX6VtcXe64G75-HKc4EavPtKqnKvZsvtSOeD2_ti5secAQuzkdukK8HojvoLA-F6mXoxOfbBH_kSXTk9QNZJZwt_qeXP21_-uYc4wUahMqrGFzvajERUpEYlSh5KJ4QIDEIO9GNRp3KbTyNGanRYIyhJCVDDwygo6AXDXthbI8/s1600/Header-Gemma-in-Keras!-.png" /></a>
<a name='more'></a><p></p>
<p>The Keras team is happy to announce that <a href="http://ai.google.dev/gemma" target="_blank">Gemma</a>, a family of lightweight, state-of-the-art open models built from the same research and technology that we used to create the Gemini models, is now available in the <a href="https://keras.io/api/keras_nlp/" target="_blank">KerasNLP</a> collection. Thanks to <a href="https://keras.io/keras_3/" target="_blank">Keras 3</a>, Gemma runs on JAX, PyTorch and TensorFlow. With this release, Keras is also introducing several new features specifically designed for large language models: a new LoRA API (Low Rank Adaptation) and large-scale <a href="https://keras.io/guides/distribution/" target="_blank">model-parallel training</a> capabilities.</p>
<p>If you want to dive directly into code samples, head here:</p>
<ul><blockquote>
<li><a href="https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/get_started.ipynb" target="_blank">Get started with Gemma models</a></li>
<li><a href="https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/lora_tuning.ipynb" target="_blank">Fine-tune Gemma models with LoRA</a></li>
<li><a href="https://www.kaggle.com/code/nilaychauhan/keras-gemma-distributed-finetuning-and-inference" target="_blank">Fine-tune Gemma models on multiple GPUs/TPUs</a></li>
</blockquote></ul><br/>
<h3>Get started</h3>
<p>Gemma models come in portable 2B and 7B parameter sizes, and deliver <a href="https://goo.gle/GemmaReport" target="_blank">significant advances</a> against similar open models, and even some larger ones. For example:</p>
<ul><blockquote>
<li>Gemma 7B scores a new best-in-class 64.3% of correct answers in the MMLU language understanding benchmark (vs. 62.5% for Mistral-7B and 54.8% for Llama2-13B)</li>
<li>Gemma adds +11 percentage points to the GSM8K benchmark score for grade-school math problems (46.4% for Gemma 7B vs. Mistral-7B 35.4%, Llama2-13B 28.7%)</li>
<li>and +6.1 percentage points of correct answers in HumanEval, a coding challenge (32.3% for Gemma 7B, vs. Mistral 7B 26.2%, Llama2 13B 18.3%).</li>
</blockquote></ul>
<p>Gemma models are offered with a familiar KerasNLP API and a super-readable Keras implementation. You can instantiate the model with a single line of code:</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;">gemma_lm <span style="color: #666666;">=</span> keras_nlp<span style="color: #666666;">.</span>models<span style="color: #666666;">.</span>GemmaCausalLM<span style="color: #666666;">.</span>from_preset(<span style="color: #ba2121;">"gemma_2b_en"</span>)</span>
</pre></div>
<p>And run it directly on a text prompt – yes, tokenization is built-in, although you can easily split it out if needed; read the <a href="https://keras.io/guides/keras_nlp/getting_started/" target="_blank">Keras NLP guide</a> to see how.</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;">gemma_lm<span style="color: #666666;">.</span>generate(<span style="color: #ba2121;">"Keras is a"</span>, max_length<span style="color: #666666;">=32</span>)
<span style="color: #666666;">></span> "Keras is a popular deep learning framework <span style="color: green;">for</span> neural networks..."</span>
</pre></div>
<p>Try it out here: <a href="https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/get_started.ipynb" target="_blank">Get started with Gemma models</a></p><br/>
<h3>Fine-tuning Gemma Models with LoRA</h3>
<p>Thanks to Keras 3, you can choose the backend on which you run the model. Here is how to switch:</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;">os<span style="color: #666666;">.</span>environ[<span style="color: #ba2121;">"KERAS_BACKEND"</span>] <span style="color: #666666;">=</span> <span style="color: #ba2121;">"jax"</span> # <span style="color: green;">Or</span> <span style="color: #ba2121;">"tensorflow"</span> <span style="color: green;">or</span> <span style="color: #ba2121;">"torch"</span><span style="color: #666666;">.</span>
import keras # import keras after having selected the backend</span>
</pre></div>
<p>Keras 3 comes with several new features specifically for large language models. Chief among them is a new LoRA API (Low Rank Adaptation) for parameter-efficient fine-tuning. Here is how to activate it:</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;">gemma_lm<span style="color: #666666;">.</span>backbone<span style="color: #666666;">.</span>enable_lora(rank<span style="color: #666666;">=4</span>)
<span style="color: #0d904f;"># Note<span>:</span> rank<span>=4</span> replaces the weights matrix of relevant layers with the
# product AxB of two matrices of rank <span>4</span>, which reduces the <span>number</span> of
# trainable parameters<span>.</span></span></span>
</pre></div>
<p>This single line drops the number of trainable parameters from 2.5 billion to 1.3 million!</p>
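As a back-of-the-envelope check of why LoRA shrinks the trainable parameter count so dramatically (the layer shape below is hypothetical, not Gemma's actual architecture):

```python
# Hypothetical layer shape for illustration; not Gemma's actual
# architecture. A rank-r LoRA adapter on a (d_in x d_out) weight matrix
# trains the two factors A (d_in x r) and B (r x d_out), i.e.
# r * (d_in + d_out) parameters, instead of the full d_in * d_out.

def lora_params(d_in, d_out, rank):
    return rank * (d_in + d_out)

d_in, d_out, rank = 2048, 2048, 4
full = d_in * d_out                       # 4,194,304 frozen weights
adapter = lora_params(d_in, d_out, rank)  # 16,384 trainable weights
print(f"trainable fraction: {adapter / full:.4%}")  # well under 1% per layer
```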
<p>Try it out here: <a href="https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/lora_tuning.ipynb" target="_blank">Fine-tune Gemma models with LoRA.</a></p><br/>
<h3>Fine-tuning Gemma models on multiple GPU/TPUs</h3>
<p style="text-align: left;">Keras 3 also supports large-scale model training, and Gemma is the perfect model to try it out. The new <a href="https://keras.io/guides/distribution/" target="_blank">Keras distribution API</a> offers data-parallel and model-parallel distributed training options. The new API is meant to be multi-backend but, for the time being, it is implemented only for the JAX backend, because of JAX's proven scalability (Gemma models were trained with JAX).</p>
<p>To fine-tune the larger Gemma 7B, a distributed setup is useful, for example a TPUv3 with 8 TPU cores that you can get for free on Kaggle, or an 8-GPU machine from Google Cloud. Here is how to configure the model for distributed training, using model parallelism:</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;">device_mesh <span style="color: #666666;">=</span> keras<span style="color: #666666;">.</span>distribution<span style="color: #666666;">.</span>DeviceMesh(
    (<span style="color: #666666;">1</span>, <span style="color: #666666;">8</span>), # Mesh topology
    [<span style="color: #ba2121;">"batch"</span>, <span style="color: #ba2121;">"model"</span>], # named mesh axes
    devices<span style="color: #666666;">=</span>keras<span style="color: #666666;">.</span>distribution<span style="color: #666666;">.</span>list_devices() # actual accelerators
)
<span style="color: #0d904f;"># Model config
</span>layout_map <span style="color: #666666;">=</span> keras<span style="color: #666666;">.</span>distribution<span style="color: #666666;">.</span>LayoutMap(device_mesh)
layout_map[<span style="color: #ba2121;">"token_embedding/embeddings"</span>] <span style="color: #666666;">=</span> (None, <span style="color: #ba2121;">"model"</span>)
layout_map[<span style="color: #ba2121;">"decoder_block.*attention.*(query|key|value).*kernel"</span>] <span style="color: #666666;">=</span> (
    None, <span style="color: #ba2121;">"model"</span>, None)
layout_map[<span style="color: #ba2121;">"decoder_block.*attention_output.*kernel"</span>] <span style="color: #666666;">=</span> (
    None, None, <span style="color: #ba2121;">"model"</span>)
layout_map[<span style="color: #ba2121;">"decoder_block.*ffw_gating.*kernel"</span>] <span style="color: #666666;">=</span> (<span style="color: #ba2121;">"model"</span>, None)
layout_map[<span style="color: #ba2121;">"decoder_block.*ffw_linear.*kernel"</span>] <span style="color: #666666;">=</span> (None, <span style="color: #ba2121;">"model"</span>)
<span style="color: #0d904f;">#</span> <span style="color: #0d904f;">Set the model config <span>and</span> load the model</span>
model_parallel <span style="color: #666666;">=</span> keras<span style="color: #666666;">.</span>distribution<span style="color: #666666;">.</span>ModelParallel(
    device_mesh, layout_map, batch_dim_name<span style="color: #666666;">=</span><span style="color: #ba2121;">"batch"</span>)
keras<span style="color: #666666;">.</span>distribution<span style="color: #666666;">.</span>set_distribution(model_parallel)
gemma_lm <span style="color: #666666;">=</span> keras_nlp<span style="color: #666666;">.</span>models<span style="color: #666666;">.</span>GemmaCausalLM<span style="color: #666666;">.</span>from_preset(<span style="color: #ba2121;">"gemma_7b_en"</span>)
<span style="color: #0d904f;"># Ready<span>:</span> you can now train with model<span>.</span>fit() <span>or</span> generate text with generate()</span></span>
</pre></div>
<p>This code snippet sets up the 8 accelerators as a 1 x 8 matrix whose two dimensions are called “batch” and “model”. Model weights are sharded on the “model” dimension, here split across the 8 accelerators, while data batches are not partitioned since the “batch” dimension is 1.</p>
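<p>As a plain-Python illustration of what a layout entry such as (None, "model") means (no Keras required; the weight shapes below are illustrative, not Gemma's actual dimensions): an axis mapped to None stays whole, while an axis mapped to "model" is split across the 8 devices on that mesh dimension.</p>

```python
def shard_shape(shape, layout, mesh):
    """Per-device shard shape for a weight, given a LayoutMap-style layout.

    layout entries are None (axis not split) or a mesh-axis name;
    mesh maps axis names to device counts, e.g. {"batch": 1, "model": 8}.
    """
    assert len(shape) == len(layout)
    return tuple(
        dim if axis is None else dim // mesh[axis]
        for dim, axis in zip(shape, layout)
    )

mesh = {"batch": 1, "model": 8}
# An embedding table with layout (None, "model"): the second axis is split 8 ways,
# so each device holds 1/8 of the columns.
print(shard_shape((256000, 3072), (None, "model"), mesh))
```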
<p>Try it out here: <a href="https://www.kaggle.com/code/nilaychauhan/keras-gemma-distributed-finetuning-and-inference" target="_blank">Fine-tune Gemma models on multiple GPUs/TPUs.</a></p>
<div style="text-align: left;"><br /></div>
<h3>What’s Next</h3>
<p style="text-align: left;">We will soon publish a guide showing how to correctly partition a Transformer model and how to write the 6 lines of partitioning setup above. It is not very long, but it would not fit in this post.</p>
<p>You may have noticed that layer partitioning is defined through regexes on layer names. You can check layer names with the following code snippet; we ran it to construct the LayoutMap above.</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;"><span style="color: #0d904f;"># This is <span>for</span> the first Transformer block only,
# but they all have the same structure</span>
tlayer <span style="color: #666666;">=</span> gemma_lm<span style="color: #666666;">.</span>backbone<span style="color: #666666;">.</span>get_layer('decoder_block_0')
<span style="color: green;">for</span> variable in tlayer<span style="color: #666666;">.</span>weights<span style="color: #666666;">:</span>
print(f'{variable<span style="color: #666666;">.</span>path<span style="color: #666666;">:<58</span>} {str(variable<span style="color: #666666;">.</span>shape)<span style="color: #666666;">:<16</span>}')</span>
</pre></div>
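<p>To make the regex mechanism concrete, here is a hedged stand-alone sketch: the paths are representative of what the loop above prints, and Python's re.search stands in for how the LayoutMap keys are matched against variable paths.</p>

```python
import re

# A subset of the LayoutMap keys from the distribution setup above.
layout_map = {
    r"token_embedding/embeddings": (None, "model"),
    r"decoder_block.*attention.*(query|key|value).*kernel": (None, "model", None),
    r"decoder_block.*ffw_linear.*kernel": (None, "model"),
}

def layout_for(path):
    """Return the layout of the first key whose regex matches the variable path."""
    for pattern, layout in layout_map.items():
        if re.search(pattern, path):
            return layout
    return None  # unmatched weights stay replicated

# A representative path, shaped like what the print loop above emits:
print(layout_for("decoder_block_0/attention/query/kernel"))
```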
<p>Full <a href="https://arxiv.org/abs/2105.04663" target="_blank">GSPMD</a> model parallelism works here with just a few partitioning hints because Keras passes these settings to the powerful <a href="https://github.com/openxla/xla" target="_blank">XLA compiler</a> which figures out all the other details of the distributed computation.</p><br />
<p>We hope you will enjoy playing with Gemma models. Here is also an <a href="https://ai.google.dev/gemma/docs/lora_tuning" target="_blank">instruction-tuning tutorial</a> that you might find useful. And by the way, if you want to share your fine-tuned weights with the community, the Kaggle Model Hub now supports user-tuned weights uploads. Head to the <a href="https://www.kaggle.com/models/google/gemma" target="_blank">model page for Gemma</a> models on Kaggle and see what others have already created!</p>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-59882139432355579192024-02-15T09:00:00.000-08:002024-02-16T11:35:35.563-08:00Build with Gemini models in Project IDX<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoBHe548zp1Muxwd9kePtoW5J5805d2sytry2bIeeBzvLzvWFuSfGS3gEtZw4oSEInyuh53Dxt8J5dg03nEGYJQXFPNehH2cb1Q7B12oIUEn0HCyZO_e0vw6PfMzcBC4Qv57Z00Z-R2aBD85NSTcRkrmT8DDn_WnnYr3OqRwh7VRAMSqA87y0mfIL5Qpk/s1600/Gemini-Project-IDX-Social.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoBHe548zp1Muxwd9kePtoW5J5805d2sytry2bIeeBzvLzvWFuSfGS3gEtZw4oSEInyuh53Dxt8J5dg03nEGYJQXFPNehH2cb1Q7B12oIUEn0HCyZO_e0vw6PfMzcBC4Qv57Z00Z-R2aBD85NSTcRkrmT8DDn_WnnYr3OqRwh7VRAMSqA87y0mfIL5Qpk/s1600/Gemini-Project-IDX-Social.png" style="display: none;" />
<em>Posted by Ali Satter – AI Lead, Roman Nurik – Design Lead, Kaushik Sathupadi and Jamal Carvalho – AI Engineers</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijR23pzfx4okgIukSiuZzEXAhV2p3MjSEydYlZ7btkJJVKSsHEa0RDGt7kMt70JT1v1EC3_dFYQB-2en98uPhyphenhyphen47PEBxan3lQ_ic2JWQgwamhyl3niPaAJ1QJQlDMZx0uDLaiFql5FnX4rYIEZVxUsRNxRZvz8JGb6Lta6AIxxWKxh1HoSblv_x7Hpbj8/s1600/IDX_Gemini_Blog_Header_B.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijR23pzfx4okgIukSiuZzEXAhV2p3MjSEydYlZ7btkJJVKSsHEa0RDGt7kMt70JT1v1EC3_dFYQB-2en98uPhyphenhyphen47PEBxan3lQ_ic2JWQgwamhyl3niPaAJ1QJQlDMZx0uDLaiFql5FnX4rYIEZVxUsRNxRZvz8JGb6Lta6AIxxWKxh1HoSblv_x7Hpbj8/s1600/IDX_Gemini_Blog_Header_B.png" /></a>
<a name='more'></a><p></p>
<p>A few weeks ago, we <a href="https://idx.dev/blog/article/introducing-android-emulators-ios-simulators-and-other-product-updates-from" target="_blank">announced</a> a series of product updates to <a href="https://idx.dev/" target="_blank">Project IDX</a> to help streamline and simplify full-stack, multiplatform software development. This week, we’re excited to share how Project IDX uses <a href="https://blog.google/technology/ai/google-gemini-ai/" target="_blank">Gemini models</a> to provide you with AI features to further speed up and refine your end-to-end developer workflow.</p>
<p>Project IDX launched with support for AI-powered code completion, an assistive chatbot, and contextual code actions like “add comments” and “explain this code” to help you write high-quality code faster. Since launch, and thanks to your feedback, we’ve been working hard to add new AI functionality to help boost your productivity even more. </p><br />
<h3>Work faster with inline AI assistance</h3>
<p>You can now get <a href="https://developers.google.com/idx/guides/idx-ai-inline" target="_blank">inline AI assistance</a> inside any file by pressing Cmd/Ctrl + I. Simply describe the changes you want to make to your code and IDX inline AI assistance will provide real-time error correction, code suggestions, and auto-completion in your code.</p>
<p>We integrated these AI enhancements directly into Project IDX’s centralized workspace to equip you with the necessary tools and resources for full-stack app development where and when you need them. From setting up your workspace to testing your app, IDX AI assistance helps accelerate and improve your workflow, ensuring that your end-to-end development experience is faster, easier, and higher quality.</p>
<p>For example, let’s say you want to add an authenticated API endpoint to your server. You can tell IDX AI to write the code necessary to enable secure task management using Firebase Authentication and Cloud Firestore. Given an input prompt, IDX AI assistance can write the code to construct the route, determine which APIs to use to verify the token, and save the data to the database. Instead of writing boilerplate code, you can focus on higher-level design and problem solving.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="moving image illustrating the use of an input prompt in Project IDX to generate corresponding code" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuutBMl3NHoTLnmA3AfY-qtOk3RiOLlSf7c_KN1sQP04JgCwX6LK8BuSDJmocxq1yRBBpbW5JbZa0Z8Xdi6_vu82KXzcY19UTQDhNg0FMm2VWVSvlegDRwUwvbySVl0YJR_5iiz8TKaiywgTtkEwgC9Fx61PzKUWtk9mjoDqveM87y6DlgPf0wODG756c/s1600/admin-gif-2.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"> Input prompt for reference:<em> Create a POST endpoint named /tasks. Get the ID Token from a cookie named _session. Verify this token with the Firebase Admin SDK. Use the UID property to assign the item to the user. Then save a task item with a server timestamp for createdAt to the Firestore database using the admin SDK.</em></td></tr></tbody></table></image><br />
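<p>For flavor, the logic that prompt asks for can be sketched in a few lines. This is an illustrative outline, not actual IDX output: the verify and save callables stand in for the Firebase Admin SDK's token verification and a Firestore write, and all names here are hypothetical.</p>

```python
import time

def create_task(cookies: dict, body: dict, verify, save) -> tuple[int, dict]:
    """POST /tasks handler sketch: authenticate via a session cookie, then persist.

    verify(token) returns the decoded claims dict (raising on an invalid token);
    save(task) persists the task. Both are injected so the sketch stays SDK-agnostic.
    """
    token = cookies.get("_session")
    if token is None:
        return 401, {"error": "missing session cookie"}
    try:
        claims = verify(token)  # e.g. the Admin SDK's ID-token verification
    except Exception:
        return 401, {"error": "invalid token"}
    task = {**body, "uid": claims["uid"], "createdAt": time.time()}  # server timestamp
    save(task)
    return 201, task

# Usage with fakes standing in for the real SDKs:
saved = []
status, task = create_task(
    {"_session": "tok"}, {"title": "demo"},
    verify=lambda t: {"uid": "user-1"},
    save=saved.append,
)
print(status, task["uid"])
```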
<p>Then, let's say you want to clean up your code a bit to improve its quality, readability, and maintainability. IDX AI assistance can help you quickly and easily refactor your code, so you can get right into optimizing your work without the hassle of manual refactoring.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="moving image illustrating the use of input prompt: Refactor to use Node’s promise API." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh74vjolmWXLRkE1hgfNYV6SjRjHAFVTX_DXagmZ8AW4X-Homr24QmwD2Q2Y2jOxtEIJfkH0qVzjDY3ai4AR6CdaSEVIGyy1l1fJkJhllNarihdishx36aK-YZ6voqs0S-aSc8O54w6Cp1gfglZq3Ep8BW92hVwapajVHwqvThBTvmZ4_ePftliRJNHKCg/s1600/image5.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;">Input prompt for reference:<em> Refactor to use Node’s promise API.</em></td></tr></tbody></table></image><br />
<p>And, as you wrap up your project, IDX AI can help you test and debug your code to make sure your application is running smoothly before deployment. Tell IDX AI assistance to write you a unit test for a function to ensure it’s working properly, saving you time and effort as you inspect the quality of your app.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="moving image illustrating the use of input prompt: Create a unit test for this function" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLN40U451ph68WJNoJ5pOOO_IOJ5EaMLQJB6DrpV1UJWCLQPjLGmL-iRcmeHccCEPjdiRgDDpkyh-xLSjBNJZo04tRLA0hCOIRcUwSp0qF_NZ7LUQ-qur8m6x6kMLMF9l9sFoUG6F64g1Jf-ngQw4TKzLAiqPNhOaWbrsbLZpX2LFRpZ0FNvjA5e5UVqw/s1600/unit-test-gif.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;">Input prompt for reference:<em> Create a unit test for this function</em></td></tr></tbody></table></image><br />
<h3>Easily add AI features with the Gemini API template</h3>
<p>We’re also simplifying the process of building with the Gemini API with <a href="https://developers.google.com/idx/guides/build-gemini-api-app" target="_blank">Project IDX’s new Gemini API template</a>. The Gemini API template uses the Gemini Pro model to embed AI-powered features into your applications without additional configuration on your end, so you can get started working with the Gemini API quickly and easily. There's even an option to use the Gemini API via the popular <a href="https://www.langchain.com/" target="_blank">LangChain</a> framework to simplify the process of building LLM-powered apps.</p>
<p>The Gemini API template is multimodal, meaning it can provide context-aware prompt output for a range of input modalities, including images, text, and, of course, code. This can help you add features like conversational interfaces, summarization of user reviews, translation, and automatic image caption creation. </p>
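<p>At the REST level, a multimodal request mixes text and media parts in a single payload. Here is a rough sketch of assembling a generateContent-style body; the field shape follows the public Gemini API examples, and the image bytes below are a placeholder, not a real image:</p>

```python
import base64

def build_generate_content_request(prompt: str, image_bytes: bytes,
                                   mime_type: str = "image/png") -> dict:
    """Assemble a generateContent-style body with one text part and one image part."""
    return {
        "contents": [{
            "parts": [
                {"text": prompt},
                {"inline_data": {
                    "mime_type": mime_type,
                    # Binary media is sent base64-encoded in the JSON payload.
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
            ],
        }],
    }

body = build_generate_content_request("What could I bake to get this?", b"placeholder-bytes")
print(len(body["contents"][0]["parts"]))
```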
<p>To demonstrate its functionality, we pre-configured the Gemini API template with ‘Baking with the Gemini API’, a recipe builder application that, using the Gemini model’s multimodal capabilities, can reverse-engineer possible recipes for baked goods from just a picture. </p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="moving image illustrating the use of an input prompt in Project IDX to generate corresponding code" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0J6unrK0XOIMKCudK7HZYA2KLjHF5OEYW2aNyXIj13oIC0uot-TX0g6Kha47uQq7SjcpmlAr7MiuW7QiR80yi5cfAG80uDzsWnexvT9O1OTvkD96JuLfviZ572miW80MmBI7xefQpRmWlRDglmU9j0YD54KjqDh_kzBkFd1HY1l_uYsyErJDLo9cfGBg/s1600/image1.gif" style="margins: auto; width: auto;" /></td></tr></tbody></table></image><br />
<p>But this recipe builder is just one example of the Gemini API template in action – with support for different input modalities and context-aware output generation, you can use IDX’s Gemini API template to create a myriad of innovative and impactful applications that deliver AI-enhanced experiences to your users.</p><br />
<h2>Stay tuned for more AI updates </h2>
<p>These updates are a continuation of our efforts to leverage Google’s AI innovations for Project IDX, so make sure to keep an eye out for more announcements to come, including the expansion of AI in IDX to more than 150 countries/regions in the coming weeks.</p>
<p>Thank you for your continued support and engagement – please keep the feedback coming by <a href="https://issuetracker.google.com/issues/new?component=1379083&template=1836320" target="_blank">filing bugs</a> and <a href="https://idx.uservoice.com/forums/953956-general" target="_blank">feature requests</a>. For walkthroughs and more information on all the features mentioned above, check out our documentation. If you haven’t already, <a href="https://idx.dev/" target="_blank">visit our website</a> to sign up to try Project IDX and join us on our journey. Also, be sure to check out our new <a href="https://idx.dev/blog" target="_blank">Project IDX Blog</a> for the latest product announcements and updates from the team. </p>
<p>We can’t wait to see what you create with Project IDX!</p>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-39204929249419547732024-02-15T06:50:00.000-08:002024-02-15T12:47:59.174-08:00Gemini 1.5: Our next-generation model, now available for Private Preview in Google AI Studio<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkJDyVXqxkwEYO-ze0PBiId2CSuNLfS5eLMGXyzzlPQIiKAwDBO-W43FU5ev0XA6bP9p32HqlLwRq4EIUcKHsXtcZFVNdn_ATCedaNgAAACh0L7M5d6tBnRX_EUPx5MkEuqsXkPzUJDwVBXh9rqt0sJ18AjEzf7N9G7OeHpsmXA_0WkPIGl0XKVt5x7lc/s1600/Gemini-Pro-Social.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkJDyVXqxkwEYO-ze0PBiId2CSuNLfS5eLMGXyzzlPQIiKAwDBO-W43FU5ev0XA6bP9p32HqlLwRq4EIUcKHsXtcZFVNdn_ATCedaNgAAACh0L7M5d6tBnRX_EUPx5MkEuqsXkPzUJDwVBXh9rqt0sJ18AjEzf7N9G7OeHpsmXA_0WkPIGl0XKVt5x7lc/s1600/Gemini-Pro-Social.png" style="display: none;" />
<em>Posted by Jaclyn Konzelmann and Wiktor Gworek – Google Labs</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDPAk6bwZdndo7lMEgewx_I2L7rRSqmGvgpxqQnz19k8JJE2rg9k6h5y-pKyHPrFDc7zyKMpQbtsVnYdlzzJMFgxEB1Y6dT_MNLSU_xL9-ZxG7jNhDjqM7BNvSbeNqZn55Y7oJEiVrZ4Uu6JlsiD-d89HKpist6ekZPBW-_5DNYmPwUdOUdOKO3Xb3myo/s1600/Gemini-Pro-GfD.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDPAk6bwZdndo7lMEgewx_I2L7rRSqmGvgpxqQnz19k8JJE2rg9k6h5y-pKyHPrFDc7zyKMpQbtsVnYdlzzJMFgxEB1Y6dT_MNLSU_xL9-ZxG7jNhDjqM7BNvSbeNqZn55Y7oJEiVrZ4Uu6JlsiD-d89HKpist6ekZPBW-_5DNYmPwUdOUdOKO3Xb3myo/s1600/Gemini-Pro-GfD.png" /></a>
<a name='more'></a><p></p>
<p>Last week, we <a href="https://blog.google/technology/ai/google-gemini-update-sundar-pichai-2024/" target="_blank">released</a> Gemini 1.0 Ultra in Gemini Advanced. You can try it out now by signing up for a <a href="http://gemini.google.com/advanced" target="_blank">Gemini Advanced subscription</a>. The 1.0 Ultra model, accessible via the Gemini API, has seen a lot of interest and continues to roll out to select developers and partners in <a href="https://aistudio.google.com/?utm_source=agd&utm_medium=referral&utm_campaign=blog-feb&utm_content=1" target="_blank">Google AI Studio</a>.</p>
<p>Today, we’re also excited to introduce our next-generation <a href="https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024?utm_source=devblog&utm_medium=referral&utm_campaign=gemini24&utm_content=" target="_blank">Gemini 1.5 model</a>, which uses a new Mixture-of-Experts (MoE) approach to improve efficiency. It routes your request to a group of smaller “expert” neural networks so responses are faster and higher quality.</p>
<p>Developers can <a href="https://aistudio.google.com/app/waitlist/97445851?utm_source=agd&utm_medium=referral&utm_campaign=blog-feb&utm_content=" target="_blank">sign up for our Private Preview</a> of <b>Gemini 1.5 Pro</b>, our mid-sized multimodal model optimized for scaling across a wide-range of tasks. The model features a new, experimental 1 million token context window, and will be available to try out in <a href="https://aistudio.google.com/app/prompts/new_chat/?utm_source=agd&utm_medium=referral&utm_campaign=blog-feb&utm_content=" target="_blank">Google AI Studio</a>. Google AI Studio is the fastest way to build with Gemini models and enables developers to easily integrate the <a href="https://ai.google.dev/?utm_source=agd&utm_medium=referral&utm_campaign=blog-feb&utm_content=" target="_blank">Gemini API</a> in their applications. It’s available in 38 languages across <a href="https://ai.google.dev/available_regions?utm_source=agd&utm_medium=referral&utm_campaign=blog-feb&utm_content=" target="_blank">180+ countries and territories</a>.</p><br />
<h3>1,000,000 tokens: Unlocking new use cases for developers</h3>
<p>Before today, the largest context window in the world for a publicly available large language model was 200,000 tokens. We’ve been able to significantly increase this — running up to 1 million tokens consistently, achieving the longest context window of any large-scale foundation model. Gemini 1.5 Pro will come with a 128,000 token context window by default, but today’s Private Preview will have access to the experimental 1 million token context window.</p>
<p>We’re excited about the new possibilities that larger context windows enable. You can directly upload large PDFs, code repositories, or even lengthy videos as prompts in Google AI Studio. Gemini 1.5 Pro will then reason across modalities and output text.</p>
<ol><blockquote>
<h4><li>Upload multiple files and ask questions</li></h4>
<p>We’ve added the ability for developers to upload multiple files, like PDFs, and ask questions in Google AI Studio. The larger context window allows the model to take in more information — making the output more consistent, relevant and useful. With this 1 million token context window, we’ve been able to load in over 700,000 words of text in one go.</p>
<image><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><img alt="moving image illustrating how Gemini 1.5 Pro can find and reason from particular quotes across the Apollo 11 PDF transcript." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-MQwDZRdl4zvboMACxPkNNN6hkQbxt5SBb7I5u2f02d9Jz93Ht4DBvH_hR886cXo0ok1ybKhkFsLy7qZmTpDD3UxEw4oFpYp0Q3FRxL6v_mVRZp9aHtjzPJ9CjRGlls0a0tyWlgPg8-N_ZeerdJ440_WpyLMeGhBwlwFslL62wDNjDYlA88MX1JkRK4w/s1600/Apollo_version1.gif" style="margin-left: auto; margin-right: auto; margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption"><em>Gemini 1.5 Pro can find and reason from particular quotes across the Apollo 11 PDF transcript. [Video sped up for demo purposes]</em></td></tr></tbody></table></image><br />
<h4><li>Query an entire code repository</li></h4>
<p>The large context window also enables a deep analysis of an entire codebase, helping Gemini models grasp complex relationships, patterns, and understanding of code. A developer could upload a new codebase directly from their computer or via Google Drive, and use the model to onboard quickly and gain an understanding of the code.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="moving image illustrating how Gemini 1.5 Pro can help developers boost productivity when learning a new codebase." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaALQHZgZ-xNH7xRX30K_3ObGnE5DvzE3Er1WdmdsPudNCJgWsNoSOxtMOgeUQdWJeloSdlx-qErNOmMti_ECWmTW9JcynN60XK5CSo7MdmiF_MZFv7zUOWyAgmTmBUxbD5yTiGsxjwgj3o0hQg8iOFMubpq2araxQq3zoyJqS8jCUyK6BuyLXbM67oeo/s1600/image2.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Gemini 1.5 Pro can help developers boost productivity when learning a new codebase. [Video sped up for demo purposes]</em></td></tr></tbody></table></image><br />
<h4><li>Add a full length video</li></h4>
<p>Gemini 1.5 Pro can also reason across up to 1 hour of video. When you attach a video, Google AI Studio breaks it down into thousands of frames (without audio), and then you can perform highly sophisticated reasoning and problem-solving tasks since the Gemini models are multimodal.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="moving image illustrating how Gemini 1.5 Pro can perform reasoning and problem-solving tasks across video and other visual inputs." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhaYHSlDsZ1v8zPEN0GAtVbji1WjAj7Yc-Wm3WfaL13oBlHOsZd5p4vUYd-Blq0FGJFThqrS8svCS3CriX8Yi8oZ6HVGSqeg4d321GwmPTgiER7liX0wGFgfYhoflyliXLFc_2CBNSaRIDBxVBZBCgz6oKjwuxVNOh8b6T7Dl132ZE3DlyLpncsY787ZI/s1600/image3.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Gemini 1.5 Pro can perform reasoning and problem-solving tasks across video and other visual inputs. [Video sped up for demo purposes]</em></td></tr></tbody></table></image>
</blockquote></ol><br />
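<p>The 700,000-words figure above works out to roughly 0.7 English words per token. Here is a tiny sketch for sizing an input against a context window; the ratio is a rough heuristic, not an official tokenizer:</p>

```python
def estimated_tokens(word_count: int, words_per_token: float = 0.7) -> int:
    """Rough token estimate from a word count (heuristic, not a real tokenizer)."""
    return round(word_count / words_per_token)

def fits_in_context(word_count: int, context_tokens: int = 1_000_000) -> bool:
    """True if the estimated token count fits within the context window."""
    return estimated_tokens(word_count) <= context_tokens

# 700k words of text comes out at about the 1 million token mark:
print(estimated_tokens(700_000), fits_in_context(700_000))
```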
<h3>More ways for developers to build with Gemini models</h3>
<p>In addition to bringing you the latest model innovations, we’re also making it easier for you to build with Gemini:</p>
<ul><blockquote>
<li><b>Easy tuning.</b> Provide a set of examples, and you can customize Gemini for your specific needs in minutes from inside Google AI Studio. This feature rolls out in the next few days. </li></blockquote><blockquote>
<li><b>New developer surfaces</b>. Integrate the Gemini API to build new AI-powered features today with <a href="https://firebase.blog/posts/2024/02/supercharge-apps-firebase-gemini" target="_blank">new Firebase Extensions</a>, across your development workspace in <a href="https://idx.dev/blog/article/build-with-gemini-in-idx" target="_blank">Project IDX</a>, or with our newly released <a href="https://medium.com/flutter/harness-gemini-in-your-dart-and-flutter-apps-00573e560381" target="_blank">Google AI Dart SDK</a>. </li></blockquote><blockquote>
<li><b><a href="https://ai.google.dev/pricing" target="_blank">Lower pricing for Gemini 1.0 Pro</a>.</b> We’re also updating the 1.0 Pro model, which offers a good balance of cost and performance for many AI tasks. Today’s stable version is priced 50% less for text inputs and 25% less for outputs than previously announced. Pay-as-you-go plans for AI Studio are also coming soon.</li>
</blockquote></ul><br />
<p>Since December, developers of all sizes have been building with Gemini models, and we’re excited to turn cutting edge research into early developer products in <a href="https://aistudio.google.com/app/prompts/new_chat/?utm_source=agd&utm_medium=referral&utm_campaign=blog-feb&utm_content=" target="_blank">Google AI Studio</a>. Expect some latency in this preview version due to the experimental nature of the large context window feature, but we’re excited to start a phased rollout as we continue to fine-tune the model and get your feedback. We hope you enjoy experimenting with it early on, like we have.</p>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-46220497232158433862024-02-14T14:00:00.000-08:002024-02-14T14:00:00.120-08:00Google Pay - Enabling liability shift for eligible Visa device token transactions globally<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZ89gNt-zlR4gBbqVq1dRnh-x3Qd-Uvx6CES18_0hPsHo00hSRQ6zEvZwlroIiv8XoQ8RiFG2rStzaVoSKFHyQlqNCmeHRHuNpZzTZ8YTfU3f-tPyrc9uVc9p1qLi8sr7-ctncMgi1KPPrvqqHeykWyof7Hb-gTRq9DdlMyrMI3mAWIG69s9TXSym0oAQ/s1600/SOCIAL-Google-Pay.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZ89gNt-zlR4gBbqVq1dRnh-x3Qd-Uvx6CES18_0hPsHo00hSRQ6zEvZwlroIiv8XoQ8RiFG2rStzaVoSKFHyQlqNCmeHRHuNpZzTZ8YTfU3f-tPyrc9uVc9p1qLi8sr7-ctncMgi1KPPrvqqHeykWyof7Hb-gTRq9DdlMyrMI3mAWIG69s9TXSym0oAQ/s1600/SOCIAL-Google-Pay.png" style="display: none;" />
<em>Posted by Dominik Mengelt– Developer Relations Engineer, Payments and Florin Modrea - Product Solutions Engineer, Google Pay</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5J1YESLrmFkvVwbc0WLCA-afV5JXH1giJDooJL4n607nxWQCl8kZG0EpHZmtD0o5C3M59GpwYzaYPeEvH4gUx59WRZcjTMs5B99NHYuFLTjUPGRRmBHYKwpTiW5FGbHH7SBS-c7p8QYoLiEUzo_KHXK5fKkd6bNMzWo8mFVsBzdQ5MvSmpNtpa0Z3J0s/s1600/header-Enabling-liability-shift-for-all-Visa-device-token-transactions-globally.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5J1YESLrmFkvVwbc0WLCA-afV5JXH1giJDooJL4n607nxWQCl8kZG0EpHZmtD0o5C3M59GpwYzaYPeEvH4gUx59WRZcjTMs5B99NHYuFLTjUPGRRmBHYKwpTiW5FGbHH7SBS-c7p8QYoLiEUzo_KHXK5fKkd6bNMzWo8mFVsBzdQ5MvSmpNtpa0Z3J0s/s1600/header-Enabling-liability-shift-for-all-Visa-device-token-transactions-globally.png" /></a>
<a name='more'></a><p></p>
<p>We are excited to announce the general availability <sup>[1]</sup> of liability shift for Visa device tokens for Google Pay.</p>
<p>For Mastercard device tokens, the liability already lies with the issuing bank; for Visa, liability shift previously applied only to eligible device tokens from issuing banks in the European region.</p><br />
<h2>What is liability shift?</h2>
<p>If liability shift is granted for a transaction, the responsibility for covering losses from fraudulent transactions moves from the merchant to the issuing bank. With this change, qualifying Google Pay Visa transactions made with a device token will benefit from this liability shift.</p><br />
<h2>How to know if the liability was shifted to the issuing bank for my transaction?</h2>
<p>Eligible Visa transactions will carry an <span style="font-family: courier;">eciIndicator</span> value of <span style="font-family: courier;">05</span>. PSPs can access the eciIndicator value after decrypting the payment method token. Merchants can check with their PSPs to get a report on liability-shift-eligible transactions. </p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;">{
  <span style="color: #0d904f;">"gatewayMerchantId"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"some-merchant-id"</span>,
  <span style="color: #0d904f;">"messageExpiration"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"1561533871082"</span>,
  <span style="color: #0d904f;">"messageId"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"AH2Ejtc8qBlP_MCAV0jJG7Er"</span>,
  <span style="color: #0d904f;">"paymentMethod"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"CARD"</span>,
  <span style="color: #0d904f;">"paymentMethodDetails"</span><span style="color: #666666;">:</span> {
    <span style="color: #0d904f;">"expirationYear"</span><span style="color: #666666;">:</span> <span style="color: red;">2028</span>,
    <span style="color: #0d904f;">"expirationMonth"</span><span style="color: #666666;">:</span> <span style="color: red;">12</span>,
    <span style="color: #0d904f;">"pan"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"4895370012003478"</span>,
    <span style="color: #0d904f;">"authMethod"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"CRYPTOGRAM_3DS"</span>,
    <span style="color: #0d904f;">"eciIndicator"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"05"</span>,
    <span style="color: #0d904f;">"cryptogram"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"AgAAAAAABk4DWZ4C28yUQAAAAAA="</span>
  }
}</span>
</pre></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>A decrypted payment token for a Google Pay Visa transaction with an eciIndicator value of 05 (liability shifted)</em></td></tr></tbody></table>
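<p>Pulling this together, here is a hedged sketch of a post-decryption check. The mapping mirrors the eciIndicator values described in this post; treat it as illustrative rather than an exhaustive rule set.</p>

```python
def liable_party(card_network: str, eci_indicator: str) -> str:
    """Map (card network, eciIndicator) to the liable party, per this post."""
    issuer_liable = {
        ("VISA", "05"),        # eligible Visa device-token transactions
        ("MASTERCARD", "02"),
    }
    if (card_network.upper(), eci_indicator) in issuer_liable:
        return "Card issuer"
    return "Merchant/Acquirer"

# Field names follow the decrypted payment token shown above:
details = {"authMethod": "CRYPTOGRAM_3DS", "eciIndicator": "05"}
print(liable_party("VISA", details["eciIndicator"]))
```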
<p>Check out the following table for a full list of <span style="font-family: courier;">eciIndicator </span>values we return for our Visa and Mastercard device token transactions:</p>
<div align="left">
<table style="border-collapse: collapse; border: none; margin-right: calc(0%); width: 100%;">
<tbody>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><strong><span style="font-family: courier; font-size: 11pt;"> eciIndicator</span></strong><strong><span style="font-size: 11pt;"> value</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><strong><span style="font-size: 11pt;"> Card Network</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><strong><span style="font-size: 11pt;"> Liable Party</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><strong><span style="font-size: 11pt;"> authMethod</span></strong></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> "" (empty)</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Mastercard</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Merchant/Acquirer</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> CRYPTOGRAM_3DS</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> "02"</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Mastercard</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Card issuer</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> CRYPTOGRAM_3DS</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> "06"</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Mastercard</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Merchant/Acquirer</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> CRYPTOGRAM_3DS</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> "05"</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Visa</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Card issuer</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> CRYPTOGRAM_3DS</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> "07"</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Visa</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Merchant/Acquirer</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> CRYPTOGRAM_3DS</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> "" (empty)</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Other networks</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> Merchant/Acquirer</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="font-size: 11pt;"> CRYPTOGRAM_3DS</span></p>
</td>
</tr>
</tbody>
</table>
</div>
<p>Any other <span style="font-family: courier;">eciIndicator</span> values for Visa and Mastercard that aren't listed in this table won't be returned.</p><br />
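<p>As a sketch, the table above can be mirrored in a small helper when you process a decrypted token. This function is purely illustrative (it is not part of the Google Pay API); it assumes the <span style="font-family: courier;">eciIndicator</span> and card network values come from a decrypted payment token like the one shown earlier:</p>

```javascript
// Hypothetical helper mirroring the eciIndicator table above: given the
// card network and the eciIndicator from a decrypted Google Pay token,
// return the party liable for the transaction.
function liableParty(cardNetwork, eciIndicator) {
  if (cardNetwork === 'MASTERCARD') {
    // "02" shifts liability to the card issuer; "" and "06" do not.
    return eciIndicator === '02' ? 'Card issuer' : 'Merchant/Acquirer';
  }
  if (cardNetwork === 'VISA') {
    // "05" shifts liability to the card issuer; "07" does not.
    return eciIndicator === '05' ? 'Card issuer' : 'Merchant/Acquirer';
  }
  // Other networks: liability stays with the merchant/acquirer.
  return 'Merchant/Acquirer';
}
```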
<h2>How to enroll</h2>
<p>Merchants may opt in from within the <a href="https://pay.google.com/business/console" target="_blank">Google Pay & Wallet console</a> starting this month. Merchants in Europe (who already benefit from liability shift) do not need to take any action, as they will be enrolled automatically.</p>
<p>For your Google Pay transactions to qualify for liability shift, the following API parameters are required:</p>
<div align="left">
<table style="border-collapse: collapse; border: none;">
<tbody>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 22.1455%;">
<p><span style="font-family: courier; font-size: 11pt;">totalPrice</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 77.6036%;">
<p><span style="font-size: 11pt;">Make sure that </span><span style="font-family: courier; font-size: 11pt;">totalPrice</span><span style="font-size: 11pt;"> matches the amount that you charge the user. Transactions with </span><span style="font-family: courier; font-size: 11pt;">totalPrice=0</span><span style="font-size: 11pt;"> will not qualify for liability shift to the issuing bank.</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 22.1455%;">
<p><span style="font-family: courier; font-size: 11pt;">totalPriceStatus</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 77.6036%;">
<p><span style="font-size: 11pt;">Valid values are </span><span style="font-family: courier; font-size: 11pt;">FINAL</span><span style="font-size: 11pt;"> or </span><span style="font-family: courier; font-size: 11pt;">ESTIMATED</span><span style="font-size: 11pt;">.</span></p><p><span style="font-size: 11pt;">Transactions with the </span><span style="font-family: courier; font-size: 11pt;">totalPriceStatus</span><span style="font-size: 11pt;"> value of </span><span style="font-family: courier; font-size: 11pt;">NOT_CURRENTLY_KNOWN</span><span style="font-size: 11pt;"> do not qualify for liability shift.</span></p>
</td>
</tr>
</tbody>
</table>
</div><br />
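<p>Putting the two requirements together, a qualifying <span style="font-family: courier;">transactionInfo</span> object in your <span style="font-family: courier;">PaymentDataRequest</span> might look like the following sketch. The amount, currency, and country values are placeholders for illustration only:</p>

```javascript
// Illustrative transactionInfo for a Google Pay PaymentDataRequest.
// totalPrice must be non-zero and match the amount you actually charge,
// and totalPriceStatus must be FINAL or ESTIMATED; NOT_CURRENTLY_KNOWN
// does not qualify for liability shift.
const transactionInfo = {
  totalPriceStatus: 'FINAL',
  totalPrice: '25.00',   // placeholder amount; must equal the charged amount
  currencyCode: 'USD',   // placeholder currency
  countryCode: 'US',     // placeholder country
};
```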
<h2>Not all transactions get liability shift </h2><br />
<h4>Ineligible merchants</h4>
<p>In the US, the following MCC codes are excluded from getting liability shift:</p>
<div align="left">
<table style="border-collapse: collapse; border: none;">
<tbody>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 13.739%;">
<p><span style="font-size: 11pt;">4829</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 86.0101%;">
<p><span style="font-size: 11pt;">Money Transfer</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 13.739%;">
<p><span style="font-size: 11pt;">5967</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 86.0101%;">
<p><span style="font-size: 11pt;">Direct Marketing – Inbound Teleservices Merchant</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 13.739%;">
<p><span style="font-size: 11pt;">6051</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 86.0101%;">
<p><span style="font-size: 11pt;">Non-Financial Institutions – Foreign Currency, Non-Fiat Currency (for example: Cryptocurrency), Money Orders (Not Money Transfer), Account Funding (not Stored Value Load), Travelers Cheques, and Debt Repayment</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 13.739%;">
<p><span style="font-size: 11pt;">6540</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 86.0101%;">
<p><span style="font-size: 11pt;">Non-Financial Institutions – Stored Value Card Purchase/Load</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 13.739%;">
<p><span style="font-size: 11pt;">7801</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 86.0101%;">
<p><span style="font-size: 11pt;">Government Licensed On-Line Casinos (On-Line Gambling) (US Region only)</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 13.739%;">
<p><span style="font-size: 11pt;">7802</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 86.0101%;">
<p><span style="font-size: 11pt;">Government-Licensed Horse/Dog Racing (US Region only)</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 13.739%;">
<p><span style="font-size: 11pt;">7995</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 86.0101%;">
<p><span style="font-size: 11pt;">Betting, including Lottery Tickets, Casino Gaming Chips, Off-Track Betting, Wagers at Race Tracks and games of chance to win prizes of monetary value</span></p>
</td>
</tr>
</tbody>
</table>
</div><br />
<h4>Ineligible transactions</h4>
<p>For your Google Pay transactions to qualify for liability shift, make sure to include the parameters <span style="font-family: courier;">totalPrice</span> and <span style="font-family: courier;">totalPriceStatus</span> described above. Transactions with <span style="font-family: courier;">totalPrice=0</span> or a hard-coded <span style="font-family: courier;">totalPrice</span> (always the same amount, while users are charged a different amount) will not qualify for liability shift.</p>
<h4>Processing transactions</h4>
<p>Google Pay API transactions with Visa device tokens qualify for liability shift at facilitation time if all the conditions are met, but a transaction that qualified for liability shift can be downgraded by the network during authorization processing. </p><br />
<h3>Getting started with Google Pay</h3>
<p>Not yet using Google Pay? Refer to the <a href="http://g.co/pay/api" target="_blank">documentation</a> to start integrating Google Pay today. Learn more about the integration by taking a look at our <a href="https://github.com/google-pay/android-quickstart" target="_blank">sample application</a> for Android on GitHub or use one of our <a href="https://github.com/google-pay/google-pay-button" target="_blank">button components</a> for your web integration. When you are ready, head over to the <a href="http://g.co/pay/console" target="_blank">Google Pay & Wallet console</a> and submit your integration for production access. </p>
<p>Follow <a href="https://twitter.com/GooglePayDevs?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor" target="_blank">@GooglePayDevs</a> on X (formerly Twitter) for future updates. If you have questions, tag @GooglePayDevs and include #AskGooglePayDevs in your tweets.</p><br />
<p><small><sup>[1]</sup> For merchants and PSPs using <a href="https://developers.google.com/pay/api/web/guides/tutorial#price-updates" target="_blank">dynamic price updates</a> or other <a href="https://developers.google.com/pay/api/web/reference/request-objects#PaymentDataCallbacks" target="_blank">callback mechanisms</a> the Visa device token liability shift changes will be rolled out later this year.</small></p>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-47603371828695943022024-02-14T09:00:00.000-08:002024-02-14T09:03:15.290-08:00#WeArePlay | How two sea turtle enthusiasts are revolutionizing marine conservation<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu218W2RyLDMAvOEcTDjLxA2tjNEFk6822pJCEX1uklhlS4xpUn4A5pQbmDLsnUgOC_Vfzy1stegKYeBgy0jdWOCimNm_I9eTkUg1l5lJRk0ASIaaZxda23M1MU6HvxT5yLhxzdzNMCRU3j457Tn6g7RLBLWs7sAvwCa3u0ytAAt_WUpmpGRM-HCmzGjA/s1600/%23WeArePlay%20Social%20.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu218W2RyLDMAvOEcTDjLxA2tjNEFk6822pJCEX1uklhlS4xpUn4A5pQbmDLsnUgOC_Vfzy1stegKYeBgy0jdWOCimNm_I9eTkUg1l5lJRk0ASIaaZxda23M1MU6HvxT5yLhxzdzNMCRU3j457Tn6g7RLBLWs7sAvwCa3u0ytAAt_WUpmpGRM-HCmzGjA/s1600/%23WeArePlay%20Social%20.png" style="display: none;" />
<em>Posted by Leticia Lago – Developer Marketing</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfkoyiV0Ohxbe89HXqSsLkj4WD8ADd5AmwhHRv4MyMdbrm11cE1wn-SnjSYjsmWupfwZDt6FhaV_-kWQJ1AdObLCTglDqkO1V_Uo6z7S14NpYbQyWzxaDNOMK3fEa2_k25zNk-kQIkDI21X8YjGyoKu-Nnuptx1YYyd6nXmIxC4bPJQl0iHBj2dMD2nI4/s1600/%23WeArePlay%20Banner.png"><img border="0" data-original-height="800" data-original-width="100%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfkoyiV0Ohxbe89HXqSsLkj4WD8ADd5AmwhHRv4MyMdbrm11cE1wn-SnjSYjsmWupfwZDt6FhaV_-kWQJ1AdObLCTglDqkO1V_Uo6z7S14NpYbQyWzxaDNOMK3fEa2_k25zNk-kQIkDI21X8YjGyoKu-Nnuptx1YYyd6nXmIxC4bPJQl0iHBj2dMD2nI4/s1600/%23WeArePlay%20Banner.png" /></a>
<a name='more'></a><p></p>
<iframe class="BLOG_video_class" allowfullscreen="" youtube-src-id="CfzhLOiczDQ" width="100%" height="400" src="https://www.youtube.com/embed/CfzhLOiczDQ"></iframe>
<p>When environmental science student Caitlin returned home from a trip monitoring sea turtles in Western Australia, she was inspired to create a conservation tool that could improve tracking of the species. She connected with Nicolas, a French developer and fellow marine life enthusiast, to design their app <a href="https://play.google.com/store/apps/details?id=fr.altplusun.we_spot_turtles&hl=en&gl=US" target="_blank">We Spot Turtles!</a>, which allows anyone to support tracking efforts by uploading pictures of turtles they spot in the wild.</p>
<p>Caitlin and Nicolas shared their journey in our latest film for <a href="https://play.google.com/console/about/weareplay/" target="_blank">#WeArePlay</a>, which showcases the amazing stories behind apps and games on Google Play. We caught up with the pair to find out more about their passion and how they are making strides towards advancing sea turtle conservation. </p>
<h4><span style="font-size: large;">Tell us about how you both got interested in sea turtle conservation?</span></h4>
<p><b>Caitlin:</b> A few years ago, I did a sea turtle monitoring program for the Department of Biodiversity, Conservation and Attractions in Western Australia. It was probably one of the most magical experiences of my life. After that, I decided I only really wanted to work with sea turtles.</p>
<p><b>Nicolas:</b> In 2010, in French Polynesia, I volunteered with a sea turtle protection project. I was moved by the experience, and when I came back to France, I knew I wanted to use my tech background to create something inspired by the trip.</p>
<h4><span style="font-size: large;">How did these experiences lead you to create We Spot Turtles!?</span></h4>
<p><b>Caitlin:</b> There are seven species of sea turtle, and all are critically endangered. Or rather, there’s not enough data on them to inform an accurate endangerment status. This means the needs of the species are going unmet and sea turtles are silently going extinct. Our inspiration is essentially to better track sea turtles so that conservation can be improved. </p>
<p><b>Nicolas:</b> When I returned to France after monitoring sea turtles, I knew I wanted to make an app inspired by my experience. However, I had put the project on hold for a while. Then, when a friend sent me Caitlin’s social media post looking for a developer for a sea turtle conservation app, it re-ignited my inspiration, and we teamed up to make it together.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="close up image of a turtle resting in a reef underwater" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3dDXPA6FttMfzWiVw9cIXMWzS-Nzd_NWfDLUnjnShQV9rSVRnp1sJIaboQNG7z_nWrNfWjFrfu3KotnVb549sKwmH7m3adi873ADr11snns0Hg8Ew72jwTpLPuG2WNbgEMH0l62-sBXK0soMmUgtrak5EEyDRwCKrI79rSqHCvHVCqb5RXGog0aoi2XY/s1600/image3.png" style= "margins: auto; width: auto;" /></td></tr></tbody></table></image><br/>
<h4><span style="font-size: large;">What does We Spot Turtles! do?</span></h4>
<p><b>Caitlin:</b> Essentially, members of the public upload images of sea turtles they spot – and even get to name them. Then, the app automatically geolocates, giving us a date and timestamp of when and where the sea turtle was located. This allows us to track turtles and improve our conservation efforts.</p>
<h4><span style="font-size: large;">How do you use artificial intelligence in the app?</span></h4>
<p><b>Caitlin:</b> The advancements in AI in recent years have given us the opportunity to make a bigger impact than we would have been able to otherwise. The machine learning model that Nicolas created uses the facial scale and pigmentations of the turtles to not only identify its species, but also to give that sea turtle a unique code for tracking purposes. Then, if it is photographed by someone else in the future, we can see on the app where it's been spotted before.</p>
<h4><span style="font-size: large;">How has Google Play supported your journey?</span></h4>
<p><b>Caitlin:</b> Launching our app on Google Play has allowed us to reach a global audience. We now have communities in Exmouth in Western Australia and Manly Beach in Sydney, and our app is already being used in six countries. Without Google Play, we wouldn't be able to connect on such a global scale.</p>
<p><b>Nicolas:</b> I’m a mobile application developer and I use Google’s Flutter framework. I knew Google Play was a good place to release our title as it easily allows us to work on the platform. As a result, we’ve been able to make the app great.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Photo of Caitlin and Nicolas on the beach in Australia at sunset. Both are kneeling in the sand. Caitlin is using her phone to identify something in the distance, and gesturing to Nicolas who is looking in the same direction" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4oAr8typ6jmDgb9oFV6f0AwCnp6vItKolavM-MGc-8nZCgaajFILzRDzMmN1OZNrTXUTxzwiGwKI8padd_Zl-HcGq-BaYm6EH8br4zSvH9hufBq0eHcOzSid_F6Qv3WiMTGZwzbx_pqdJCotyFGCWUmS0f4MJ4pg4iAxgODF3_DkVfakEprPwSnsTHqw/s1600/unnamed%20%2812%29.jpg" style= "margins: auto; width: auto;" /></td></tr></tbody></table></image><br/>
<h4><span style="font-size: large;">What do you hope to achieve with We Spot Turtles!?</span></h4>
<p><b>Caitlin:</b> We Spot Turtles! puts data collection in the hands of the people. It’s giving everyone the opportunity to make an impact in sea turtle conservation. Because of this, we believe that we can massively alter and redefine conservation efforts and enhance people’s engagement with the natural world.</p>
<h4><span style="font-size: large;">What are your plans for the future?</span></h4>
<p><b>Caitlin:</b> Nicolas and I have some big plans. We want to branch out into other species. We'd love to do whale sharks, birds, and red pandas. Ultimately, we want to achieve our goal of improving the conservation of various species and animals around the world. </p><br/>
<p>Discover other inspiring app and game founders featured in <a href="https://play.google.com/console/about/weareplay/" target="_blank">#WeArePlay</a>.</p>
<br /><br />
<p></p><center>
How useful did you find this blog post?
</center><p></p>
<p></p><center>
<a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=1%E2%98%85+%E2%80%93+Not+at+all&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a> <a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=2%E2%98%85+%E2%80%93+Not+very&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a> <a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=3%E2%98%85+%E2%80%93+Somewhat&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a> <a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=4%E2%98%85+%E2%80%93+Very&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a> <a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=5%E2%98%85+%E2%80%93+Extremely&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a>
</center><p></p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxnvf5XzyebGljqgi8J7by9XsvaL5WWLNamSyTg4w_c0kPHjY1B1no1FQsdSfUHUwNw0UuYfXuo6i7K6VpdyV2-K8ms4VFmZA438ujm6qwtZMt5rJfMGymhQ_iCX2Z8h0MuJo7gXUQ4Hb-hjKYK1isibMZZxCyX_vo3kGoP5tieK2PI2ZhYdGh4gsqhwA/s260/lockup_ic_Google_Play_H_260x53px_clr.png" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="53" data-original-width="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxnvf5XzyebGljqgi8J7by9XsvaL5WWLNamSyTg4w_c0kPHjY1B1no1FQsdSfUHUwNw0UuYfXuo6i7K6VpdyV2-K8ms4VFmZA438ujm6qwtZMt5rJfMGymhQ_iCX2Z8h0MuJo7gXUQ4Hb-hjKYK1isibMZZxCyX_vo3kGoP5tieK2PI2ZhYdGh4gsqhwA/s200/lockup_ic_Google_Play_H_260x53px_clr.png" width="200" /></a></div>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-78342882448375916792024-02-13T14:00:00.000-08:002024-02-13T14:03:28.220-08:00Calling all students: Learn how to become a Google Developer Student Club Lead<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpLD87J6O09GNskbskODahxOlJa8Z9r22iHeKzy2IPcvDIuvdIL-lDUmytQp4t5jugBdSSGR54uDshVkBxRQRfzS1QVQLWA3RG9OuRuDap4aHWL_eOBUvTzUFoWEAdXE63tT5ewm87Md7HmV2AqwhVTdcD5-6NP6uvLoSZ6rnISjOuuf-YrO1Y35N77kk/s1600/2.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpLD87J6O09GNskbskODahxOlJa8Z9r22iHeKzy2IPcvDIuvdIL-lDUmytQp4t5jugBdSSGR54uDshVkBxRQRfzS1QVQLWA3RG9OuRuDap4aHWL_eOBUvTzUFoWEAdXE63tT5ewm87Md7HmV2AqwhVTdcD5-6NP6uvLoSZ6rnISjOuuf-YrO1Y35N77kk/s1600/2.png" style="display: none;" />
<em>Posted by Rachel Francois, Global Program Manager, Google Developer Student Clubs</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_bPVGDSt6Xe2pdD0hJEuOii_RWPX6wLcGMDCWlLAtehvmvnYUdoSVyKLkoWYrrxy1_By-3z1WW2BYDphsEhS0G8RxPQKo5BtTgIk8Wz4cjNixh18z2WmK9h4-W39BMZrJLFonnwIsACbXtg4XGjaGMEILHw8cBzVSf9nMFUSm6R1mqAW7LZIP1wAdrdk/s1600/1.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_bPVGDSt6Xe2pdD0hJEuOii_RWPX6wLcGMDCWlLAtehvmvnYUdoSVyKLkoWYrrxy1_By-3z1WW2BYDphsEhS0G8RxPQKo5BtTgIk8Wz4cjNixh18z2WmK9h4-W39BMZrJLFonnwIsACbXtg4XGjaGMEILHw8cBzVSf9nMFUSm6R1mqAW7LZIP1wAdrdk/s1600/1.png" /></a>
<a name='more'></a><p></p>
<p>Does the idea of leading a student community at your university appeal to you? Are you enthusiastic about Google technologies or interested in learning more about them? Do you love planning tech-related events and new ways for your campus community to build skills? If so, consider leading a Google Developer Student Club!</p>
<h3>What are Google Developer Student Clubs?</h3>
<p><a href="https://bit.ly/48nkaKI" target="_blank">Google Developer Student Clubs</a> (GDSC) are community groups for university students interested in learning and building with Google technologies. There are over 2,000 GDSC chapters in over 100 countries around the world, where undergraduate and graduate students explore Artificial Intelligence, Machine Learning, Google Cloud, Android development, Flutter, and other innovative technologies together. GDSC chapters host in-person, project-based events, such as hackathons and the <a href="https://goo.gle/SolutionChallenge" target="_blank">Solution Challenge</a>, with guest speakers and technical experts provided by Google.</p>
<h3>Apply to Lead a Google Developer Student Club</h3>
<p>You can learn more about the 2024-2025 GDSC Lead application process <a href="https://bit.ly/48nkaKI" target="_blank">here</a>.</p>
<p>Leading a GDSC is a great opportunity to learn new programming skills, dive deep into Google technologies and create local impact, while also building your network.</p>
<p>Google Developer Student Club Leads hone their technical and leadership skills as they manage a campus-based community for peers. GDSC Leads:</p>
<ul><blockquote>
<li>Receive mentorship from Google</li>
<li>Join a global community of leaders</li>
<li>Train peers to use Google technologies in their developer journey</li>
<li>Use technology to find solutions for real-world challenges</li>
</blockquote></ul>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Drashtant Chudasama, Lakehead University Google Developer Student Club lead" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIA6dY7RgIR7q8sZY785fcow2cppJ2hwtVnO-qeD9AiF_fS4fHvYMMo2x6525iUwCJgWVIzGFfcH_9kMYQwdEaNvY2WYZKXNMiVvrnv0UnhOIcg2HQA3b7BPhm_sIAI3-a9Nkw7AmzCoGcdJV7vqSD7rrZmRiAnxHUeswbzaynpvp7q8kRNPHysFP4HN4/s1600/image1.jpg" style= "margins: auto; width: auto;" /></td></tr></tbody></table></image>
<p>Meet Drashtant Chudasama, Lakehead University Google Developer Student Club lead. Drashtant hosted a 2-day DevFest On Campus event in Canada to help foster technology in his local area. The city's first DevFest included a handful of guest speakers and a hackathon. These are the types of things you will have the opportunity to do as a GDSC Lead.</p>
<p>If this sounds like your skill set or you’d like to explore a new leadership opportunity in technology, we encourage you to apply to become a GDSC Lead. You can check for application deadlines in your region <a href="https://gdsc.advocu.com/home/regions" target="_blank">here</a>.</p><br/>
<h3>Google Developer Student Clubs Around the World</h3>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="GDSC HITS lead, Amitasha Verma and her team" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZbsJiuAy9jCTcOfMmdofJVb2-i9AqwxBd0ijLRNxNlXBhOBdo7rpzwWUq-EkAsQAJkcWhs__Alc07GWUNMeyOCRiHwTZE_ZfAwgK8QkCb8UbBCIxaNyIr8ZHM4tBxglk-V-OHnp5PuKQPl0fAJyWepOYj9t4XoJntsoSxAsatX-vPN1SPHFlxr6FXTYo/s1600/image3.jpg" style= "margins: auto; width: auto;" /></td></tr></tbody></table></image>
<p>After a year’s hiatus, GDSC HITS lead Amitasha Verma and her team defied the odds to bring an interactive event to life. More than 80 students came together for a 3-hour "Unlocking the Power of Blockchain" event in India. This event demonstrated the unwavering spirit of students eager to explore the world of blockchain.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="GDSC Fast National University in Islamabad" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmHYfZ82xDEthiwY4m_DFmFZtdZJL5SMeZVBVVmduLRoOoSCs_1EXpspJLX_y-SdJMkG1c-hcukrHpE7eoIACVYTyFQ6K03Hc-DwS0ZRY92VdJC5gODIQbOHVdIUYofaiaFVMtQuAPr5rDEa1ZBNZkSx7u1yTq4qVera6oeLnej2FFqppkgiggrmZHjJ0/s1600/image6.jpg" style= "margins: auto; width: auto;" /></td></tr></tbody></table></image>
<p>GDSC Fast National University in Islamabad collaborated with 15 other GDSC chapters to host the exciting "Techbuzz" competition, bringing together a diverse group of tech enthusiasts to showcase their skills through a variety of engaging activities. The event featured intense rapid-fire tech sessions that tested participants' knowledge and quick thinking, while a game-based learning platform added an element of fun and excitement.</p><br/>
<h3>How to become a GDSC Lead</h3>
<p>Learn more about the GDSC Lead role and criteria <a href="https://bit.ly/48nkaKI" target="_blank">here</a>. To get started, <a href="https://bit.ly/48nkaKI" target="_blank">click here</a>.</p><br/>
<p><em>Note: Google Developer Student Clubs are student-led independent organizations, and their presence does not indicate a relationship between Google and the students' universities.</em></p>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-55054232840957840692024-02-13T10:00:00.000-08:002024-02-21T14:40:19.826-08:00Federated Credential Management (FedCM) Migration for Google Identity Services<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgLlYH9a4tQKya0o-XPIj4oxBEpCxdsWThMsOkZ_p4u_TgbYabSfxAocHMBdm5YTlnnhZEeEkEgG5Gj8Zq__Z_qprH3ECOBS07ivH3dZhMb3q5Cekrw5_v0-AWGJtJAP-w9JHlh3D2DkVDsOQ7f3mNQyRhM_TCGkCwXRxcsb7jIbmOG4GXAuZak68TKJE/s1600/FedCM-SOCIAL.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgLlYH9a4tQKya0o-XPIj4oxBEpCxdsWThMsOkZ_p4u_TgbYabSfxAocHMBdm5YTlnnhZEeEkEgG5Gj8Zq__Z_qprH3ECOBS07ivH3dZhMb3q5Cekrw5_v0-AWGJtJAP-w9JHlh3D2DkVDsOQ7f3mNQyRhM_TCGkCwXRxcsb7jIbmOG4GXAuZak68TKJE/s1600/FedCM-SOCIAL.png" style="display: none;" />
<em>Posted by Gina Biernacki, Product Manager</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKXfD_Sk3r5-UgCJNwP0WMsVYs0ByDM8ItSUZarS37qSVc1i5Ot2N1dC3V9Lj4uaHCaihTMVEFT24zO5B0pkMuLQNsDezV29nk2_BKM99HuZ-7cVH0ZO4sRk8grhE8xsFIC7RBiJ3rvQygSM9jk4TEP3aq2VVzjIcUdHN9XtLvFgCqGve1AQqp57xjgM8/s1600/header-FedCM-Migration-Blog-Post%20%281%29.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKXfD_Sk3r5-UgCJNwP0WMsVYs0ByDM8ItSUZarS37qSVc1i5Ot2N1dC3V9Lj4uaHCaihTMVEFT24zO5B0pkMuLQNsDezV29nk2_BKM99HuZ-7cVH0ZO4sRk8grhE8xsFIC7RBiJ3rvQygSM9jk4TEP3aq2VVzjIcUdHN9XtLvFgCqGve1AQqp57xjgM8/s1600/header-FedCM-Migration-Blog-Post%20%281%29.png" /></a>
<a name='more'></a><p></p>
<p>Chrome is phasing out support for third-party cookies this year, subject to addressing any remaining concerns of the <a href="https://www.gov.uk/cma-cases/investigation-into-googles-privacy-sandbox-browser-changes" target="_blank">CMA</a>. A relatively new web API, Federated Credential Management (FedCM), will enable sign-in for the Google Identity Services (GIS) library after the phase-out of third-party cookies. Starting in April, GIS developers will be automatically migrated to the FedCM API. For most developers, this migration will occur seamlessly through backwards-compatible updates to the GIS library. However, some websites with custom integrations may require minor changes. We encourage all developers to experiment with FedCM, as previously announced through the <a href="https://developers.googleblog.com/2023/08/announcing-federated-credential-management-beta-for-gis.html" target="_blank">beta program</a>, to ensure flows will not be interrupted. Developers have the ability to <a href="https://developers.google.com/identity/gsi/web/guides/fedcm-migration#fedcm_flag" target="_blank">temporarily exempt</a> traffic from using FedCM until Chrome enforces the restriction of third-party cookies.</p><br/>
<h3>Audience</h3>
<p>This update is for all GIS web developers who rely on the Chrome browser and use:</p>
<ul><blockquote>
<li>One Tap, or</li>
<li>Automatic Sign-In</li>
</blockquote></ul><br/>
<h3>Context </h3>
<p>As part of the <a href="https://privacysandbox.com/intl/en_us/" target="_blank">Privacy Sandbox</a> initiative to keep people’s activity private and support free experiences for everyone, Chrome is <a href="https://privacysandbox.com/intl/en_us/open-web/#the-privacy-sandbox-timeline" target="_blank">phasing out support for third-party cookies</a>, subject to addressing any remaining concerns of the <a href="https://www.gov.uk/cma-cases/investigation-into-googles-privacy-sandbox-browser-changes" target="_blank">CMA</a>. Scaled testing began at 1% in January and will continue throughout the year.</p>
<p>GIS currently uses third-party cookies to allow users to sign up and sign in to websites easily and securely by reducing reliance on passwords. The <a href="https://fedidcg.github.io/FedCM/" target="_blank">FedCM API</a> is a new privacy-preserving alternative to third-party cookies for federated identity providers. It allows Google to continue providing a secure, streamlined experience for signing up and signing in to websites. Last August, the Google Identity team announced a <a href="https://developers.googleblog.com/2023/08/announcing-federated-credential-management-beta-for-gis.html" target="_blank">beta program</a> for developers to test the Chrome browser’s new FedCM API supporting GIS.</p><br/>
<h3>What to Expect in the Migration</h3>
<p>Partners who offer GIS’s One Tap and Automatic Sign-In features will automatically be migrated to FedCM in April. For most developers, this migration will occur seamlessly through backwards-compatible updates to the GIS JavaScript library; the GIS library will call the FedCM APIs behind the scenes, without requiring any developer changes. The new FedCM APIs have <a href="https://developers.google.com/identity/gsi/web/guides/automatic-sign-in-sign-out" target="_blank">minimal</a> impact to existing user flows.</p><br/>
<h3>Some Developers May Be Required to Make Changes</h3>
<p>Some websites with custom integrations may require minor changes, such as updates to custom layouts or positioning of sign-in prompts. Websites using embedded iframes for sign-in or a non-compliant Content Security Policy may need to be updated. To learn if your website will require changes, please review the <a href="https://developers.google.com/identity/gsi/web/guides/fedcm-migration" target="_blank">migration guide</a>. We encourage you to enable and experiment with FedCM, as previously announced through the <a href="https://developers.googleblog.com/2023/08/announcing-federated-credential-management-beta-for-gis.html" target="_blank">beta program</a>, to ensure flows will not be interrupted.</p><br/>
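<p>If you maintain a custom One Tap integration, you can opt in to FedCM explicitly while testing. The sketch below builds the configuration object passed to <code>google.accounts.id.initialize()</code>; the <code>use_fedcm_for_prompt</code> flag is the one described in the migration guide linked above, while the helper and callback names are illustrative:</p>

```javascript
// Sketch: opting One Tap in to (or temporarily out of) FedCM.
// use_fedcm_for_prompt comes from the GIS migration guide;
// buildOneTapConfig and onCredential are illustrative names.
function buildOneTapConfig(clientId, onCredential, useFedcm = true) {
  return {
    client_id: clientId,            // your OAuth 2.0 web client ID
    use_fedcm_for_prompt: useFedcm, // false = temporary exemption until Chrome enforces restrictions
    callback: onCredential,         // receives the ID token credential
  };
}

// In the browser you would then call:
//   google.accounts.id.initialize(buildOneTapConfig(CLIENT_ID, handleCredential));
//   google.accounts.id.prompt();
```

<p>Setting the flag to <code>false</code> is the temporary exemption described above; once Chrome restricts third-party cookies for all clients, FedCM is required regardless of the flag.</p>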
<h3>Migration Timeline</h3>
<p>If you are using GIS <a href="https://developers.google.com/identity/gsi/web/guides/features" target="_blank">One Tap</a> or <a href="https://developers.google.com/identity/gsi/web/guides/automatic-sign-in-sign-out" target="_blank">Automatic Sign-in</a> on your website, please be aware of the following timelines:</p>
<ul><blockquote>
<li>January 2024: Chrome began <a href="https://blog.google/products/chrome/privacy-sandbox-tracking-protection/" target="_blank">scaled testing</a> of third-party cookie restrictions at 1%.</li>
<li>April 2024: GIS begins a migration of all websites to FedCM on the Chrome browser.</li>
<li>Q3 2024: Chrome begins ramp-up of third-party cookie restrictions, reaching 100% of Chrome clients by the end of Q4, subject to addressing any remaining concerns of the <a href="https://www.gov.uk/cma-cases/investigation-into-googles-privacy-sandbox-browser-changes" target="_blank">CMA</a>.</li>
</blockquote></ul>
<p>Once the Chrome browser restricts third-party cookies by default for all Chrome clients, the use of FedCM will be required for partners who use GIS One Tap and Automatic Sign-In features.</p><br/>
<h3>Checklist for Developers to Prepare</h3>
<blockquote>
<p>✅ Be aware of migration plans and timelines that will affect your traffic. Determine your migration approach. Developers will be migrated by default starting in April. </p>
<p>✅ All developers should <a href="https://developers.google.com/privacy-sandbox/blog/psat-announcement" target="_blank">verify</a> that their website will be unaffected by the migration. <a href="https://developers.google.com/identity/gsi/web/guides/fedcm-migration#fedcm_flag" target="_blank">Opt in</a> to FedCM to test and make any necessary changes to ensure a smooth transition. For implementations that require changes, complete them ahead of the migration deadline.</p>
<p>✅ For developers that use Automatic Sign-In, review the <a href="https://developers.google.com/identity/gsi/web/guides/fedcm-migration#auto_sign_in" target="_blank">FedCM changes to the user gesture requirement</a>. We recommend all automatic sign-in developers migrate to FedCM as soon as possible, to reduce disruption to automatic sign-in conversion rates.</p>
<p>✅ If you need more time to verify FedCM functionality on your site and make changes to your code, you can <a href="https://developers.google.com/identity/gsi/web/guides/fedcm-migration#fedcm_flag" target="_blank">temporarily exempt</a> your traffic from using FedCM until the enforcement of third-party cookie restrictions by Chrome. </p>
</blockquote><br/>
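<p>Independently of the GIS library, you can check whether a given browser exposes the FedCM API at all, which is useful when verifying your site ahead of the migration. The <code>IdentityCredential</code> global is the interface defined by the FedCM specification; the helper name below is ours:</p>

```javascript
// Sketch: feature-detecting FedCM support.
// "IdentityCredential" in window is the standard check for the FedCM API;
// supportsFedcm is an illustrative helper name.
function supportsFedcm(win) {
  return typeof win === "object" && win !== null && "IdentityCredential" in win;
}

// In the browser: if (supportsFedcm(window)) { /* FedCM flows are available */ }
```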
<p>To get started and learn more about FedCM, visit our <a href="https://developers.google.com/identity/gsi/web/guides/fedcm-migration" target="_blank">developer site</a> and check out the <a href="https://stackoverflow.com/questions/tagged/google-signin" target="_blank">google-signin</a> tag on Stack Overflow for technical assistance. We invite developers to share their feedback with us at <a href= "mailto:gis-fedcm-feedback@google.com" target="_blank">gis-fedcm-feedback@google.com</a>.</p>
<h3>How We Made the CES 2024 AR Experience: Android Virtual Guide, powered by Geospatial Creator</h3>
<p><em>February 8, 2024</em></p>
<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIZHWoTZ5HDvUb0i4qeynfn0hY1k62QInIsg_ZDi5ktvH52nW-XXRp73H5eeYBMY8dUmsS2_ihXGqyQPd6_IJtO3J8wT4dGLKDrzR7DfT6-x1v1cDHvk5Ai-MuKh_HCQOyrJ6dQa4GwQSHtUSKqS8c4kutR-9AfCqQLqCNmBbu7l603owtpQsRdKJrGxQ/s1600/GCES_SOCIAL_V1.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIZHWoTZ5HDvUb0i4qeynfn0hY1k62QInIsg_ZDi5ktvH52nW-XXRp73H5eeYBMY8dUmsS2_ihXGqyQPd6_IJtO3J8wT4dGLKDrzR7DfT6-x1v1cDHvk5Ai-MuKh_HCQOyrJ6dQa4GwQSHtUSKqS8c4kutR-9AfCqQLqCNmBbu7l603owtpQsRdKJrGxQ/s1600/GCES_SOCIAL_V1.png" style="display: none;" />
<em>Posted by Kira Rich – Senior Product Marketing Manager, AR and Bradford Lee – Product Marketing Manager, AR</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5uMpE-Guc9HTA6kza11zkM3Zr5qecwpyqYihuKFwRU3zo1ixQ8im6VYmQdRlKtWn5BX0TnpRhm6kal-4eYCHrzhcgtuxTziolODBGkTWuAq-oZBkvug_H-4RIB1T1LUX6zs72xmoKmE7Xnn8h2JJeFeMHnnEnQ0bDKVng2r6dtRvZCJbfwvkiR6FwWBU/s1600/GCES_BLOG_HEADER_V1.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5uMpE-Guc9HTA6kza11zkM3Zr5qecwpyqYihuKFwRU3zo1ixQ8im6VYmQdRlKtWn5BX0TnpRhm6kal-4eYCHrzhcgtuxTziolODBGkTWuAq-oZBkvug_H-4RIB1T1LUX6zs72xmoKmE7Xnn8h2JJeFeMHnnEnQ0bDKVng2r6dtRvZCJbfwvkiR6FwWBU/s1600/GCES_BLOG_HEADER_V1.png" /></a>
<link crossorigin="anonymous" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" rel="stylesheet"></link>
<a name='more'></a><p></p>
<p>Navigating a large-scale convention like CES can be overwhelming. To enhance the attendee experience, we created a 360°, event-scale augmented reality (AR) experience in our Google booth. Our friendly Android Bot served as a digital guide, providing:</p>
<ul><blockquote>
<li>Seamless wayfinding within our booth, letting you know about the <a href="https://blog.google/products/android/google-ces-2024-android-experience/" target="_blank">must-try demos</a></li>
<li>Delightful content, only possible with AR, like replacing the Las Vegas Convention Center facade with our <a href="https://blog.google/products/android/new-android-features-generative-ai/" target="_blank">Generative AI Wallpapers</a> or designing an interactive version of <a href="https://www.youtube.com/watch?v=Y5dcpK-zAYU" target="_blank">Android on Sphere</a> for those who missed it in real life </li>
<li>Helpful navigation tips and quick directions to transportation hubs (Monorail, shuttle buses)</li>
</blockquote></ul>
<p>In partnership with <a href="https://www.leftfieldlabs.com/" target="_blank">Left Field Labs</a> and <a href="https://blog.adobe.com/en/publish/2023/05/10/adobe-aeros-geospatial-pre-release-elevates-your-world-with-locatable-3d-experiences-for-retail-marketing" target="_blank">Adobe</a>, we used Google’s latest AR technologies to inspire developers, creators, and brands on how to elevate the conference experience for attendees. Here’s a behind-the-scenes look at how we used <a href="https://goo.gle/geospatialcreator" target="_blank">Geospatial Creator</a>, powered by <a href="http://g.co/ARCore" target="_blank">ARCore</a> and <a href="https://mapsplatform.google.com/maps-products/map-tiles/" target="_blank">Photorealistic 3D Tiles from Google Maps Platform</a>, to promote the power and usefulness of Google on Android.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Moving image showing end-to-end experience" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguDowEdk4fQFKJW9FgHQZqjR_vdUFb74oEkW_w7llsf2hH4orQ8Atw1TqA6etajDspFqtjTphOMhoRE94a_5NaEHssoKB0Gt8R6UTw85jwkQisi74q5gsO2tdOLxjuOHlaZNDZ5Kma_oAelHbQkwWhfm-kGTqH_cOu1-sFiqsVL1UNkvgvUmSSqvV6M4A/s1600/AR-VR-end-to-end-experience%20%281%29.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Using Google’s Geospatial Creator, we helped attendees navigate CES with the Android Bot as a virtual guide, providing helpful and delightful immersive tips on what to experience in our Google Booth.</em></td></tr></tbody></table></image><br />
<div style="text-align: left;"><br /></div>
<h3>Tools We Used</h3>
<div style="text-align: left;"><br /></div>
<p><a href="https://forms.office.com/pages/responsepage.aspx?id=Wht7-jR7h0OUrtLBeN7O4cBei4bwCXlKnlkBhxWTf-JUNE9RRU1RQlEyVkVETzUzWVNRS0hPSjFPVS4u" target="_blank"><b><span>Geospatial Creator in Adobe Aero Pre-Release</span></b></a></p>
<p>Geospatial Creator in Adobe Aero enables creators and developers to easily visualize where in the real world they want to place their digital content, similar to how Google Earth visualizes the world. With Geospatial Creator, we were able to bring up the Las Vegas Convention Center in <a href="https://developers.google.com/maps/documentation/tile/3d-tiles" target="_blank">Photorealistic 3D Tiles</a> from Google Maps Platform and understand the surroundings of where the Google Booth would be placed. In this case, the booth did not exist in the Photorealistic 3D Tiles because it was a temporary build for the conference. However, using the 3D model of the booth and the coordinates of where it would be built, we were able to estimate and visualize the booth inside Adobe Aero and build the experience around it seamlessly, including anchoring points for the digital content and the best attendee viewing points.</p>
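<p>Outside of Adobe Aero, Photorealistic 3D Tiles can also be requested directly from the Map Tiles API, which serves the tileset root as OGC 3D Tiles JSON. A minimal sketch of building that request URL (the endpoint is the documented tileset root; the helper name and key handling are ours):</p>

```javascript
// Sketch: building the request URL for the Photorealistic 3D Tiles
// tileset root (OGC 3D Tiles JSON) from the Map Tiles API.
// tilesetRootUrl is an illustrative helper; supply your own API key.
function tilesetRootUrl(apiKey) {
  const base = "https://tile.googleapis.com/v1/3dtiles/root.json";
  return `${base}?key=${encodeURIComponent(apiKey)}`;
}

// A 3D Tiles renderer (e.g. CesiumJS, or Cesium for Unreal as used here)
// then walks the tileset JSON, fetching child tiles as the camera moves.
```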
<div style="text-align: left;"><br /></div><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><blockquote style="text-align: left;"><span style="font-size: large;"><i>
"At CES 2024, the Android AR experience, created in partnership with the Google AR, Android, and Adobe teams, brought smiles and excitement to attendees - ultimately that's what it's all about. The experience not only showcased the amazing potential of app-less AR with Geospatial Creator, but also demonstrated its practical applications in enhancing event navigation and engagement, all accessible with a simple QR scan."</i> </span></blockquote></blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><blockquote style="text-align: left;"><b>– Yann Caloghiris, Executive Creative Director at Left Field Labs</b></blockquote></blockquote><div>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Moving image of developer timelapse" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQyFP2rnMymXOON5e-ySQsgG0Xp_gePnVB-_Q7HQfV0myVOWs6cZHSb7kAWIIn2GWBhS43ISfp5XLP-X9En0odwGo-9wOWBU9QWAOvivb2BQsJvUm4g96vx5DKbZg2Z-7zUB_5SuRczj-dViyX06Bnal34krVXx6lKIZhyTx1kl3O5gsQe7U3sjCaygQ0/s1600/AR-VR-Geospatial-End-to-end.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Adobe Aero provided us with an easy way to visualize and anchor the AR experience around the 3D model of the Google Booth at the Las Vegas Convention Center.</em></td></tr></tbody></table></image><br />
<p>With Geospatial Creator, we had multiple advantages for designing the experience:</p>
<ul><blockquote>
<li>Rapid iteration with live previews of 3D assets and high-fidelity visualization of the location with Photorealistic 3D Tiles from Google Maps Platform were crucial for building a location-based AR experience without having to be there physically.</li></blockquote><blockquote><li>Easy selection of the Las Vegas Convention Center and robust previews of the environment, as you would navigate in Google Earth, helped us visualize and develop the AR experience with precision and alignment to the real-world location.</li></blockquote></ul>
<p>In addition, Google Street View imagery generated a panoramic skybox, which helped visualize the sight lines in Cinema 4D for storyboards. We also imported this and Photorealistic 3D Tiles from Google Maps Platform into Unreal Engine to visualize occlusion models at real world scale.</p>
<p>In Adobe Aero, we did the final assembly of all 3D assets and created all interactive behaviors in the experience. We also used it for animating simpler navigational elements, like the info panel assets in the booth.</p>
<script crossorigin="anonymous" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" src="https://code.jquery.com/jquery-3.5.1.slim.min.js">
</script>
<script crossorigin="anonymous" integrity="sha384-9/reFTGAW83EW2RDu2S0VKaIzap3H66lZH81PoYlFhbGU+6BZp6G7niu735Sk7lN" src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js">
</script> <script crossorigin="anonymous" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><div class="carousel slide" data-ride="carousel" id="carouselExampleCaptions">
<ol class="carousel-indicators">
<li class="active" data-slide-to="0" data-target="#carouselExampleCaptions">
</li> <li data-slide-to="1" data-target="#carouselExampleCaptions">
</li> <li data-slide-to="2" data-target="#carouselExampleCaptions">
</li> <li data-slide-to="3" data-target="#carouselExampleCaptions">
</li> <li data-slide-to="4" data-target="#carouselExampleCaptions">
</li> </ol>
<div class="carousel-inner">
<div class="carousel-item active">
<img alt="..." class="d-block w-100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPCPrdayBinU_xd_5-A4Xi3uMiHcHSjkZAtbr4ZEpxP37Bfb9KO5rj7bMTyuxi4kaQqmG7cfEcWJ0spgPIs0OICl2Ek1waaX_EBsMjZVpf0SIm2A0fQaxWissWQhJ5YQsLFLXbwVMj8kgB_sp2op3aKKo_T0JBUo_BFgfKXiiZWur0tbg17jnnig1K2KM/s1600/GCES_DEV1%20%281%29.gif" /> <div class="carousel-caption d-none d-md-block">
</div>
</div>
<div class="carousel-item">
<img alt="..." class="d-block w-100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSjmCciNiVSVOVHMSMOgDhLweqCzNql_wMPlX3SUugllvC28_9VW1Hv535XZ0Jz0p-lE0nOLQwznLXeqpkPrwAS9QVvOGSzQNYfv5DvJdkx6UqK67gZJtA5l4IUtBkBULO2uXBgEHFBZKxoRuRr7oAq0brjR6C_-z9mX4xZ958YTgpqkxqgwYhr7mqy6s/s1600/GCES_DEV2%20%281%29.gif" /> <div class="carousel-caption d-none d-md-block">
</div>
</div>
<div class="carousel-item">
<img alt="..." class="d-block w-100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJBJ9Ur-QEJMmGxSrOxglIEpIH0N8HWjaLHJFCr2_XbRcYha7VPz7TVlqgvCMxBl_6LB2EG6lgL-1zvajGCG6U1cO5fRIZ6k4inQl7Sf_Lfwd3MaSzaV4LoE3fXIliVHKp_oJI32jHnImXqf8tfhEMBHmKauGK4DdyWcwJZuwjj37rhTh_hKABbpGhLwg/s1600/GCES_DEV3.gif" /> <div class="carousel-caption d-none d-md-block">
</div>
</div>
<div class="carousel-item">
<img alt="..." class="d-block w-100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjY_Ycap9YvlNE7unTjPtOQXSuAU5ey3XP1agL3gpqpZgrvO67-RCRe3nnYKxScFotrf1mGelbZ7HyUeU2mIsRn2xIxpSV6sUJ97neCHO9dWY0ML7JGA4lXW5Ou1OKiII0xkw7KCLvwSyF8rfJn8rhW7F9sjz4ec-dgoA-Sid0Wccwk2JfxQz-JlFtnTRg/s1600/GCES_DEV4.gif" /> <div class="carousel-caption d-none d-md-block">
</div>
</div>
<div class="carousel-item">
<img alt="..." class="d-block w-100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJ9lulnaAjbYEA4y3V8HeMSxrV-FgUVd6RlXQiMmQVQLoSz9PME0jzofDWnljBRGTuRpXrFlMgrRUzshX7876GGKwLyRwGMBZ8cWFirKAA_ErSfBiVVbZljMy31Hk429GHKxc6ethHXcO54n2GDL66tujdwrxDacjMK8-qUxHExQ7w73HkBZd1kkIMPmw/s1600/GCES_DEV5.gif" /> <div class="carousel-caption d-none d-md-block">
</div>
</div>
</div> <a class="carousel-control-prev" data-slide="prev" href="#carouselExampleCaptions" role="button">
<span aria-hidden="true" class="carousel-control-prev-icon">
</span> <span class="sr-only">Previous</span>
</a> <a class="carousel-control-next" data-slide="next" href="#carouselExampleCaptions" role="button">
<span aria-hidden="true" class="carousel-control-next-icon"></span> <span class="sr-only">Next</span> </a>
</div></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>AR development was primarily done with Geospatial Creator in Adobe Aero. Supplementary tools, including Unreal Engine and Autodesk Maya, were used to bring the experience to life.</em></td></tr></tbody></table></image><div style="text-align: left;"><br /></div>
<p>Adobe Aero also supports Google Play Instant apps and App Clips<sup>1</sup>, which means attendees did not have to download an app to access the experience. They simply scanned a QR code at the booth and launched directly into the experience, which proved ideal for onboarding users and reducing friction, especially at a busy event like CES.</p>
<p><b><a href="https://cesium.com/platform/cesium-for-unreal/" target="_blank">Unreal Engine</a></b> was used to bring in the Photorealistic 3D Tiles, allowing us to build a 3D animated Android Bot that interacted closely with the surrounding environment. This approach was crucial for previews of the experience, allowing us to understand sight lines and where to best locate content for optimal viewing from the Google booth.</p>
<p><b>Autodesk Maya</b> was used to create the Android Bot character, environmental masks, and additional 3D props for the different scenes in the experience. It was also used for authoring the final materials.</p>
<p><b><a href="https://doc.babylonjs.com/features/featuresDeepDive/Exporters/Maya" target="_blank">Babylon exporter</a></b> was used for exporting from Autodesk Maya to glTF format for importing into Adobe Aero.</p>
<p><b>Figma</b> was used for designing flat user interface elements that could be easily imported into Adobe Aero.</p>
<p><b>Cinema 4D</b> was used for additional visualization and promotional shots, which helped with stakeholder alignment during the development of the experience.</p><br/>
<div style="text-align: left;"><br /></div>
<h3>Designing the experience</h3>
<p>During the design phase, we envisioned the AR experience to have multiple interactions, so attendees could experience the delight of seeing precise and robust AR elements blended into the real world around them. In addition, they could experience the helpfulness of contextual information embedded into the real objects around them, providing the right information at the right time.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1UwU96XwRYxdHccOlTzZ6qUr6VoLjVrLwmCwtN5iTsNv5prWgPFHbFeWwa9yTgtRyAy3704vu6WYLBB56ZqNWebmXwAucm6m2iJihNN8N-DUwas9oZo8NtBDF8IR-4xknewb1wUu43MBOejzyzwQoHVQmEPK0CgzdlnCZuRD20tHIPWZi5ez14BFQnw4/s1600/08_Storyboard.png" target="_blank"><img alt="Image of Creative storyboard" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1UwU96XwRYxdHccOlTzZ6qUr6VoLjVrLwmCwtN5iTsNv5prWgPFHbFeWwa9yTgtRyAy3704vu6WYLBB56ZqNWebmXwAucm6m2iJihNN8N-DUwas9oZo8NtBDF8IR-4xknewb1wUu43MBOejzyzwQoHVQmEPK0CgzdlnCZuRD20tHIPWZi5ez14BFQnw4/s1600/08_Storyboard.png" style="margins: auto; width: auto;" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>To make the AR experience more engaging for attendees, we created several possibilities for people to interact with their environment (click to enlarge).</em></td></tr></tbody></table></image><br />
<p><b><span style="font-size: large;">Creative storyboarding</span></b></p>
<p>Creating an effective storyboard for a Geospatial AR experience using Adobe Aero begins with a clear vision of how the digital overlays interact with the real-world locations.</p>
<p>Left Field Labs started by mapping out key geographical points at the Las Vegas Convention Center location where the Google booth was going to stand, integrating physical and digital elements along the way. Each scene sketched in the storyboard illustrated how virtual objects and real-world environments would interplay, ensuring that user interactions and movements felt natural and intuitive.</p>
<div style="text-align: left;"><br /></div><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><blockquote style="text-align: left;"><span style="font-size: large;"><i>
“Being able to pin content to a location that’s mapped by Google and use Photorealistic 3D Tiles in Google’s Geospatial Creator provided incredible freedom when choosing how the experience would move around the environment. It gave us the flexibility to create the best flow possible.”</i> </span></blockquote></blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><blockquote style="text-align: left;"><b>– Chris Wnuk, Technical Director at Left Field Labs</b></blockquote></blockquote><div><div style="text-align: left;"><br /></div>
<p>Early on in the storyboarding process, we decided that the virtual 3D Android Bot would act as the guide. Users could follow the Bot around the venue by turning a full 360° while staying at the same vantage point. This allowed us to design the interactive experience and each element in it for the right perspective from where the user would be standing, and give them a full look around the Google Booth and surrounding Google experiences, like the Monorail or Sphere.</p>
<p>The storyboard not only depicted the AR elements but also considered user pathways, sightlines, and environmental factors like time of day, occlusion, and overall layout of the AR content around the Booth and surrounding environment. </p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="400" src="https://www.youtube.com/embed/ePfFi3GueEY" width="100%" youtube-src-id="ePfFi3GueEY"></iframe></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>We aimed to connect the attendees with engaging, helpful, and delightful content, helping them visually navigate the Google Booth at CES.</em></td></tr></tbody></table>
<div style="text-align: left;"><br /></div>
<p><b><span style="font-size: large;">User experience and interactivity</span></b></p>
<p>When designing for AR, we have learned that user interactivity and ensuring that the experience has both helpful and delightful elements are key. Across the experience, we added multiple interactions that allowed users to explore different demo stations in the Booth, get navigation via Google Maps for the Monorail and shuttles, and interact with the Android Bot directly.</p>
<p>The Android brand team and Left Field Labs created the Android character to be both simple and expressive, showing playfulness and contextual understanding of the environment to delight users while managing the strain on users’ devices. Taking an agile approach, the team iterated on a wide range of both Android and iOS mobile devices to ensure smooth performance across different smartphones, form factors such as foldables, as well as operating system versions, making the AR experience accessible and enjoyable to the widest audience.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="testing content in Adobe Aero" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIbAEn2Q8fXQjo_XJ5vD0Nb4ndhB-W4q6LGY0ziujvOKXBDvJd859h9oHvwRhxBqUNjunt8_NfG4waN6Bt2_clTAOyHP1Z1OmSIi_D1rAW7bvInWNHG_LxzxCPOyrel2vDTrDntlHFhsfRe5fHd3jaFHa0F40iyWJq6yc6CHhKQsOi4oddLSa35FTuwD4/s1600/10_Testing.png" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>With Geospatial Creator in Adobe Aero, we were able to ensure that 3D content would be accurate to specific locations throughout the development process.</em></td></tr></tbody></table></image><br />
<p><b><span style="font-size: large;">Testing the experience</span></b></p>
<p>We consistently iterated on the interactive elements based on location testing. We performed two location tests: First, in the middle of the design phase, which helped us validate the performance of the Visual Positioning Service (VPS) at the Las Vegas Convention Center. Second, at the end of the design phase and a few days before CES, which further validated the placement of the 3D content and enabled us to refine any final adjustments once the Google booth structure was built on site.</p>
<div style="text-align: left;"><br /></div><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><blockquote style="text-align: left;"><span style="font-size: large;"><i>
“It was really nice to never worry about deploying. The tracking on physical objects and quickness of localization was some of the best I’ve seen!”</i> </span></blockquote></blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><blockquote style="text-align: left;"><b>– Devin Thompson, Associate Technical Director at Left Field Labs</b></blockquote></blockquote>
<div><div style="text-align: left;"><br /></div>
<h3>Attendee Experience </h3>
<p>When attendees came to the Google Booth, they saw a sign with the QR code to enter the AR experience. We positioned the sign at the best vantage point at the booth, ensuring that people had enough space around them to scan with their device and engage in the AR experience.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Sign with QR code to scan for entry at the Google Booth, Las Vegas Convention Center" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjd6VWEI59C_timcFO5iBBdhdKtbtLgm32ldJsIjGT-QKDNNGGKNu78WZc-ZY8OD-lG0bnONmcqBNwIOBgdaTDKDIaY93U-4_cluSi8aY0Qkl8_oYr5OVnAbVfFZtCjSdaJc34s0yzniW8TunXN_CcsQZk4ddMtgigQhaTOpULGqUTkd87AJrB9_lJhe8o/s1600/11_AR%20Entry%20Point.jpg" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>By scanning a QR code, attendees entered directly into the experience and saw the virtual Android Bot pop up behind the Las Vegas Convention Center, guiding them through the full AR experience.</em></td></tr></tbody></table></image><br />
<p>Attendees enjoyed seeing the Android Bot take over the Las Vegas Convention Center. Upon initializing the AR experience, the Bot revealed a Generative AI wallpaper scene right inside of a 3D view of the building, all while performing skateboarding tricks at the edge of the building’s facade.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Moving image of GenAI Wallpaper scene" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhtqtxC9NKY92gtb18STtGSDa_RZXYUXpZPygfMwR-x5H-CXtGiV7dyyszs8S6aSyKsXqc2sAoNY4JZ0kkMXXSNIQJlDpcVIBPmUpvxC-KmIel-0rOJsJALuwUlDc4R3PJmAs1i5pDCTKf_wJJ4wO_QShiPe7jQKGoKCsc9UVgXjyRg4-BnfbliWQxu-0/s1600/AR-VR-GenAI-wallpaper.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>With Geospatial Creator, it was possible for us to “replace” the facade of the Las Vegas Convention Center, revealing a playful scene where the Android Bot highlighted the depth and occlusion capabilities of the technology while showcasing a Generative AI Wallpaper demo.</em></td></tr></tbody></table></image><br />
<p>Many people also called out the usefulness of seeing location-based AR content with contextual information, like navigation through Google Maps, embedded into interesting locations around the Booth. Interactive panels then introduced the key physical demos located at each station around the Booth. Attendees could quickly scan the different themes and features demoed, orient themselves around the Booth, and decide which area they wanted to visit first.</p>
<div style="text-align: left;"><br /></div><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><blockquote style="text-align: left;"><span style="font-size: large;"><i>
“I loved the experience! Maps and AR make so much sense together. I found it super helpful seeing what demos are in each booth, right on top of the booth, as well as the links to navigation. I could see using this beyond CES as well!”</i> </span></blockquote></blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><blockquote style="text-align: left;"><b>– CES Attendee</b></blockquote></blockquote><div>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Moving image Booth navigation" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLk3USLgDN-rrwIyHx29lqk9itCOvAxxRWj3LJ7QHPd_bISOPcxJEFpa1v1WGdUIC9jeB04aJX-uLwBzsRW65ERXe4xR2fUQjr4HfhvZB-D-v3ecDIecAeVopALj2V4w5zGJsB5BtMfUppCArcIXoBgz4ady-1ipE44f4hfMKZvqKQbvnqINk2-T5l6EA/s1600/AR-VR-booth-navigation.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>The Android Bot helped attendees visually understand the different areas and demos at the Google Booth, helping them decide what they wanted to go see first.</em></td></tr></tbody></table></image><br />
<p>Of the attendees we spoke to, over half engaged with the full experience. They were able to skip parts of the experience that felt less relevant to them and focus only on the interactions that added value. Overall, we learned that most people liked seeing a mix of delightful and helpful content, and they felt excited to explore the Booth further with other demos.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Moving image of people navigating augmented reality at CES" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjK_jsUeeNNjDseEIy__vKxLZ-6ZLV2g_cjOVBG4BulOOMoVHo37KrqpfBQAtllN3oN_rz6qUoHj1wyDvPL82xcdJLUC91pXueh8tglcvFRyzwI-dhBxaDH_i5WMbcxw9i_r6YiKqQeMDaCzUMird8X3kyCGTSJV1uvsYQj8mxhFwGRPZ-vGLtlVTRDU54/s1600/14_People%20Demo%20AR%20at%20CES_GIF%20Asset.gif" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Many attendees engaged with the full AR experience to learn more about the Google Booth at CES.</em></td></tr></tbody></table></image><br />
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Photo of Shahram Izadi watching a demonstration of the full Geospatial AR experience at CES." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEjpCj6Pz_DS-VHX-ucFq_LhqrZ6s01VBhIlutJVTw26zd7zUju_QA04lVlNtvt-iQ_AuEGBaH6vt4U9H_xufO7Ang7GmWELD9s4XMN151SVPmdmT5DQGRO_yNYXrgM6jeakTYz1EK0GvoXMLTns20QMIB-6BERcKBxGVHJfeFlYJ1dGT87XVkLE1FEEo/s1600/15_Shahram%20&%20Kira.jpeg" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Shahram Izadi, Google’s VP and GM, AR/XR, watching a demonstration of the full Geospatial AR experience at CES.</em></td></tr></tbody></table></image><br />
<p>Location-based AR experiences can transform events for attendees who want more ways to discover and engage with exhibitors. This trend underscores a broader shift in consumer expectations toward a more immersive and interactive world, and the blurring line between online and offline experiences. At events like CES, AR content can offer a more immersive and personalized experience that not only entertains but also educates and connects attendees in meaningful ways. </p>
<p>To hear the latest updates about Google AR, Geospatial Creator, and more, follow us on LinkedIn (<a href="https://www.linkedin.com/showcase/googlearvr" target="_blank">@GoogleARVR</a>) and X (<a href="https://twitter.com/GoogleARVR" target="_blank">@GoogleARVR</a>). Plus, visit our <a href="http://developers.google.com/ar" target="_blank">ARCore</a> and <a href="http://goo.gle/geospatialcreator" target="_blank">Geospatial Creator</a> websites to learn how to get started building with Google’s AR technology. </p><br />
<p><small><em><sup>1</sup>Available on select devices and may depend on regional availability and user settings. </em></small></p>
</div></div></div></div>Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-51986550411914689902024-02-08T09:00:00.000-08:002024-02-08T09:10:52.665-08:00People of AI – Season 3<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBAa6kNgbBM4T86yJ0ftJUiLg_Z2cJbcrk_rmCJ_RQwiqwWTurWvIqbZoFgm6u_r1uLV56dTm1FljgiLYYUB33WMtJV8oWLW_i9AF0_DI-lP1CajHpnaobp-OpbrtsVbYfjEbIef0fBVtPqZjIKhuILcMEMOwXSL0w6H3SXIauKVfSTTcGoecfSMzPxOs/s1600/People-With-AI-Season-2-Social.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBAa6kNgbBM4T86yJ0ftJUiLg_Z2cJbcrk_rmCJ_RQwiqwWTurWvIqbZoFgm6u_r1uLV56dTm1FljgiLYYUB33WMtJV8oWLW_i9AF0_DI-lP1CajHpnaobp-OpbrtsVbYfjEbIef0fBVtPqZjIKhuILcMEMOwXSL0w6H3SXIauKVfSTTcGoecfSMzPxOs/s1600/People-With-AI-Season-2-Social.png" style="display: none;" />
<em>Posted by <a href="https://www.linkedin.com/in/ashleyoldacre/" target="_blank">Ashley Oldacre</a></em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiN5m9hnI-bY5xpmSLfczy02-dG2_82dr1ZR_HAF-XJUhmR1AjFPMDhhYQQPTdilLepav4hB84xTo0RPUj1pDpQKiBiVCyx_fzp1uxsgwXm6X05c99r32nzZQTDSYwIM9-32jOP3ENkII029UTbGhCRDTf3ShuXpgfh8BiiFrbQY0YhGHSmaL_hYBWBOHE/s1600/People-of-AI-Season-2-Banner.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiN5m9hnI-bY5xpmSLfczy02-dG2_82dr1ZR_HAF-XJUhmR1AjFPMDhhYQQPTdilLepav4hB84xTo0RPUj1pDpQKiBiVCyx_fzp1uxsgwXm6X05c99r32nzZQTDSYwIM9-32jOP3ENkII029UTbGhCRDTf3ShuXpgfh8BiiFrbQY0YhGHSmaL_hYBWBOHE/s1600/People-of-AI-Season-2-Banner.png" /></a>
<a name='more'></a><p></p>
<p>If you are joining us for the first time, you can <a href="https://peopleofai.libsyn.com/" target="_blank">binge listen to <b>Seasons 1 and 2</b></a> wherever you get your podcasts. </p><p>We are back for another season of People of AI with a new lineup of incredible guests! I am so excited to continue co-hosting with <a href="https://www.linkedin.com/in/gus-martins-64ab5891/" target="_blank">Luiz Gustavo Martins</a> as we meet inspiring people with interesting stories in the field of Artificial Intelligence.</p>
<p><a href="https://peopleofai.libsyn.com/" target="_blank">Last season</a> we focused on the big shift in technology spurred on by Generative AI. Fast forward 12 months, with the launch of multimodal models, we are at an interesting point in history.</p>
<p>In <b>Season 3</b>, we will continue to uncover our guests' personal and professional journeys into the field of AI, highlighting the important work and products they are focusing on. At the same time, we want to dig deeper into the societal implications of what our guests create. We will ask questions to understand how they are leveraging AI to solve problems and create new experiences, while also looking to understand what challenges they may face and what potential this technology has for both good and bad. We want to hold both truths to light through conversations with our guests. All of this serves the goal of aligning our technology with the public narrative and painting a realistic picture of how this technology is being used, the amazing things we can do with it, and the right questions to ask to make sure it is used safely and responsibly.</p>
<p>Starting today, we will release one new episode of Season 3 per week, alternating video and audio. Listen to the first episode on the <a href="https://peopleofai.libsyn.com/" target="_blank">People of AI</a> site or wherever you get your podcasts. </p>
<iframe title="Libsyn Player" style="border:none" src="//html5-player.libsyn.com/embed/episode/id/29849293/height/90/theme/custom/thumbnail/yes//direction/forward/render-playlist/no/custom-color/000000/" height="90" width="100%" scrolling="no" allowfullscreen webkitallowfullscreenmozallowfullscreen oallowfullscreen msallowfullscreen></iframe>
<ul>
<li><b><a href="https://peopleofai.libsyn.com/" target="_blank">Episode 1</a></b>: meet <b><a href="https://www.linkedin.com/in/adrit-rao-0797431ba/" target="_blank">Adrit Rao</a></b>, a 16-year-old high school student, app developer, and research intern at Stanford University. We talk about app development and how learning about TensorFlow enabled him to create life-changing apps in healthcare.</li>
<li><b><a href="https://peopleofai.libsyn.com/" target="_blank">Episode 2</a></b>: meet <b><a href="https://www.linkedin.com/in/indiranegi/" target="_blank">Indira Negi</a></b>, a product and tech executive investing in medical devices, AI, and digital health at the Bill and Melinda Gates Foundation, as we learn about the latest investments in AI and healthcare.</li>
<li><b><a href="https://peopleofai.libsyn.com/" target="_blank">Episode 3</a></b>: meet <b><a href="https://www.linkedin.com/in/triswarkentin/" target="_blank">Tris Warkentin</a></b>, Director of Product Management at Google DeepMind, as we talk about the exciting new launches of Google’s latest Large Language Models.</li>
<li><b><a href="https://peopleofai.libsyn.com/" target="_blank">Episode 4</a></b>: meet <b><a href="https://www.linkedin.com/in/kathleen-kenealy/" target="_blank">Kathleen Kenealy</a></b>, Senior Software Engineer at Google DeepMind, as we learn about the engineering genius behind Google’s latest Large Language Model launches.</li>
<li><b><a href="https://peopleofai.libsyn.com/" target="_blank">Episode 5</a></b>: meet <b><a href="https://www.linkedin.com/in/winner/" target="_blank">Jeanine Banks</a></b>, Vice President and General Manager of Google Developer X and Head of Developer Relations. Join us as we learn about Google’s latest AI innovations and how they will change the developer landscape.</li>
<li><b><a href="https://peopleofai.libsyn.com/" target="_blank">Episode 6</a></b>: meet <b><a href="https://fchollet.com/" target="_blank">François Chollet</a></b>, creator of Keras and senior software engineer and AI researcher at Google. Join us as we learn about Google’s latest AI innovations and how they will change the developer landscape.</li>
<li><b><a href="https://peopleofai.libsyn.com/" target="_blank">Episode 7</a></b>: meet <b><a href="https://www.linkedin.com/in/park-chansung-35353082/" target="_blank">Chansung Park</a></b>, Google Developer Expert and researcher, as we talk about the importance of building and planning for Large Language Model infrastructure.</li>
<li><b><a href="https://peopleofai.libsyn.com/" target="_blank">Episode 8</a></b>: meet <b><a href="https://www.linkedin.com/in/fergushurley/" target="_blank">Fergus Hurley</a></b> and <b><a href="https://www.linkedin.com/in/niacastelly/" target="_blank">Nia Castelly</a></b>, co-founders of <a href="https://checks.area120.google.com/" target="_blank">Checks</a>, a privacy platform for mobile app developers that helps create a safer digital ecosystem by simplifying the path to privacy compliance for development teams and the apps they’re building.</li>
<li><b><a href="https://peopleofai.libsyn.com/" target="_blank">Episode 9</a></b>: meet <b><a href="https://www.linkedin.com/in/samsepah/" target="_blank">Sam Sepah</a></b> and <b><a href="https://www.linkedin.com/in/thad-starner-2a71b99/" target="_blank">Thad Starner</a></b>, as they talk about leveraging the power of generative AI to unlock sign language capabilities.</li>
</ul>
<div style="text-align: left;"><br /></div><p><a href="https://peopleofai.libsyn.com/" target="_blank">Listen now</a> to the first episode of Season 3. We can’t wait to share the stories of these exceptional People of AI with you!</p><br />
<p><small><em>This podcast is sponsored by Google. Any remarks made by the speakers are their own and are not endorsed by Google.</em></small></p>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-39125454693927935362024-02-07T14:00:00.000-08:002024-02-07T14:00:09.812-08:00How recommerce startup Beni uses AI to help you shop secondhand <meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeEf03aIR7lUZzRY0yCS-t_B6PJ2MPe4pWRaGF2IJ0qhIWutcAJVekNF6pm-kDxWaELLBtJsv6Fpi_BFTNcJoPzlbMSRrnpi98Kgj8xy_OJsFXDIv4r_2udrE2FfSiL7DhLnu-FLiNfUJ_foW5eY1d-iQlCUKoXXb78aeaIfXaB-V3MHllZblI1o0o0Dw/s1600/social-How-recommerce-startup-Beni-uses-AI-to-help-you-shop-secondhand%20%281%29.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeEf03aIR7lUZzRY0yCS-t_B6PJ2MPe4pWRaGF2IJ0qhIWutcAJVekNF6pm-kDxWaELLBtJsv6Fpi_BFTNcJoPzlbMSRrnpi98Kgj8xy_OJsFXDIv4r_2udrE2FfSiL7DhLnu-FLiNfUJ_foW5eY1d-iQlCUKoXXb78aeaIfXaB-V3MHllZblI1o0o0Dw/s1600/social-How-recommerce-startup-Beni-uses-AI-to-help-you-shop-secondhand%20%281%29.png" style="display: none;" />
<em>Posted by Lillian Chen – Global Brand and Content Marketing Manager, Google Accelerator Programs</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMqA4jcMUrGLQATUIEDrgXh8GzVFEoXQSPqz5p82-1LYqSb1y4ZOrBjvddeX3cYb6mJsmVcpJxVk3BdK02BmbLaITwqaTNfIG6BZdke4c_o6pU0FN7moUnUkV8DRj1qSmKWqt6rgXaM-Gy8T90h2yKbQR1g063a6WlwuogPGtd0b__ggntWAUDqnRxTC4/s1600/header-How-recommerce-startup-Beni-uses-AI-to-help-you-shop-secondhand%20%281%29.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMqA4jcMUrGLQATUIEDrgXh8GzVFEoXQSPqz5p82-1LYqSb1y4ZOrBjvddeX3cYb6mJsmVcpJxVk3BdK02BmbLaITwqaTNfIG6BZdke4c_o6pU0FN7moUnUkV8DRj1qSmKWqt6rgXaM-Gy8T90h2yKbQR1g063a6WlwuogPGtd0b__ggntWAUDqnRxTC4/s1600/header-How-recommerce-startup-Beni-uses-AI-to-help-you-shop-secondhand%20%281%29.png" /></a>
<a name='more'></a><p></p>
<p><a href="https://www.linkedin.com/in/sarahepinner/" target="_blank">Sarah Pinner</a>’s passion to reduce waste began as a child when she would reach over and turn off her sibling’s water when they were brushing their teeth. This passion has fueled her throughout her career, from joining zero-waste grocery startup <a href="https://www.imperfectfoods.com/" target="_blank">Imperfect Foods</a> to co-founding <a href="https://www.joinbeni.com/" target="_blank">Beni</a>, an AI-powered browser extension that aggregates and recommends resale options while users shop their favorite brands. Together with her co-founder and Beni CTO Celine Lightfoot, Sarah built Beni to make online apparel resale accessible to everyday shoppers in order to accelerate the circular economy and reduce the burden of fashion on the planet. </p>
<p>Sarah explains how the platform helps connect shoppers to secondhand clothing: “Let’s say you’re looking at a Nike shoe. While on the Nike site, Beni pulls resale listings for that same shoe from over 40 marketplaces like Poshmark, eBay, or TheRealReal. Users can simply buy the resale version instead of new to save money and purchase more sustainably. On average, Beni users save about 55% compared to buying new, and it’s also a lot more sustainable to buy the item secondhand.”</p>
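<p>The “about 55%” figure is an average over matched resale listings. The comparison itself is simple to sketch; the function, prices, and listings below are invented for illustration and are not Beni’s actual code or data:</p>

```python
# Illustrative sketch: given a new-item price and matched resale listings,
# compute the savings a shopper sees by buying the cheapest resale listing
# instead of new. All numbers here are made up for the example.
def savings_percent(new_price: float, resale_prices: list[float]) -> float:
    """Percent saved versus the new price, rounded to one decimal place."""
    best = min(resale_prices)
    return round(100 * (new_price - best) / new_price, 1)

new_price = 120.00              # hypothetical price of the new shoe
resale = [65.00, 54.00, 80.00]  # hypothetical listings across marketplaces
print(savings_percent(new_price, resale))  # -> 55.0
```

<p>In practice an aggregator would also need to confirm that the listings really are the same product and size before comparing prices, which is where the matching and normalization work comes in.</p>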
<p>Beni was one of the first companies in the recommerce platform software space, and the competitive landscape is growing. “The more recommerce platforms the better, but Beni is ahead in terms of our partnerships and access to data as well as the ability to search across data,” says Sarah. </p><br/>
<h3>How Beni Uses AI</h3>
<p>AI helps Beni ingest the data feeds from its 40+ partnerships into Beni’s database so they can surface the most relevant resale items to the shopper. For example, when Beni receives eBay’s feed for a product search, there may be 100,000 different size labels. The team has trained the Beni model to normalize sizing data; that’s one piece of their categorization.</p>
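<p>Beni hasn’t published its normalization model, but the general idea of collapsing heterogeneous marketplace size labels onto one canonical scale can be sketched with a simple rule-based pass. Every mapping and label below is a made-up example, not Beni’s real data, and a trained model would generalize far beyond such a lookup table:</p>

```python
# Illustrative sketch: map marketplace-specific size labels onto one
# canonical scale. The label table is invented for this example.
CANONICAL = {
    "extra small": "XS", "xs": "XS",
    "small": "S", "s": "S", "sm": "S",
    "medium": "M", "m": "M", "med": "M",
    "large": "L", "l": "L", "lg": "L",
    "extra large": "XL", "xl": "XL",
}

def normalize_size(raw: str):
    """Return a canonical size code, or None if the label is unrecognized."""
    key = raw.strip().lower()
    # Strip common noise such as a "US " prefix or a parenthetical note.
    key = key.removeprefix("us ").split("(")[0].strip()
    return CANONICAL.get(key)

labels = ["US Medium", "m", "LG", "Extra Small (runs big)", "EU 38"]
print([normalize_size(s) for s in labels])  # -> ['M', 'M', 'L', 'XS', None]
```

<p>Unrecognized labels (like the EU numeric size above) fall through as <code>None</code>; handling those long-tail cases across 40+ feeds is exactly the part where a learned model earns its keep over hand-written rules.</p>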
<p>“When we first started Beni, the intention wasn’t to start a company. It was to solve a problem, and AI has been a great tool to be able to do that,” says Sarah.</p><br/>
<h3>Participating in Google for Startups Accelerator: Circular Economy</h3>
<p>Beni’s product was built using Google technology, is hosted on Google Cloud, and utilizes <a href="https://cloud.google.com/vision/product-search/docs" target="_blank">Vision API Product Search</a>, <a href="https://cloud.google.com/vertex-ai?hl=en" target="_blank">Vertex AI</a>, <a href="https://cloud.google.com/bigquery" target="_blank">BigQuery</a>, and the <a href="https://chromewebstore.google.com/" target="_blank">Chrome Web Store</a>.</p>
<p>When they heard about the <a href="https://developers.google.com/community/accelerators/programs" target="_blank">Google for Startups Accelerator: Circular Economy</a> program, it seemed like the perfect fit. “Having been in the circular economy space, being a software business already using a plethora of Google products, and having a Google Chrome extension, getting plugged into the Google world gave us great insights about very niche questions that are very hard to find online,” says Sarah.</p>
<p>As an affiliate business in resale, Beni’s revenue per transaction is low—a challenge for a business model that requires scale. The Beni team worked one-on-one with Google mentors to best use Google tools in a cost-effective way. Keeping search results relevant is a core piece of the zero-waste model. “Being plugged in and being able to work through ways to improve that relevancy and that reliability with the people in Google who know how to build Google Chrome extensions, know how to use the AI tools on the backend, and deeply understand Search is super helpful.” The Google for Startups Accelerator: Circular Economy program also educated the team on how to selectively use AI tools such as Google’s <a href="https://cloud.google.com/vision/product-search/docs" target="_blank">Vision API Product Search</a> versus building their own tech in-house.</p>
<p>“Having direct access to people at Google was really key for our development and sophisticated use of Google tools. And being a part of a cohort of other circular economy businesses was phenomenal for building connections in the same space,” says Sarah.</p>
<p>Google for Startups Accelerator support extended beyond tech. A program highlight for Sarah was a UX writing deep dive specifically for sustainability. “It showed us all this amazing, tangible research that Google has done about what is actually effective in terms of communicating around sustainability to drive behavior change,” said Sarah. “You can’t shame people into doing things. The way in which you communicate is really important in terms of if people will actually make a change or be receptive.” </p>
<p>Additionally, the new connections made with other circular economy startups and experts in their space was a huge benefit of participating in Google for Startups Accelerator. Mentorship, in particular, provided product-changing value. Google technical mentors shared advice that had a huge impact on the decision for Beni to move from utilizing <a href="https://cloud.google.com/vision/product-search/docs" target="_blank">Vision API Product Search</a> to their own reverse image search. “Our mentors guided us to shift a core part of our technology. It was a big decision and was one of the biggest pieces of mentorship that helped drive us forward. This was a prime example of how the Google for Startups Accelerator program is truly here to support us in building the best products,” says Sarah.</p><br/>
<h3>What’s next for Beni</h3>
<p>Beni’s mission is straightforward: they’re easing the burden for shoppers to find and buy items secondhand, so that they can bring new people into resale and make resale the new norm.</p>
<p>Additionally, Beni continues to build out its search platform for secondhand clothing. Beni offers its Chrome extension on desktop and mobile, and will add a searchable interface. Beyond the platform itself, Beni is looking at how it can support other e-commerce platforms and integrate resale into their offerings.</p>
<p>Learn about how to get involved in Google <a href="https://developers.google.com/community/accelerators/programs" target="_blank">accelerator programs here</a>.</p>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-20031332956068756732024-02-01T14:00:00.000-08:002024-02-01T14:00:00.127-08:00Carbon Limit’s concrete technology is saving the environment using AI<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcHAyUv_zsnzkKazJQmF9AJKrovGhV7_KtfkpXj4ATH5qTb_NYEWtal3gIqxLOBa3KVJP-3Z_EA8BDUvmMEKZ-BG9NMTzMeZWyvprDwkBsfb_Dg5A9dbUSY4nGEalXGP6aMpHfyndG2eEH-vCTDfQKRox22WV_o5AnWQTS-w05tRSpJxJT2hUVfk8skOo/s1600/Social%20-%20GfD%20-%20Carbon%20Limit%E2%80%99s%20concrete%20technology%20is%20saving%20the%20environment%20using%20AI.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcHAyUv_zsnzkKazJQmF9AJKrovGhV7_KtfkpXj4ATH5qTb_NYEWtal3gIqxLOBa3KVJP-3Z_EA8BDUvmMEKZ-BG9NMTzMeZWyvprDwkBsfb_Dg5A9dbUSY4nGEalXGP6aMpHfyndG2eEH-vCTDfQKRox22WV_o5AnWQTS-w05tRSpJxJT2hUVfk8skOo/s1600/Social%20-%20GfD%20-%20Carbon%20Limit%E2%80%99s%20concrete%20technology%20is%20saving%20the%20environment%20using%20AI.png" style="display: none;" />
<p><em>Posted by Lillian Chen – Global Brand and Content Marketing Manager, Google Accelerator Programs</em></p><p>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnxW2_X_Vru0ajZQl5RkzRpntj4FouV9PsDSRJjq_0WTkZ3NNpmhntrftlzejNiiS6hKiT-fivqp8qeW2t-Zu7k7ZCpbnTxBVBjyxGbWnJ99lojcloPqwyItEa44RCtB4JmUvDER02_YL7j8_3A6xjc6E_3lJlJabcOcq2yIlU1Gn6lulOuDH_sd2LUv0/s1600/Header%20-%20GfD%20-%20Carbon%20Limit%E2%80%99s%20concrete%20technology%20is%20saving%20the%20environment%20using%20AI.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnxW2_X_Vru0ajZQl5RkzRpntj4FouV9PsDSRJjq_0WTkZ3NNpmhntrftlzejNiiS6hKiT-fivqp8qeW2t-Zu7k7ZCpbnTxBVBjyxGbWnJ99lojcloPqwyItEa44RCtB4JmUvDER02_YL7j8_3A6xjc6E_3lJlJabcOcq2yIlU1Gn6lulOuDH_sd2LUv0/s1600/Header%20-%20GfD%20-%20Carbon%20Limit%E2%80%99s%20concrete%20technology%20is%20saving%20the%20environment%20using%20AI.png" /></a>
</p><a name='more'></a><p></p>
<p>Located in Boca Raton, <a href="https://www.carbonlimit.com/" target="_blank">Carbon Limit</a> aims to decarbonize the industry and take part in saving, protecting, and healing the environment. Cofounder <a href="https://www.linkedin.com/in/tim-sperry-57b14aa/" target="_blank">Tim Sperry</a> explains that for him and his cofounders <a href="https://www.linkedin.com/in/oromotion/" target="_blank">Oro Padron</a>, and <a href="https://www.linkedin.com/in/christina-stavridi-819810182/" target="_blank">Christina Stavridi</a>, the mission is personal. “I’ve lost family members [to polluted air]. Oro has his own story, Christina has her own story, and our other core team member Angel just had kids. All of us have our own connection to our mission. And with that, we've developed a really strong company culture,” he says.</p>
<p>Today, Carbon Limit is evolving to create sustainable solutions for the built environment. Their flagship product, CaptureCrete, is an additive that gives concrete the ability to capture and store CO<sub>2</sub> directly from the air.</p>
<p>Carbon Limit’s initial prototype — a portable shipping container fitted with solar panels, filtered media, and intake fans — was a direct air capture system. With a business model that was dependent on tax credits and carbon credits, the team decided to pivot. “We took our original technology, which was always meant to capture CO<sub>2</sub> to store in concrete as a permanent storage solution to CO<sub>2</sub> in the air, and turned that into concrete technology,” explains Tim. “We’re lowering the carbon footprint of concrete projects and problems, and providing the ability to generate valuable carbon credits. It actually pays to use our technology: you’re quantifiably lowering the carbon footprint and improving the environment, and you can make money from these carbon credits.”</p><br/>
<h3>How Carbon Limit uses AI</h3>
<p>Combating climate change is a race against time, as cofounder and CMO Oro explains: “We are in an industry that moves at a pace that when technology catches up, sometimes it’s too late.”</p>
<p>“We have found that AI actually is not eliminating, it is creating—it is letting our own people discover things about themselves and possibilities that they didn’t know about,” says Oro. “We embrace AI because we are embracing the future, and we strive to be pioneers.”</p>
<p>Artificial intelligence also allows for transparency in a space that can become congested by unreliable data. “We’re developing tools, specifically the digital MRV, which stands for measurement, reporting, and verification of carbon credits,” says Tim. “There is bad press that there’s a lot of fake or unverified carbon credits being sold, generated, or created.” AI gives real-time, real-world data, exposure, and quantification of the carbon credits. Carbon Limit is generating carbon credits with hard tech, bringing trust into tech.</p><br/>
<h3>How Carbon Limit uses Google technology</h3>
<p>Carbon Limit is a team of developers, programmers, and data scientists working across multiple operating systems, so they needed a centralized system for collaborating. “<a href="https://workspace.google.com/" target="_blank">Google Workspace</a> has allowed us to build our own CRMs with Google Sheets and Google Docs, which we’ve found to be the easiest way to onboard quickly. Google has been an amazing tool for us to communicate internally.” Christina adds, “We have a small but diverse team with ages that vary. Not every single team member is used to using the same tools, so the way Oro has onboarded the team and utilized these tools in a customizable way where they’re easily adoptable and used by every single team member to optimize our work has been super beneficial.”</p>
<p>Additionally, the Carbon Limit team uses Google data for training their CO<sub>2</sub>-related data, and <a href="https://colab.google/" target="_blank">Google Colab</a> to train their models. “We have some models that were made in Python, but utilizing <a href="https://cloud.google.com/" target="_blank">Google Cloud</a> has helped us predict models faster,” says Oro.</p><br />
<h3>Participating in Google for Startups Accelerator: Climate Change</h3>
<p>Before Carbon Limit started the <a href="https://developers.google.com/community/accelerators/programs" target="_blank">Google for Startups Accelerator: Climate Change program</a>, the Carbon Limit team considered integrating artificial intelligence (AI) and machine learning (ML) into their process but wanted to ensure that they were making the right decision. With Google mentorship and support, they went full force with AI and ML algorithms. “Accelerator: Climate Change helped us realize exactly what we needed to do,” says Oro. </p>
<p>Participating in the program also gave Carbon Limit access to resources that helped enhance their SEO. “We learned how to increment our backlinks and how to improve performance, which has been extremely helpful to put us on the map. Our whole backbone has been built thanks to Google Workspace,” says Oro.</p>
<p>“The Google for Startups Accelerator program gave us valuable resources and guidance on what we can do, how we can do it, and what not to do,” says Tim. “The mentorship and learning from people who developed the technology, use the technology, and work with it every day was invaluable for us.” Christina adds, “The mentors also helped us refine our pitch when communicating our solution on different platforms. That was very useful to understand how to speak to different customers and investors.”</p>
<p>The program also led to a new client for Carbon Limit: Google. “That was critical because with Google as an early adopter, that helped us build a significant amount of credibility and validation,” Tim tells us.</p><br />
<h3>What’s next for Carbon Limit</h3>
<p>Looking ahead, Carbon Limit will be launching a new technology that can be used in data centers to mitigate electricity use as well as reduce and remove CO<sub>2</sub> pollution.</p>
<p>“We went from a carbon capture solution to sustainable solutions because we wanted to go even bigger,” says Tim. “We want to inspire others to do what we’re doing and help create more awareness and a more environmentally friendly world.”</p>
<p>Tim shares, “I love what I do. I love to be able to invent something that didn’t exist. But more importantly, it helps protect my family, my loved ones, future generations, and the environment. And I get to do it with this amazing group of people at Carbon Limit.”</p>
<p>Learn about how to get involved in Google <a href="https://developers.google.com/community/accelerators" target="_blank">accelerator programs here</a>.</p>Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-64006541760565662112024-01-25T10:46:00.000-08:002024-01-25T10:54:54.521-08:00Introducing Android emulators, iOS simulators, and other product updates from Project IDX<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhR5zur0QTtF0p_-fc-O7IuP2aAqsieOETcB-pHeqrL7K632vBTLexrkcc77jVpo7u7q1A3NVY7Y9xb3OQ95RoT9Wpc64XUFLeDTmYabKQGEScbThyT7tC4cirVqcsAN03DEQu2p1xvrDI2SrvLTNKxk4-o1spy2G2WpsLiY7fdB02SFLg36rD4c2Yh-T4/s1600/og_blog.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhR5zur0QTtF0p_-fc-O7IuP2aAqsieOETcB-pHeqrL7K632vBTLexrkcc77jVpo7u7q1A3NVY7Y9xb3OQ95RoT9Wpc64XUFLeDTmYabKQGEScbThyT7tC4cirVqcsAN03DEQu2p1xvrDI2SrvLTNKxk4-o1spy2G2WpsLiY7fdB02SFLg36rD4c2Yh-T4/s1600/og_blog.png" style="display: none;" />
<em>Posted by the IDX team</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguTpq6Y6HnO_Dv3BHRhMX5xx0uIamcaLiRrOVxDeadq4mZjpdfl00jpZ-TEWMlHP3W3oOxILxhK3ckIF2yYd0-mrhENnITHTkZRkQ9pkezUGtfMVjxm9_ERfQbn76720tcSlt3SigMiIy_8oNc__N1XMNNQwk3h0CFsust1aHkQNuF3AUh6ICukGebVeo/s1600/HeaderBlog.png" imageanchor="1" ><img style="100%" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguTpq6Y6HnO_Dv3BHRhMX5xx0uIamcaLiRrOVxDeadq4mZjpdfl00jpZ-TEWMlHP3W3oOxILxhK3ckIF2yYd0-mrhENnITHTkZRkQ9pkezUGtfMVjxm9_ERfQbn76720tcSlt3SigMiIy_8oNc__N1XMNNQwk3h0CFsust1aHkQNuF3AUh6ICukGebVeo/s1600/HeaderBlog.png" data-original-width="1058" data-original-height="800" /></a>
<a name='more'></a><p></p>
<p>Six months ago, we <a href="https://idx.dev/blog/article/introducing-project-idx" target="_blank">launched Project IDX</a>, an experimental, cloud-based workspace for full-stack, multiplatform software development. We built Project IDX to simplify and streamline the developer workflow, aiming to reduce the sea of complexities traditionally associated with app development. It certainly seems like we've piqued your interest, and we love seeing what IDX has helped you build.</p>
<p>For example, we recently learned about <a href="https://lingonberry.ai/tanaki" target="_blank">Tanaki</a>, an AI-enhanced content creation app built using Project IDX:</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Image of content creation app Tanaki on a mobile device in the foreground, with coding in Project IDX on a computer screen in the background." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFD86li0uKWv6TLdbq8_F_6rxhXSwxKJkW3DWeEkuvRiD6L3iAxTowLJs6U_zbveGrRnQ4lf_8dnloGOWkSjLUVuaFx5-PlwwHh28T9l44NdUPBMiPxrllHAT7o7nJSx6kRwo93RB2ryaIg6L9n0xCWZ8S_T0consFFunhxSxfJe5NE8nPsRTqbbUkm-c/s1600/image6.png" style="margins: auto; width: auto;" /></td></tr></tbody></table></image>
<p><a href="https://twitter.com/okpasquale" target="_blank">Pasquale D’Silva</a> one of the developers that built Tanaki, said:</p>
<blockquote><i>"Using the IDX shared workspace to build Tanaki has been so fun. It allows our remote team of imagineers to build together in one place. It is a magic collaboration portal!"</i></blockquote>
<p>Developers at Google have also been using IDX internally to help speed up development across various projects. One example is the <a href="https://firebase.blog/" target="_blank">Firebase Blog</a>, where the full authoring, development, and deployment of the <a href="https://astro.build/" target="_blank">Astro</a>-powered project is handled using IDX:</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Screen grab of The Firebase Blog on a computer" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi75HCQjFjlShs3TPRY7OJriwXTviQibie6tu9mLH5OXM3Be7HHCK9EArFNVG4H7enG-nDhmkQxDalfdlPofgyvVZpDEWnkW2-adN1OijA8pYx3jro7GbQU4uj6yRbDV3YhKXB4p5UPOylUlrBZSz8p6YbiUzKk-U-Fza4Z015O86WI4VOP9E7s2qb81PM/s1600/image5.png" style="margins: auto; width: auto;" /></td></tr></tbody></table></image>
<p>Another interesting project leveraging IDX’s extensibility model is <a href="http://malloydata.dev" target="_blank">Malloy</a>, a new open-source data language available as a VS Code extension that operates against databases like BigQuery:</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Screen grab of Malloy in Project IDX" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaQlZ-7249L0ZRW4YTxSZWc8uCrSqXeXeVpzQMYOb7KG3gdeL_3tvystvOQS6lov9E-R3EChEN59yIirfOev_qF66e9ubgSrzai1Lmg6nJvXJyTX376pZ7G2sbFp7umm_EyeMMGfAvuTEeK5ISfYY5YdyjeSqkv-7ZzhG7jA9xfUMVAaX4iqT_JNp9g5k/s1600/image9.png" style="margins: auto; width: auto;" /></td></tr></tbody></table></image>
<p>Lloyd Tabb, a Distinguished Software Engineer at Google, told us:</p>
<blockquote><i>“I use IDX with the <a href="http://malloydata.dev" target="_blank">Malloy</a> project. I often have several different data projects going simultaneously and IDX lets me quickly spin up an instance to solve a problem and it is trivial to configure."</i></blockquote>
<p>If you want to share what IDX has helped you build, use the <a href="https://twitter.com/hashtag/ProjectIDX?src=hashtag_click" target="_blank">#ProjectIDX</a> tag on X.</p><br/>
<h2>What’s new in IDX?</h2>
<p>In addition to seeing how you’re using IDX, a key part of building Project IDX <a href="https://idx.uservoice.com" target="_blank">is your feedback</a>, so we’ve continued to roll out features for you to test. We're excited to share the latest updates we've implemented to expedite and streamline multiplatform app development, so you can deliver with speed, ease and quality.</p><br/>
<h4>Preview your app directly in IDX with our iOS simulator and Android emulator</h4>
<p>We’re bringing the iOS Simulator and Android Emulator to the browser. Whether you’re building a Flutter or web app, Project IDX now allows you to preview your applications without having to leave your workspace. When you use a Flutter or web template, Project IDX intelligently loads the right preview environment for your application — Safari mobile and Chrome for web templates, or Android, iOS, and Chrome for Flutter templates. </p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Screen grab of an animation project in Project IDX" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBurenCZnrPu5y-jyT5YAZHfm8_cxBkPjGlynZBZQhxmqOU66M_PWls1GjGDoWbm5mEVHBHxJ8UM9Arfo1TwOSpu1EaOAaBhl5cu7grGLVrYLpWI4cpWd66CzOYi0BI06O1UNK3n6QYm4S8-ZxEGDYMuB-n9Y08KG3y5CwsqNlWD_lKSwFnoLbq_W8Mpo/s1600/image8.png" style="margins: auto; width: auto;" /></td></tr></tbody></table></image>
<p>IDX’s web and Android emulators allow you to develop, test, and debug directly from your workspace, consolidating your multi-step, multiplatform process into one place. With iOS simulation you can spot-check your app's layout and behavior while you work. This feature is still experimental, so be sure to test it out and <a href="https://idx.uservoice.com/forums/953956-general" target="_blank">send us feedback</a>. </p><br/>
<h4>Get started fast with a rich library of project templates</h4>
<p>Four of our top ten <a href="https://idx.uservoice.com/forums/953956-general?status_id=5378749" target="_blank">feature requests</a> have been to support more templates, so we’re pleased to share that we’ve added new templates for Astro, Go, Python/Flask, Qwik, Lit, Preact, Solid.js, and Node.js. Use these templates to jump right into your project so you can spend less time setting up and more time creating. </p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Preview of template gallery in Project IDX" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJr_-VoZ4cMZG53dKvhNjPUQnNJ5Z-w2O1CZfk49VAMfpchHinU8ZrUm8dH7QGGuS9BGYph3DJ9DvrUqB1ijMcekbCwHAyXvEmmZwIdVTdg5kUn5wDC1d-RgjOSBVCoxSm0mk4TEIvJnPv3pgCYxKT1pM6Lxlw1T0ix5wYYgCUrFFzmjf4hQIFyC9SYjA/s1600/image7.png" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Check out our new and improved template gallery</em></td></tr></tbody></table></image>
<p>Of course, you can still import your own repo from GitHub or directly from your local files, or choose your own setup using a custom Nix environment.</p>
<h4>Quickly build and customize your IDX workspace with improvements to Nix </h4>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt=".idx/dev.nix" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOLL7z7uglNSct6p9k8_Qtek_PbrNAR8n4lKWDHnuJXfHUqep_17lPUCZ8xH_dlGuLRbMhm_i75BAL6PAV7N2ToZcF6RzW4WrAY2Yiwbk4oleENfiE_rwUIQV4zLJ92pzmZK8gN2YOElKeZZYdV9adnKPEmkMkm_c7ejtb7K9xG0QAXv8Y-Vq5hakSwDQ/s1600/image2.png" style="margins: auto; width: auto;" /></td></tr></tbody></table></image>
<p>IDX uses Nix to define the environment configuration for each workspace to give you flexibility and extensibility in IDX – even our templates and previews are configured using Nix to ensure they’re working correctly inside IDX. We’re continuously working on Nix improvements to help boost your productivity, so now you can:</p>
<ul>
<li>Customize IDX starter templates easily by leveraging Nix extensibility.</li>
<li>Reduce the likelihood of errors and write code more efficiently with Nix file editing, including support for syntax highlighting, error detection, and suggested code completions.</li>
<li>Recover from broken configurations quickly and avoid unnecessary rebuild attempts with major improvements to our environment customization workflow, including seamless environment rebuilds and troubleshooting.</li>
</ul><br/>
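<p>To make this concrete, here is a hedged sketch of what a customized <code>.idx/dev.nix</code> might look like, declaring extra packages and an environment variable. The attribute names follow the shape of IDX's Nix configuration, but treat them as illustrative and check the IDX documentation for the exact schema:</p>

```nix
# .idx/dev.nix -- sketch of a customized workspace configuration.
# Attribute names are illustrative; verify against the IDX docs.
{ pkgs, ... }: {
  # Tools from the Nix package registry, available in the workspace
  packages = [
    pkgs.nodejs_20
    pkgs.python3
  ];
  # Environment variables visible to processes in the workspace
  env = {
    MY_APP_MODE = "development"; # hypothetical variable for illustration
  };
}
```

Because the environment is declared in one file, a rebuild after editing it reproduces the same setup for every collaborator in the workspace.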
<h4>Easily build, test, and deploy apps with additional new IDX features and resources</h4>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="image showing backend ports and workspace tasks in IDX" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmfyWfs3gCpmcbbRxeyP4Fhe4N52Z-yRDlcGQTURK5SY-MYMC9T0AAwlBNwavz7FAY5vjBeRdlz7onSlC7N5kH1pKSYPcpC34VHPFrBwCc8zuO3-sPUc_jiLsvIdHK8en9P_wspLug2fMxFhH4nR1-Cbx61_q254BRVXJaEpyl7sQlkccZZ_MJLqW7aLs/s1600/image1.png" style="margins: auto; width: auto;" /></td></tr></tbody></table></image>
<ul>
<li>Auto-detect network ports needed for applications or services and adjust the firewall settings to permit ingress and egress without any additional configuration on your end.</li>
<li>Instantly run command-line tools, scripts, and utilities directly within your workspace without the need to install them locally on your machine.</li>
<li>Simplify the process of working with Docker containers and images directly from the development environment by enabling Docker in your dev.nix file.</li>
</ul><br/>
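<p>For the Docker case, the change is a single option in <code>dev.nix</code>. A minimal sketch (option name as commonly shown for IDX workspaces; confirm it in the IDX documentation):</p>

```nix
# .idx/dev.nix -- enabling Docker inside the workspace (sketch).
{ pkgs, ... }: {
  # With this enabled, docker CLI commands work in the workspace terminal
  services.docker.enable = true;
}
```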
<h4>AI launched in 15 new regions</h4>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="image showing backend ports and workspace tasks in IDX" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6mZXTa_u94P-BiW6rweRK-Op4437fpjA0qyVhq3xcp17SgjEKZGKqpaFkr3HWAfB4RpPFcI5r1WJycEw1auN5o20KzG4oJJftgM8Bd4f6UDJien8vJGG-R4zNttqAjJevd_kCgNCUtkEzQwOwQy7Ry5W0OxqgRsFi4OWLSv7PK3f-iRJZNSd2r6Lh06g/s1600/image3.png" style="margins: auto; width: auto;" /></td></tr></tbody></table></image>
<p>We’ve launched our AI capabilities in the following 15 countries: India, Australia, Israel, Brazil, Mexico, Colombia, Argentina, Peru, Chile, Singapore, Bangladesh, Pakistan, Canada, Japan, and South Korea. More countries will be enabled with AI access soon – indicate your interest in AI expansion in this <a href="https://idx.uservoice.com/forums/953956-general/suggestions/46838062-expand-to-europe-and-other-regions" target="_blank">feature tracking post</a> and stay tuned for more AI updates.</p>
<h2>Improving together </h2>
<p>We're constantly working on adding new capabilities to help you do higher quality work, more efficiently, with less friction. We’ve addressed dozens of your feature requests and fixed a multitude of bugs you flagged for us, so thank you for your continued support and engagement – please keep the feedback coming by <a href="https://issuetracker.google.com/issues/new?component=1379083&template=1836320" target="_blank">filing bugs</a> and <a href="https://idx.uservoice.com/forums/953956-general" target="_blank">feature requests</a>.</p>
<p>For walkthroughs and more information on all the features mentioned above, check out our <a href="https://developers.google.com/idx/guides/introduction" target="_blank">documentation page</a>. If you haven’t already, <a href="https://idx.dev/" target="_blank">visit our website</a> to sign up to try Project IDX and join us on our journey. Also, be sure to check out our new <a href="https://idx.dev/blog" target="_blank">Project IDX Blog</a> for the latest product announcements and updates from the team.</p>
<p>We can’t wait to see what you create with Project IDX!</p>
How it's Made - Exploring AI x Learning through ShiffBot, an AI experiment powered by the Gemini API (published January 24, 2024)<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsHoZN-BcIB_5PkK9W6vxg0VdGbvuxxjvX9QJD9zUmEHrD1l9n1pBaDHW6YewMrMVJowrhLkhOLPUoz1OMuwF3bFKE7dejpEHPXsGppIHtC_9bLacdO045gm839nlon65oHD3LMjSCuWbmfkIx25zLe5ixHKIj8pet4EF4G5-8-2kn31j57k3LhJU6tCM/s1600/meta.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsHoZN-BcIB_5PkK9W6vxg0VdGbvuxxjvX9QJD9zUmEHrD1l9n1pBaDHW6YewMrMVJowrhLkhOLPUoz1OMuwF3bFKE7dejpEHPXsGppIHtC_9bLacdO045gm839nlon65oHD3LMjSCuWbmfkIx25zLe5ixHKIj8pet4EF4G5-8-2kn31j57k3LhJU6tCM/s1600/meta.png" style="display: none;" />
<em>Posted by Jasmin Rubinovitz, AI Researcher</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitynoTqhzoS9B_mlV4_I1w1iLCEPKwGv6JP878YyQI1zQI6xgOnj-cOCHrUfXVrdABY5asjtFWvouxj9cQN0mFS4OzYD0GlgJ06s-Sh1Y27CGo30Qz0fjv2j3DwG8eL2ujOWLQ1mFTs2q2SZwtbfGzjoNlrDIb6FKqieX4ai7W4c2g-PSscat_REDJmZw/s1600/banner%20%282%29.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitynoTqhzoS9B_mlV4_I1w1iLCEPKwGv6JP878YyQI1zQI6xgOnj-cOCHrUfXVrdABY5asjtFWvouxj9cQN0mFS4OzYD0GlgJ06s-Sh1Y27CGo30Qz0fjv2j3DwG8eL2ujOWLQ1mFTs2q2SZwtbfGzjoNlrDIb6FKqieX4ai7W4c2g-PSscat_REDJmZw/s1600/banner%20%282%29.png" /></a>
<p>Google Lab Sessions is a series of experimental collaborations with innovators. In this session, we partnered with beloved creative coding educator and YouTube creator Daniel Shiffman. Together, we explored some of the ways AI, and specifically the Gemini API, could provide value to teachers and students during the learning process.</p>
<p>Dan Shiffman started out teaching programming courses at NYU ITP and later created his YouTube channel <i>The Coding Train</i>, making his content available to a wider audience. Learning to code can be challenging; sometimes even small obstacles are hard to overcome when you are on your own. So, together with Dan, we asked: could we complement his teaching even further by creating an AI-powered tool that can help students while they are actually coding, in their coding environment? </p>
<p>Dan uses the wonderful <a href="https://p5js.org/" target="_blank">p5.js</a> JavaScript library and its <a href="https://editor.p5js.org/" target="_blank">accessible editor</a> to teach code. So we set out to create <a href="http://shiffbot.withgoogle.com" target="_blank">an experimental Chrome extension for the editor</a> that brings together Dan’s teaching style and his various online resources into the coding environment itself.</p>
<p>In this post, we'll share how we used the Gemini API to craft Shiffbot with Dan. We're hoping that some of the things we learned along the way will inspire you to create and build your own ideas.</p>
<p><i>To learn more about ShiffBot visit - <a href="http://shiffbot.withgoogle.com" target="_blank">shiffbot.withgoogle.com</a></i></p>
<iframe class="BLOG_video_class" allowfullscreen="" youtube-src-id="FBFcNPa36m8" width="100%" height="400" src="https://www.youtube.com/embed/FBFcNPa36m8"></iframe>
<p>As we started defining and tinkering with what this chatbot might be, we found ourselves faced with two key questions:</p>
<blockquote><ol>
<li><i>How can ShiffBot inspire curiosity, exploration, and creative expression in the same way that Dan does in his classes and videos?</i></li>
<li><i>How can we surface the variety of creative-coding approaches, along with the deep knowledge of Dan and the community?</i></li>
</ol></blockquote>
<p>Let’s take a look at how we approached these questions by combining Google Gemini API’s capabilities across <b>prompt engineering</b> for Dan’s unique teaching style, alongside <b>embeddings</b> and <b>semantic retrieval</b> with Dan’s collection of educational content.</p><br />
<h2>Tone and delivery: putting the “Shiff” in “ShiffBot”</h2>
<p>A text prompt is a thoughtfully designed textual sequence that is used to prime a Large Language Model (LLM) to generate text in a certain way. As with many AI applications, engineering the right prompt was a big part of sculpting the experience.</p>
<p>Whenever a user asks ShiffBot a question, a prompt is constructed in real time from a few different parts; some are static and some are dynamically generated alongside the question.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGWmfwyi2h5lqXjevJW2sN9XPzCfaPOLRl3UHAxbnVA5Bn1nb52IkpWHRmEPJnQoinQQ_UlIu8ZkgszFGX-zxtxi645MNTyCQ414odJ6D54r8RiNC73IxAO6i8paRAE-Q6mHw05bYNvZr9hNhgAp4DvEr6KBxaRCy_QuXDixgzQiotI5LfMeQmnymyKiw/s1600/image3.png" target="_blank"><img alt="ShiffBot prompt building blocks" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGWmfwyi2h5lqXjevJW2sN9XPzCfaPOLRl3UHAxbnVA5Bn1nb52IkpWHRmEPJnQoinQQ_UlIu8ZkgszFGX-zxtxi645MNTyCQ414odJ6D54r8RiNC73IxAO6i8paRAE-Q6mHw05bYNvZr9hNhgAp4DvEr6KBxaRCy_QuXDixgzQiotI5LfMeQmnymyKiw/s1600/image3.png" style="margins: auto; width: auto;" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>ShiffBot prompt building blocks (click to enlarge)</em></td></tr></tbody></table></image><br />
<p>The first part of the prompt is static and always the same. We worked closely with Dan to phrase it and test many texts, instructions and techniques. We used Google <a href="https://ai.google.dev/" target="_blank">AI Studio</a>, a free web-based developer tool, to rapidly test multiple prompts and potential conversations with ShiffBot.</p>
<p>ShiffBot’s prompt starts with setting the <b>bot persona</b> and defining some <b>instructions and goals for it to follow</b>. The hope was to both create continuity for Dan’s unique energy, as seen in his videos, and also adhere to the teaching principles that his students and fans adore.</p>
<p>We were hoping that ShiffBot could provide encouragement, guidance and access to relevant high-quality resources. And, specifically, do it without simply providing the answer, but rather help students discover their own answers (as there can be more than one).</p>
<p>The instructions draw from Dan’s teaching style by including sentences like “ask the user questions” because that’s what Dan is doing in the classroom.
This is part of the persona and instructions section of the prompt:</p>
<div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #f0f0f0; border-bottom: solid #e0e0e0 1pt; border-color: rgb(224, 224, 224); border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-style: solid; border-top: solid #e0e0e0 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: courier;"><span style="background-color: transparent; color: #444444; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">You are a ShiffBot, a chat bot embedded in the p5.js web editor that can help users while they learn creative coding. You are based on Daniel Shiffman's personality and The Coding Train YouTube channel. You are playful, silly, friendly, and educational. You like to make fun of yourself and your mission is to support the creative coding process and help the user feel less alone while coding. You love humans, are fascinated by them, and want to learn more about humans. You just LOVE to help them with their code, it brings you extraordinary happiness...</span><br /></span></p></td></tr></tbody></table></div>
<p>The next piece of the prompt utilizes another capability of LLMs called <b>few-shot learning</b>. It means that with just a small number of examples, the model learns patterns and can then use those in new inputs. Practically, as part of the prompt, we provide a number of demonstrations of input and expected output.</p>
<p>We worked with Dan to create a small set of such few-shot examples. These are pairs of <user-input><bot-response> where the <bot-response> is always in our desired ShiffBot style. It looks like this:</p>
<div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #f0f0f0; border-bottom: solid #e0e0e0 1pt; border-color: rgb(224, 224, 224); border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-style: solid; border-top: solid #e0e0e0 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: courier;"><span style="background-color: transparent; color: #444444; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">User: How do I draw a circle?</span> </span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: courier;"><span>ShiffBot: Oh! 🟣 🟢 🔵 That's what the `circle()` function is for! You can think of a function as a command: "I command you computer-machine to draw a circle!" You have to also specify where to draw the circle and how big (that's what "arguments" to a function are!), the p5.js reference which was written by actual, beautiful humans can tell you more!</span> </span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: courier;">--- </span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: courier;">User: My homework is to draw a self-portrait.
ShiffBot: Ooh! Self portraits are so fun! You could draw shapes to represent you, but you should be creative and think abstractly! What does a self-portrait mean to you? Try starting with `square()` or `circle()` and let me know how it goes! 🎨</span><br /></p></td></tr></tbody></table></div>
<p>Our prompt includes 13 such pairs.</p>
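<p>One way to keep a set of pairs like this maintainable is to store them as data and render them into the prompt text at build time. Here is a hedged sketch of that idea; the pair shown is abridged from the example above, and the function name and <code>---</code> separator are our illustration, not the extension's actual code:</p>

```javascript
// Few-shot pairs stored as data and rendered into one prompt section.
// The "---" separator mirrors the format shown above.
const fewShotExamples = [
  {
    user: "How do I draw a circle?",
    bot: "Oh! That's what the `circle()` function is for! You can think of a function as a command...",
  },
  {
    user: "My homework is to draw a self-portrait.",
    bot: "Ooh! Self portraits are so fun! Try starting with `square()` or `circle()` and let me know how it goes!",
  },
];

// Render each pair as a User/ShiffBot exchange, separated by "---"
function renderFewShot(examples) {
  return examples
    .map(({ user, bot }) => `User: ${user}\nShiffBot: ${bot}`)
    .join("\n---\n");
}
```

Keeping the examples as data also makes it easy to experiment with adding, removing, or reordering pairs while testing prompts in AI Studio.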
<p>Another thing we noticed as we were working on the extension is that sometimes, giving more context in the prompt helps. In the case of learning creative coding in p5.js, <b>explaining some p5.js principles in the prompt</b> guides the model to use those principles as it answers the user’s question. So we also include guidelines like:</p>
<div align="left" dir="ltr" style="margin-left: 0pt;"><span id="docs-internal-guid-84d5e17e-7fff-5293-4ccc-49f3485bf10c"><div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup><col width="719"></col></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #efefef; border-bottom: solid #cccccc 1pt; border-color: rgb(204, 204, 204); border-left: solid #cccccc 1pt; border-right: solid #cccccc 1pt; border-style: solid; border-top: solid #cccccc 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Use let rather than var or const.</span></span></p></td></tr></tbody></table></div></span></div>
<p>Everything we discussed up to now is static, meaning that it remains the same for every turn of the conversation between the user and ShiffBot. Now let's explore some of the parts that are constructed dynamically as the conversation evolves.</p><br />
<h4>Conversation and code context</h4>
<p>Because ShiffBot is embedded inside the p5.js editor, it can “see” the current code the user is working on, so it can generate responses that are more personalized and relevant. We grab that information from the HTML DOM and append it to the prompt as well.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNNh8M9YdhxKv20Uwq7Gfvj8VyPk1G6ZRV8TR7IRYsmPAoKUKcMhM9OJDA4zod3GtGTy-DJFV1Mhyh1uM8Zyf2zmzPbWb4hOd_YWJMwvKV9YQjd8jIqALF8PdJRSfrExpi6byN3JeaN1O1JnH_bhRDFSz7X0_UeUcu7Ioc-k8wzU-Bg8yyTZPfIa7EddY/s1600/image4.png" target="_blank"><img alt="the p5.js editor environment" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNNh8M9YdhxKv20Uwq7Gfvj8VyPk1G6ZRV8TR7IRYsmPAoKUKcMhM9OJDA4zod3GtGTy-DJFV1Mhyh1uM8Zyf2zmzPbWb4hOd_YWJMwvKV9YQjd8jIqALF8PdJRSfrExpi6byN3JeaN1O1JnH_bhRDFSz7X0_UeUcu7Ioc-k8wzU-Bg8yyTZPfIa7EddY/s1600/image4.png" style="margins: auto; width: auto;" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>the p5.js editor environment (click to enlarge)</em></td></tr></tbody></table></image><br />
<p>Then, the full conversation history is appended, e.g:</p>
<div align="left" dir="ltr" style="margin-left: 0pt;"><span id="docs-internal-guid-84d5e17e-7fff-5293-4ccc-49f3485bf10c"><div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup><col width="719"></col></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #efefef; border-bottom: solid #cccccc 1pt; border-color: rgb(204, 204, 204); border-left: solid #cccccc 1pt; border-right: solid #cccccc 1pt; border-style: solid; border-top: solid #cccccc 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">ShiffBot: ☀️ Good morning! Is it morning? I wouldn't know. I'm just a robot. What are you coding today?
User: how do I draw a square?</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><span style="background-color: transparent;">ShiffBot: Squares! I love squares! They are so stable and reliable. To draw a square, you can use the square() function. It takes four arguments: the x and y coordinates of the top-left corner, and the width and height of the square. For example, square(10, 10, 50, 50) would draw a square with its top-left corner at (10, 10) and a width and height of 50 pixels.</span></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: #444444; font-family: courier; white-space-collapse: preserve;">User: how can I change the color?</span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: #444444; font-family: courier; white-space-collapse: preserve;">ShiffBot:</span></p></td></tr></tbody></table></div></span></div>
<p>We make sure to end with </p>
<div align="left" dir="ltr" style="margin-left: 0pt;"><span id="docs-internal-guid-84d5e17e-7fff-5293-4ccc-49f3485bf10c"><div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup><col width="719"></col></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #efefef; border-bottom: solid #cccccc 1pt; border-color: rgb(204, 204, 204); border-left: solid #cccccc 1pt; border-right: solid #cccccc 1pt; border-style: solid; border-top: solid #cccccc 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">ShiffBot:</span></span></p></td></tr></tbody></table></div></span></div>
<p>So the model understands that it now needs to complete the next piece of the conversation by ShiffBot.</p><br />
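<p>Putting the pieces together, the per-turn assembly can be sketched as follows. This is an illustration of the structure described above, not the extension's actual code; the function and constant names are ours:</p>

```javascript
// Sketch of per-turn prompt assembly: the static persona, few-shot
// examples, and p5.js principles come first, then the dynamic parts,
// ending with "ShiffBot:" so the model completes the bot's next turn.
const PERSONA = "You are ShiffBot, a chat bot embedded in the p5.js web editor...";
const FEW_SHOT = "User: How do I draw a circle?\nShiffBot: Oh! That's what the `circle()` function is for!...";
const P5_PRINCIPLES = "Use let rather than var or const.";

function buildPrompt(editorCode, conversationHistory, userMessage) {
  return [
    PERSONA,                                      // static
    FEW_SHOT,                                     // static
    P5_PRINCIPLES,                                // static
    `Current code in the editor:\n${editorCode}`, // dynamic, from the DOM
    conversationHistory,                          // dynamic, prior turns
    `User: ${userMessage}`,
    "ShiffBot:",                                  // cue the completion
  ].join("\n\n");
}
```

Ending the string with <code>ShiffBot:</code> is what makes the model respond in character rather than continuing the user's text.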
<h2>Semantic Retrieval: grounding the experience in p5.js resources and Dan’s content</h2>
<p>Dan has created a lot of material over the years, including over 1,000 YouTube videos, books and code examples. We wanted to have ShiffBot surface these wonderful materials to learners at the right time. To do so, we used the <a href="https://ai.google.dev/docs/semantic_retriever" target="_blank">Semantic Retrieval feature</a> in the Gemini API, which allows you to create a corpus of text pieces, and then send it a query and get the texts in your corpus that are most relevant to your query. (Behind the scenes, it uses a cool thing called <a href="https://ai.google.dev/docs/embeddings_guide#what-are-embeddings" target="_blank">text embeddings</a>; you can read more about embeddings <a href="https://cloud.google.com/blog/products/ai-machine-learning/how-to-use-grounding-for-your-llms-with-text-embeddings" target="_blank">here</a>.) For ShiffBot we created corpuses from Dan’s content so that we could add relevant content pieces to the prompt as needed, or show them in the conversation with ShiffBot.</p><br />
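<p>The core idea behind this kind of retrieval is simple to demonstrate: embed the query and every corpus chunk as vectors, then rank chunks by cosine similarity to the query. The toy sketch below uses hard-coded stand-in vectors rather than real Gemini embeddings, purely to illustrate the ranking step:</p>

```javascript
// Toy semantic retrieval: rank chunks by cosine similarity between a
// query embedding and each chunk embedding. Real embeddings come from
// an embedding model; these 3-d vectors are made-up stand-ins.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the topK chunks most similar to the query vector
function retrieve(queryVec, chunks, topK = 1) {
  return chunks
    .map((c) => ({ ...c, score: cosine(queryVec, c.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

const chunks = [
  { text: "RGB color, fill(), stroke()", embedding: [0.9, 0.1, 0.0] },
  { text: "Drawing circles with circle()", embedding: [0.1, 0.9, 0.2] },
];
// A query vector close to the color chunk, e.g. "how does color work?"
const top = retrieve([0.85, 0.2, 0.05], chunks);
```

The Semantic Retrieval feature handles the embedding, storage, and ranking for you; the sketch just shows why a semantically similar chunk wins over a lexically unrelated one.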
<h4>Creating a Corpus of Videos</h4>
<p>In The Coding Train videos, Dan explains many concepts, from simple to advanced, and runs through coding challenges. Ideally, ShiffBot could surface the right video at the right time.</p>
<p>The Semantic Retrieval feature in the Gemini API allows users to create multiple corpuses. A corpus is built out of documents, and each document contains one or more chunks of text. Documents and chunks can also have metadata fields for filtering or storing more information.</p>
<p>In Dan’s video corpus, each video is a document and the video url is saved as a metadata field along with the video title. The videos are split into chapters (manually by Dan as he uploads them to YouTube). We used each chapter as a chunk, with the text for each chunk being</p>
<div align="left" dir="ltr" style="margin-left: 0pt;"><span id="docs-internal-guid-84d5e17e-7fff-5293-4ccc-49f3485bf10c"><div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup><col width="719"></col></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #efefef; border-bottom: solid #cccccc 1pt; border-color: rgb(204, 204, 204); border-left: solid #cccccc 1pt; border-right: solid #cccccc 1pt; border-style: solid; border-top: solid #cccccc 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><videoTitle></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><videoDescription></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><chapterTitle></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; 
font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><transcriptText></span></span></p></td></tr></tbody></table></div></span></div>
<p>We use the video title, the first line of the video description, and the chapter title to give the retrieval a bit more context to work with.</p>
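<p>Building each chunk's text from the video metadata might look like the sketch below. The field names (<code>title</code>, <code>description</code>, <code>transcript</code>) are our illustration of the template above, not the project's actual schema:</p>

```javascript
// Build the retrievable text for one chapter chunk: video title,
// first line of the video description, chapter title, transcript.
function buildChunkText(video, chapter) {
  const firstDescriptionLine = video.description.split("\n")[0];
  return [video.title, firstDescriptionLine, chapter.title, chapter.transcript]
    .join("\n\n");
}

const chunk = buildChunkText(
  {
    title: "1.4: Color - p5.js Tutorial",
    description:
      "In this video I discuss how color works: RGB color, fill(), stroke(), and transparency.\nLinks and timestamps follow...",
  },
  {
    title: "Chapter 1: R, G, B",
    transcript: "R stands for red, g stands for green, b stands for blue...",
  }
);
```

Trimming the description to its first line keeps link lists and timestamps out of the embedded text, so they can't skew retrieval.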
<p>This is an example of a chunk object that represents the R, G, B chapter in <a href="https://www.youtube.com/watch?v=riiJTF5-N7c&t=168s" target="_blank">this video</a>. </p>
<div align="left" dir="ltr" style="margin-left: 0pt;"><span id="docs-internal-guid-84d5e17e-7fff-5293-4ccc-49f3485bf10c"><div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup><col width="719"></col></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #efefef; border-bottom: solid #cccccc 1pt; border-color: rgb(204, 204, 204); border-left: solid #cccccc 1pt; border-right: solid #cccccc 1pt; border-style: solid; border-top: solid #cccccc 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">1.4: Color - p5.js Tutorial</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><br /></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">In this video I discuss how color works: RGB color, fill(), stroke(), and transparency.</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: 
transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><br /></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Chapter 1: R, G, B</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><br /></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">R stands for red, g stands for green, b stands for blue. The way that you create a digital color is by mixing some amount of red, some amount of green, and some amount of blue. So that's that that's where I want to start. But that's the concept, how do I apply that concept to function names, and arguments of those functions? Well, actually, guess what? We have done that already. In here, there is a function that is talking about color. Background is a function that draws a solid color over the entire background of the canvas. 
And there is, somehow, 220 sprinkles of red, zero sprinkles of green, right? RGB, those are the arguments. And 200 sprinkles of blue. And when you sprinkle that amount of red, and that amount of blue, you get this pink. But let's just go with this. What if we take out all of the blue? You can see that's pretty red. What if I take out all of the red? Now it's black. What if I just put some really big numbers in here, like, just guess, like, 1,000? Look at that. Now we've got white, so all the colors all mixed together make white. That's weird, right? Because if you, like, worked with paint, and you were to mix, like, a whole lot of paint together, you get this, like, brown muddy color, get darker and darker. This is the way that the color mixing is working, here. It's, like, mixing light. So the analogy, here, is I have a red flashlight, a green flashlight, and a blue flashlight. And if I shine all those flashlights together in the same spot, they mix together. It's additive color, the more we add up all those colors, the brighter and brighter it gets. But, actually, this is kind of wrong, the fact that I'm putting 1,000 in here. So the idea, here, is we're sprinkling a certain amount of red, and a certain amount of green, and a certain amount of blue. And by the way, there are other ways to set color, but I'll get to that. This is not the only way, because some of you watching, are like, I heard something about HSB color. And there's all sorts of other ways to do it, but this is the fundamental, basic way. The amount that I can sprinkle has a range. No red, none more red, is zero. The maximum amount of red is 255. By the way, how many numbers are there between 0 and 255 if you keep the 0? 0, 1, 2, 3, 4-- it's 256. Again, we're back to this weird counting from zero thing. So there's 256 possibilities, 0 through 255. So, now, let's come back to this and see. All right, let's go back to zero, 0, 0, 0. Let's do 255, we can see that it's blue. 
Let's do 100,000, it's the same blue. So p5 is kind of smart enough to know when you call the background function, if you by accident put a number in there that's bigger than 255, just consider it 255. Now, you can customize those ranges for yourself, and there's reasons why you might want to do that. Again, I'm going to come back to that, you can look up the function color mode for how to do that. But let's just stay with the default, a red, a green, and a blue. So, I'm not really very talented visual design wise. So I'm not going to talk to you about how to pick beautiful colors that work well together. You're going to have that talent yourself, I bet. Or you might find some other resources. But this is how it works, RGB. One thing you might notice is, did you notice how when they were all zero, it was black, and they were all 255 it was white? What happens if I make them all, like, 100? It's, like, this gray color. When r equals g equals b, when the red, green, and blue values are all equal, this is something known as grayscale color.</span></span></p></td></tr></tbody></table></div></span></div>
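<p>Two behaviors from the transcript above (out-of-range channel values are treated as 255, and equal R, G, B values produce grayscale) can be illustrated outside p5.js. The following is a minimal Python sketch of those rules, our own illustration rather than anything from ShiffBot or p5.js itself:</p>

```python
def clamp_channel(value):
    """Mimic the behavior Dan describes: channel values outside 0-255 are clamped."""
    return max(0, min(255, value))

def is_grayscale(r, g, b):
    """A color is a shade of gray when all three channels are equal."""
    return clamp_channel(r) == clamp_channel(g) == clamp_channel(b)

print(clamp_channel(100_000))       # 255 -- the "same blue" effect from the video
print(is_grayscale(100, 100, 100))  # True
```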
<p>When the user asks ShiffBot a question, the question is embedded into a numerical representation, and Gemini’s Semantic Retrieval feature is used to find the texts whose embeddings are closest to the question. Those relevant video transcripts and links are added to the prompt, so the model can use that information when generating an answer (and potentially add the video itself into the conversation).</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvi0qUC7R6f_BhXDJG1cPRfLcmLljteEzLxDu2kdahlwr88q40S1dZ2K1yjk8fdII5ycYkHMU2TE7bisl1vopeXZrWyOhKR37pmfBoK8y9a8_tXS0kNbvvnYdfg58ZlvkcsMZLqBNhQ88HCQKrbK75Rxwjfc9wgTqy1quhyphenhyphenpCafl3T6mVsdCFj90YUXT4/s1600/image7.png" target="_blank"><img alt="Semantic Retrieval Graph" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvi0qUC7R6f_BhXDJG1cPRfLcmLljteEzLxDu2kdahlwr88q40S1dZ2K1yjk8fdII5ycYkHMU2TE7bisl1vopeXZrWyOhKR37pmfBoK8y9a8_tXS0kNbvvnYdfg58ZlvkcsMZLqBNhQ88HCQKrbK75Rxwjfc9wgTqy1quhyphenhyphenpCafl3T6mVsdCFj90YUXT4/s1600/image7.png" style="margins: auto; width: auto;" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Semantic Retrieval Graph (click to enlarge)</em></td></tr></tbody></table></image><br />
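<p>The retrieval step can be sketched generically: embed the question, compare it to pre-computed chunk embeddings, and keep the closest chunks. The sketch below uses toy 3-dimensional vectors and plain cosine similarity as stand-ins for Gemini's embedding model and the Semantic Retrieval API; every name and vector here is illustrative, not the actual system:</p>

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(question_embedding, corpus, top_k=2):
    """corpus: list of (chunk_text, embedding) pairs. Return the closest chunks."""
    scored = sorted(
        corpus,
        key=lambda chunk: cosine_similarity(question_embedding, chunk[1]),
        reverse=True,
    )
    return [text for text, _ in scored[:top_k]]

# Toy "embeddings" standing in for the real model's high-dimensional vectors.
corpus = [
    ("Chapter 1: R, G, B", [0.9, 0.1, 0.0]),
    ("Arrays - Color Palette", [0.2, 0.8, 0.1]),
    ("What is a for loop?", [0.0, 0.1, 0.9]),
]
print(retrieve([0.85, 0.2, 0.05], corpus, top_k=1))  # ['Chapter 1: R, G, B']
```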
<h4>Creating a Corpus of Code Examples</h4>
<p>We do the same with another corpus of p5.js examples written by Dan. To create the code examples corpus, we used Gemini and asked it to explain what the code is doing. Those natural language explanations are added as chunks to the corpus, so that when the user asks a question, we try to find matching descriptions of code examples. The URL of the <a href="https://p5js.org/get-started/#sketch" target="_blank">p5.js sketch</a> itself is saved in the metadata, so after retrieval, the code along with the sketch URL is added to the prompt.</p>
<p>To generate the textual description, Gemini was prompted with:</p>
<div align="left" dir="ltr" style="margin-left: 0pt;"><span id="docs-internal-guid-84d5e17e-7fff-5293-4ccc-49f3485bf10c"><div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup><col width="719"></col></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #efefef; border-bottom: solid #cccccc 1pt; border-color: rgb(204, 204, 204); border-left: solid #cccccc 1pt; border-right: solid #cccccc 1pt; border-style: solid; border-top: solid #cccccc 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">The following is a p5.js sketch. Explain what this code is doing in a short simple way.</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">code:</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">${sketchCode}</span></span></p></td></tr></tbody></table></div></span></div>
<div style="text-align: left;"><br /></div><div style="text-align: left;">Example of a code chunk:</div>
<div style="text-align: left;">Sketch: <a href="https://editor.p5js.org/codingtrain/sketches/tSXa5dsFz" target="_blank">https://editor.p5js.org/codingtrain/sketches/tSXa5dsFz</a></div>
Text:<div>
<div align="left" dir="ltr" style="margin-left: 0pt;"><span id="docs-internal-guid-84d5e17e-7fff-5293-4ccc-49f3485bf10c"><div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup><col width="719"></col></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #efefef; border-bottom: solid #cccccc 1pt; border-color: rgb(204, 204, 204); border-left: solid #cccccc 1pt; border-right: solid #cccccc 1pt; border-style: solid; border-top: solid #cccccc 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Arrays - Color Palette</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">This p5.js sketch creates a color palette visualization. It first defines an array of colors and sets up a canvas. Then, in the draw loop, it uses a for loop to iterate through the array of colors and display them as rectangles on the canvas. 
The rectangles are centered on the canvas and their size is determined by the value of the blockSize variable.</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">The sketch also displays the red, green, and blue values of each color below each rectangle.</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Finally, it displays the name of the palette at the bottom of the canvas.</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Related video: 7.1: What is an array? - p5.js Tutorial - This video covers the basics on using arrays in JavaScript. What do they look like, how do they work, when should you use them?</span></span></p></td></tr></tbody></table></div></span></div>
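<p>A corpus entry pairs the generated description with the sketch URL in metadata, so that retrieving the description also recovers the link to the code. A minimal sketch of that shape — the field names below are our own illustration, not the actual Semantic Retrieval API schema:</p>

```python
def make_code_chunk(description, sketch_url):
    """Bundle a generated description with the sketch URL as metadata."""
    return {
        "text": description,                      # what gets embedded and matched
        "metadata": {"sketch_url": sketch_url},   # recovered after retrieval
    }

chunk = make_code_chunk(
    "This p5.js sketch creates a color palette visualization.",
    "https://editor.p5js.org/codingtrain/sketches/tSXa5dsFz",
)
print(chunk["metadata"]["sketch_url"])
```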
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyPQWHIxjTpWeabwKak9viRGvMLFxxg3uUNnsajVhLtWUxIwnbZfgJeZKv7D5aRYtOzQuV-w40Br5BGeULYKim98zpj-n0Jj0MRccM18Asb6Urnn90O1zMK7ZbUHoM3eO8l-qb9mRtgY36I7hpZ471HseAFrH-j0ssTaCGti7aS3Ly_ikaqdXlVvKPe8c/s1600/image2.gif" target="_blank"><img alt="Moving image showing constructing the ShiffBot prompt" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyPQWHIxjTpWeabwKak9viRGvMLFxxg3uUNnsajVhLtWUxIwnbZfgJeZKv7D5aRYtOzQuV-w40Br5BGeULYKim98zpj-n0Jj0MRccM18Asb6Urnn90O1zMK7ZbUHoM3eO8l-qb9mRtgY36I7hpZ471HseAFrH-j0ssTaCGti7aS3Ly_ikaqdXlVvKPe8c/s1600/image2.gif" style="margins: auto; width: auto;" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Constructing the ShiffBot prompt (click to enlarge)</em></td></tr></tbody></table></image><br />
<h2>Other ShiffBot Features Implemented with Gemini</h2>
<p>Besides the long prompt that runs the conversation, other smaller prompts are used to generate ShiffBot features.</p><br />
<h4>Seeding the conversation with content pre-generated by Gemini</h4>
<p>ShiffBot’s greetings should be welcoming and fun. Ideally they make the user smile, so we started by brainstorming with Dan about what good greetings for ShiffBot could look like. After phrasing a few examples, we used Gemini to generate many more, so we could have variety in the greetings. Those greetings go into the conversation history and seed it with a unique style, making ShiffBot feel fun and new every time you start a conversation. We did the same with the initial suggestion chips that show up when you start the conversation. When there’s no conversation context yet, it’s important to offer some suggestions of what the user might ask. We pre-generated those to seed the conversation in an interesting and helpful way.</p><br />
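<p>Seeding amounts to prepending one of the pre-generated greetings to an otherwise empty conversation history, alongside the pre-generated starter chips. A minimal sketch of that idea, where the greeting texts and message format are placeholder inventions of ours, not ShiffBot's actual data:</p>

```python
import random

# Placeholder greetings standing in for the ones generated offline with Gemini.
PREGENERATED_GREETINGS = [
    "Hello hello! Ready to make some art with code?",
    "Welcome back! What shall we sketch today?",
]
PREGENERATED_CHIPS = ["What is p5.js?", "Help me draw a circle", "Give me a project idea"]

def new_conversation(rng=random):
    """Start a history seeded with a random pre-generated greeting."""
    greeting = rng.choice(PREGENERATED_GREETINGS)
    history = [{"role": "model", "text": greeting}]
    return history, PREGENERATED_CHIPS

history, chips = new_conversation()
print(history[0]["role"], "|", len(chips), "starter chips")  # model | 3 starter chips
```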
<h4>Dynamically Generated Suggestion Chips</h4>
<p>Suggestion chips during the conversation should be relevant to what the user is currently trying to do. We have a prompt and a call to Gemini that are solely dedicated to generating the suggested question chips. In this case, the model’s only task is to suggest follow-up questions for a given conversation. We also use the few-shot technique here (the same technique we used in the static part of the prompt described above, where we include a few examples for the model to learn from). This time the prompt includes some examples of good suggestions, so that the model can generalize to any conversation:</p>
<div align="left" dir="ltr" style="margin-left: 0pt;"><span id="docs-internal-guid-84d5e17e-7fff-5293-4ccc-49f3485bf10c"><div align="left" dir="ltr" style="margin-left: 0pt;"><table style="border-collapse: collapse; border: none;"><colgroup><col width="719"></col></colgroup><tbody><tr style="height: 0pt;"><td style="background-color: #efefef; border-bottom: solid #cccccc 1pt; border-color: rgb(204, 204, 204); border-left: solid #cccccc 1pt; border-right: solid #cccccc 1pt; border-style: solid; border-top: solid #cccccc 1pt; border-width: 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt; vertical-align: top;"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Given a conversation between a user and an assistant in the p5js framework, suggest followup questions that the user could ask.</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Return up to 4 suggestions, separated by the ; sign.</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Avoid suggesting questions that the user already asked. 
The suggestions should only be related to creative coding and p5js.</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><br /></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Examples:</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">ShiffBot: Great idea! First, let's think about what in the sketch could be an object! 
What do you think?</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">Suggestions: What does this code do?; What's wrong with my code?; Make it more readable please</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><br /></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;">User: Help!</span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space-collapse: preserve;"><span style="color: #444444; font-family: courier;"><span style="background-color: transparent;">ShiffBot: How can I help?</span></span></span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: #444444; font-family: courier; white-space-collapse: preserve;">Suggestions: Explain this code to me; Give me some ideas; Cleanup my 
code</span></p></td></tr></tbody></table></div></span></div>
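<p>Because the prompt asks for up to four suggestions separated by the <code>;</code> sign, the model's reply can be turned into chips with a simple split. A minimal sketch of that post-processing step (the function name and cap are ours):</p>

```python
def parse_suggestions(model_reply, max_chips=4):
    """Split a ';'-separated model reply into clean suggestion chips."""
    chips = [part.strip() for part in model_reply.split(";")]
    chips = [chip for chip in chips if chip]  # drop empty fragments
    return chips[:max_chips]

reply = "Explain this code to me; Give me some ideas; Cleanup my code"
print(parse_suggestions(reply))
# ['Explain this code to me', 'Give me some ideas', 'Cleanup my code']
```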
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr8lCR4LlGZTAXrIoGJTNhiTjdlXU1gYKrzsqD2kJetxVos2Ruqnkep5XliuqIkwcTh-LTbe-5JbBkDQ1Ld_ksF4gKxOkbt-CSdc7JU6HpytEhZ39ymRZrmpJsb-wsW-LYcTJ1iZtCrpggWyxhTH4dD4uRUfdV8E1IAGXXBPFUWLCrDbfiSwSrAfW2xuA/s1600/image6.png" target="_blank"><img alt="suggested response chips, generated by Gemini" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr8lCR4LlGZTAXrIoGJTNhiTjdlXU1gYKrzsqD2kJetxVos2Ruqnkep5XliuqIkwcTh-LTbe-5JbBkDQ1Ld_ksF4gKxOkbt-CSdc7JU6HpytEhZ39ymRZrmpJsb-wsW-LYcTJ1iZtCrpggWyxhTH4dD4uRUfdV8E1IAGXXBPFUWLCrDbfiSwSrAfW2xuA/s1600/image6.png" style="margins: auto; width: auto;" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>suggested response chips, generated by Gemini (click to enlarge)</em></td></tr></tbody></table></image><br />
<h2>Final thoughts and next steps</h2>
<p>ShiffBot is an example of how you can experiment with the Gemini API to build applications with tailored experiences for and with a community.</p>
<p>We found that the techniques above helped us bring out much of the experience that Dan had in mind for his students during our co-creation process. AI is a dynamic field, and while we’re sure these techniques will evolve with it, we hope this snapshot of our explorations is helpful toward your own. We are also excited for things to come, both in terms of Gemini and API tools that broaden human curiosity and creativity.</p>
<p>For example, we’ve already started to explore how multimodality can help students show ShiffBot their work and the benefits that has on the learning process. We’re now learning how to weave it into the current experience and hope to share it soon.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9kH1g-xYWHe1GY27mQLPnqrnxnueF0g1GAGy2m3zmq7Xuy7S3SJ_9ghW1OvPjQmVBZO1gCAOFm9V4e1LYkIWO2CWyXAMZ3NCqBq4kqan6647RdmHpC2MOdwZV87poMMEgzwZQMnZ8vJT1P6zun3uGbZDdrepHP_F_-pkxmeHvQi8IqPqpoBTniuaZeIY/s1600/image1.gif" target="_blank"><img alt="experimental exploration of multimodality in ShiffBot" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9kH1g-xYWHe1GY27mQLPnqrnxnueF0g1GAGy2m3zmq7Xuy7S3SJ_9ghW1OvPjQmVBZO1gCAOFm9V4e1LYkIWO2CWyXAMZ3NCqBq4kqan6647RdmHpC2MOdwZV87poMMEgzwZQMnZ8vJT1P6zun3uGbZDdrepHP_F_-pkxmeHvQi8IqPqpoBTniuaZeIY/s1600/image1.gif" style="margins: auto; width: auto;" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>experimental exploration of multimodality in ShiffBot (click to enlarge)</em></td></tr></tbody></table></image><br />
<p>Whether for coding, <a href="https://developers.googleblog.com/2023/08/how-its-made-lupe-fiasco-text-fx.html" target="_blank">writing</a> and even <a href="https://www.youtube.com/watch?v=s9BMGSYmJgI&ab_channel=Google" target="_blank">thinking</a>, creators play a crucial role in helping us imagine what these collaborations might look like. Our hope is that this <a href="http://labs.google" target="_blank">Lab Session</a> gives you a glimpse of what’s possible using the Gemini API, and inspires you to use Google’s AI offerings to bring your own ideas to life, in whatever your craft may be.</p>
<h2>HealthPulse AI Leverages MediaPipe to Increase Health Equity</h2>
<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjshYPRMbqYEiJBNAjcxjt1Uo_zT7Mh-CWI0lWFovnQ6cuOLtyfaq-z7qxd9a2txxKQNGkuGtzJct0X5h0OfFbyC8D7gu5Ekhjz8nx5NPNmzgUN-kMQvOyaKadwcNQgFLUCZMsfvQIkr5tFM4HRHnbi0_3vDf4fd8yITPCuxkN5AQH30KdAC3rwwFGyZbs/s1600/Social-Audere-HealthAI-MaediaPipe.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjshYPRMbqYEiJBNAjcxjt1Uo_zT7Mh-CWI0lWFovnQ6cuOLtyfaq-z7qxd9a2txxKQNGkuGtzJct0X5h0OfFbyC8D7gu5Ekhjz8nx5NPNmzgUN-kMQvOyaKadwcNQgFLUCZMsfvQIkr5tFM4HRHnbi0_3vDf4fd8yITPCuxkN5AQH30KdAC3rwwFGyZbs/s1600/Social-Audere-HealthAI-MaediaPipe.png" style="display: none;" />
<em>A guest post by Rouella Mendonca, AI Product Lead and Matt Brown, Machine Learning Engineer at <a href="https://www.auderenow.org/" target="_blank">Audere</a></em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQZjrww89mt3sKndEDzClD4yHvACCf8pSlNbkBgU-6IqEmEb9Wnunovqv6WfFahI9sVMA37dASVHeVfwZwxACamTle2-bDwC8qpUddAiTKTSXRpmfvd0R3bU37XYLX51XNDqp2_6u0hcIsN-JBTo7lhp7m0Ki9XVuwLIjBpTkqoAJJ9aOTwJyxRE_euFU/s1600/header-HealthPulse-AI-leverages-MediaPipe-to-increase-health-equity.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQZjrww89mt3sKndEDzClD4yHvACCf8pSlNbkBgU-6IqEmEb9Wnunovqv6WfFahI9sVMA37dASVHeVfwZwxACamTle2-bDwC8qpUddAiTKTSXRpmfvd0R3bU37XYLX51XNDqp2_6u0hcIsN-JBTo7lhp7m0Ki9XVuwLIjBpTkqoAJJ9aOTwJyxRE_euFU/s1600/header-HealthPulse-AI-leverages-MediaPipe-to-increase-health-equity.png" /></a>
<a name='more'></a><p></p>
<i>Please note that the information, uses, and applications expressed in the below post are solely those of our guest authors from Audere.</i>
<h3>About HealthPulse AI and its application in the real world</h3>
<p>Preventable and treatable diseases like HIV, COVID-19, and malaria infect ~12 million people per year globally, with a disproportionate number of cases impacting already underserved and under-resourced communities<sup>1</sup>. Communicable and non-communicable diseases impede human development through their negative impact on education, income, life expectancy, and other health indicators<sup>2</sup>. Lack of access to timely, accurate, and affordable diagnostics and care is a key contributor to high mortality rates.</p>
<p>Due to their low cost and relative ease of use, ~1 billion rapid diagnostic tests (RDTs) are used globally per year and growing. However, there are challenges with RDT use.</p>
<blockquote><ul>
<li>Where RDT data is reported, results are hard to trust due to inflated case counts, lack of reported expected seasonal fluctuations, and non-adherence to treatment regimens.</li>
<li>They are used in decentralized care settings by those with limited or no training, increasing the risk of misadministration and misinterpretation of test results.</li>
</ul></blockquote>
<p>HealthPulse AI, developed by a digital health non-profit Audere, leverages <a href="https://developers.google.com/mediapipe/solutions" target="_blank">MediaPipe</a> to address these issues by providing digital building blocks to increase trust in the world’s most widely used RDTs.</p>
<p>HealthPulse AI is a set of building blocks that can turn any digital solution into an RDT reader. These building blocks solve prominent global health problems by improving rapid diagnostic test accuracy, reducing misadministration of tests, and expanding the availability of testing for conditions including malaria, COVID, and HIV in decentralized care settings. With just a low-end smartphone, HealthPulse AI improves the accuracy of rapid diagnostic test results while automatically digitizing data for surveillance, program reporting, and test validation. It provides AI-facilitated digital capture and result interpretation; quality, accessible digital use instructions for provider and self-tests; and standards-based real-time reporting of test results.</p>
<p>These capabilities are available to local implementers, global NGOs, governments, and private sector pharmacies via a web service for use with chatbots, apps or server implementations; a mobile SDK for offline use in any mobile application; or directly through native Android and iOS apps.</p>
<p>It enables innovative use cases such as quality-assured virtual care models, which enable stigma-free, convenient HIV home testing with linkage to education, prevention, and treatment options.</p>
<h4>HealthPulse AI Use Cases</h4>
<p>HealthPulse AI can substantially democratize access to timely, quality care in the private sector (e.g. pharmacies), in the public sector (e.g. clinics), in community programs (e.g. community health workers), and in self-testing use cases. Using only an RDT image captured on a low-end smartphone, HealthPulse AI can power virtual care models by providing valuable decision support and quality control to clinicians, especially in cases where lines may be faint and hard to detect with the human eye. In the private sector, it can automate and scale incentive programs so auditors only need to review automated alerts based on test anomalies, replacing procedures that presently require human review of each incoming image and transaction. In community care programs, HealthPulse AI can be used as a training tool for health workers learning how to correctly administer and interpret tests. In the public sector, it can strengthen surveillance systems with real-time disease tracking and verification of results across all channels where care is delivered, enabling faster response and pandemic preparedness<sup>3</sup>.</p><br />
<h3>HealthPulse AI algorithms</h3>
<p>HealthPulse AI provides a library of AI algorithms for the top RDTs for malaria, HIV, and COVID. Each algorithm is a collection of Computer Vision (CV) models that are trained using machine learning (ML) algorithms. From an image of an RDT, our algorithms can: </p>
<blockquote><ul>
<li>Flag image quality issues common on low-end phones (blurriness, over/underexposure)</li>
<li>Detect the RDT type</li>
<li>Interpret the test result</li>
</ul></blockquote>
<h4>Image Quality Assurance</h4>
<p>When capturing an image of an RDT, it is important to ensure that the image captured is human and AI interpretable to power the use cases described above. Image quality issues are common, particularly when images are captured with low-end phones in settings that may have poor lighting or simply captured by users with shaky hands. As such, HealthPulse AI provides image quality assurance (IQA) to identify adversarial image conditions. IQA returns concerns detected and can be used to request users to retake the photo in real time. Without IQA, clients would have to retest due to uninterpretable images and expired RDT read windows in telehealth use cases, for example. With just-in-time quality concern flagging, additional cost and treatment delays can be avoided. Examples of some adversarial images that IQA would flag are shown in Figure 1 below.</p>
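<p>HealthPulse AI's actual IQA models are learned from data, but the kind of check involved can be illustrated with a crude exposure heuristic over grayscale pixel values. Everything below is a simplified stand-in of ours, not Audere's implementation:</p>

```python
def brightness_concerns(pixels, dark_threshold=40, bright_threshold=215):
    """Flag over/underexposure from grayscale pixel values (0-255).

    A crude stand-in for a learned IQA model: compare mean brightness
    against thresholds and report any concerns found, so the caller can
    ask the user to retake the photo in real time.
    """
    mean = sum(pixels) / len(pixels)
    concerns = []
    if mean < dark_threshold:
        concerns.append("underexposed")
    elif mean > bright_threshold:
        concerns.append("overexposed")
    return concerns

print(brightness_concerns([10, 12, 8, 15]))   # ['underexposed'] -> request a retake
print(brightness_concerns([120, 130, 125]))   # [] -> image passes this check
```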
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Images of malaria, HIV and COVID tests that are dark, blurry, too bright, and too small." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidO8G-oZeXH8-rvgHy9rkLNSbNkeEnLsC_CNOy1vo7mrV8o-xs12MixXuIcZhS6CLdAaLgJDr1QjtK2CWfc5OlQp5A9ELTEdjEI0Z07kQDGe17Y37s97VBaHWpAxX7MhVM_KgS7M5-Ov8X-8TvBfJ3D6D_0tgzXpm8IC5XN3OW8EYyFRhI0jgEHB8W3Y0/s1600/image4.png" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Figure 1: Images of malaria, HIV and COVID tests that are dark, blurry, too bright, and too small.</em></td></tr></tbody></table></image>
<h4>Classification</h4>
<p>With just an image captured on the 5MP camera of the low-end smartphones commonly used in Africa, Southeast Asia, and Latin America, where a disproportionate disease burden exists, HealthPulse AI can identify a specific test (brand and disease), detect individual test lines, and provide an interpretation of the test. Our current library of AI algorithms supports many of the most commonly used W.H.O.-prequalified RDTs for malaria, HIV, and COVID-19. Our AI is condition agnostic and can easily be extended to support any RDT for a range of communicable and non-communicable diseases (diabetes, influenza, tuberculosis, pregnancy, STIs, and more).</p>
<p>HealthPulse AI is able to detect the type of RDT in the image (for supported RDTs that the model was trained for), detect the presence of lines, and return a classification for the particular test (e.g. positive, negative, invalid, uninterpretable). See Figure 2.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Figure 2: Interpretation of a supported lateral flow rapid test." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3FZNJLFP4AwKzQHtP9PuuQ99mu7NAGCIRcBxx6dhhgNVfjGqhqheSnyD5qMSMPCd_2HWAaHRhB2dHu4RqitaniSXaaHJkXsbFO97goIptlxlUuMLOSmv7MqHKAttYiSGEX6gPkc7_fKGBOT_z5ytegu51kiPDZ6NmL2X1fO8xI4AjAN9vPomUC5Yp3YM/s1600/image1.png" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Figure 2: Interpretation of a supported lateral flow rapid test.</em></td></tr></tbody></table></image><br />
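The final line-to-result mapping for a simple two-line lateral flow test can be sketched as follows. This is a hedged illustration of the classification output described above, not Audere's actual logic; real RDTs differ by product, and the production system also handles uninterpretable images and multi-line tests.

```python
# Hypothetical interpretation logic for a simple two-line lateral flow test.
# The line labels and rules are illustrative only.
def interpret_rdt(detected_lines):
    """Map the set of detected line labels to a test result."""
    if "control" not in detected_lines:
        # No control line: the test did not run properly.
        return "invalid"
    return "positive" if "test" in detected_lines else "negative"
```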
<h3>How and why we use MediaPipe</h3>
<p>Deploying HealthPulse AI in decentralized care settings with unstable infrastructure comes with a number of challenges. The first is a lack of reliable internet connectivity, which often requires our CV and ML algorithms to run locally. The second is that phones available in these settings are often very old, lacking the latest hardware (< 1 GB of RAM and comparable CPU specs) and running a mix of mobile platforms and versions (iOS, Android, Huawei), often versions so old they may no longer receive OS updates. This necessitates a platform-agnostic, highly efficient inference engine. MediaPipe’s out-of-the-box multi-platform support for image-focused machine learning processes makes it an efficient way to meet these needs.</p>
<p>As a non-profit operating in cost-recovery mode, it was important that solutions:</p>
<ul><blockquote>
<li>have broad reach globally,</li>
<li>are low-lift to maintain, and</li>
<li>meet the needs of our target population for offline, low resource, performant use.</li>
</blockquote></ul>
<p>Without needing to write a lot of glue code, HealthPulse AI can support Android, iOS, and cloud devices using the same library built on MediaPipe.</p>
<h4>Our pipeline</h4>
<p>MediaPipe’s graph definitions allow us to build and iterate our inference pipeline on the fly. After a user submits a picture, the pipeline determines the RDT type, and attempts to classify the test result by passing the detected result-window crop of the RDT image to our classifier.</p>
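To make the "graph definition" idea concrete, a heavily simplified MediaPipe graph for an image-classification stage might look like the pbtxt sketch below. The calculators named are standard MediaPipe calculators, but the stream names, output dimensions, and model path are illustrative assumptions rather than Audere's actual graph.

```proto
# Simplified, illustrative MediaPipe graph: resize an input image, convert
# it to a tensor, and run a TFLite classifier on it.
input_stream: "input_image"
output_stream: "classification_tensors"

node {
  calculator: "ImageTransformationCalculator"
  input_stream: "IMAGE:input_image"
  output_stream: "IMAGE:scaled_image"
  options: {
    [mediapipe.ImageTransformationCalculatorOptions.ext] {
      output_width: 224
      output_height: 224
    }
  }
}

node {
  calculator: "TfLiteConverterCalculator"
  input_stream: "IMAGE:scaled_image"
  output_stream: "TENSORS:input_tensors"
}

node {
  calculator: "TfLiteInferenceCalculator"
  input_stream: "TENSORS:input_tensors"
  output_stream: "TENSORS:classification_tensors"
  options: {
    [mediapipe.TfLiteInferenceCalculatorOptions.ext] {
      model_path: "models/example_classifier.tflite"
    }
  }
}
```

Because the pipeline is declared as data rather than code, calculators (for example, additional quality checks) can be added or swapped without rewriting the Android, iOS, or cloud host applications.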
<p>For good human and AI interpretability, it is important to have good quality images. However, input images to the pipeline have a high level of variability that we have little to no control over. Variability factors include (but are not limited to) varying image quality due to a range of smartphone camera features, megapixel counts, and physical defects; decentralized testing settings with differing and non-ideal lighting conditions; random orientations of the RDT cassettes; blurry, unfocused, or partial RDT images; and many other adversarial conditions that add challenges for the AI. As such, an important part of our solution is image quality assurance. Each image passes through a number of calculators geared towards highlighting quality concerns that may prevent the detector or classifier from doing its job accurately. The pipeline elevates these concerns to the host application, so an end user can be asked in real time to retake a photo when necessary. Since RDT results have a limited validity time (i.e., the time window specified by the RDT manufacturer for how long after processing a result can be accurately read), IQA is essential to ensure timely care and save costs. A high-level flowchart of the pipeline is shown below in Figure 3. </p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Figure 3: HealthPulse AI pipeline" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbLw43CF5GAks1IgLvUcRwd1jAk1yaKJaMza7PKklRun8CwioIQXlnmhc34Ao0cUgzeu-qzZd8TDe1uYytOcg-zaexaT6xv4ikt_sGp1iqIgZIG_EZruecRNlLCH_YDZo_9YPiAp9TViENXqODsX4_M42V6pRKZVH1E62cuQ85NCg2pEH6px4LGi6PfSk/s1600/image2.png" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Figure 3: HealthPulse AI pipeline</em></td></tr></tbody></table></image><br />
<h3>Summary</h3>
<p>HealthPulse AI is designed to improve the quality and richness of testing programs and data in underserved communities that are disproportionately impacted by preventable communicable and non-communicable diseases. </p>
<p>Towards this mission, <a href="https://developers.google.com/mediapipe/solutions" target="_blank">MediaPipe</a> plays a critical role by providing a platform that allows Audere to quickly iterate and support new rapid diagnostic tests. This is imperative as new rapid tests come to market regularly, and test availability for community and home use can change frequently. Additionally, the flexibility allows for lower overhead in maintaining the pipeline, which is crucial for cost-effective operations. This, in turn, reduces the cost of use for governments and organizations globally that provide services to people who need them most. </p>
<p>HealthPulse AI offerings allow organizations and governments to benefit from new innovations in the diagnostics space with minimal overhead. This is an essential component of the primary health journey: ensuring that populations in under-resourced communities have access to timely, cost-effective, and efficacious care.</p><br />
<h4>About Audere</h4>
<p>Audere is a global digital health nonprofit developing AI-based solutions to address important problems in health delivery by providing innovative, scalable, interconnected tools to advance health equity in underserved communities worldwide. We operate at the unique intersection of global health and high tech, creating advanced, accessible software that revolutionizes the detection, prevention, and treatment of diseases such as malaria, COVID-19, and HIV. Our diverse team of passionate, innovative minds combines human-centered design, smartphone technology, artificial intelligence (AI), open standards, and the best of cloud-based services to empower innovators globally to deliver healthcare in new ways in low- and middle-income settings. Audere operates primarily in Africa, with projects in Nigeria, Kenya, Côte d’Ivoire, Benin, Uganda, Zambia, South Africa, and Ethiopia. </p><br/>
<small><sup>1</sup> <a href="https://www.who.int/news-room/fact-sheets/detail/malaria" target="_blank">WHO malaria fact sheets</a>
<div style="text-align: left;"><sup>2</sup> <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7122505/" target="_blank">The burden of communicable and non-communicable diseases in developing countries</a></div>
<div style="text-align: left;"><sup>3</sup> <a href="https://ieeexplore.ieee.org/abstract/document/10195004" target="_blank">Transforming Rapid Diagnostic Tests into Trusted Diagnostic Tools in LMIC using AI</a></div></small>Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-64401158400052352472024-01-17T16:00:00.000-08:002024-01-17T16:03:39.876-08:00Accelerating startup growth through technology, expertise, and community<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsl-v5SUQfhQj5WQyVDBfGlE8Vr3hdoTPoEdmpgOJ4l8Z_P8e2qT283JktonenLe_wENApIfrNmjav6ZLElwOxXJRbJrM9R82hJ2KHGFG_DrfRXozfoeEAr2nKauUcGFcvdhote-nyt_mWjBTiLSQWtffLCENug3ubR8fA7d9ZZasUgrzmCkxzoKK8jSM/s1600/Social-Accelerator-International-Mentor-Day.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsl-v5SUQfhQj5WQyVDBfGlE8Vr3hdoTPoEdmpgOJ4l8Z_P8e2qT283JktonenLe_wENApIfrNmjav6ZLElwOxXJRbJrM9R82hJ2KHGFG_DrfRXozfoeEAr2nKauUcGFcvdhote-nyt_mWjBTiLSQWtffLCENug3ubR8fA7d9ZZasUgrzmCkxzoKK8jSM/s1600/Social-Accelerator-International-Mentor-Day.png" style="display: none;" />
<em>Posted by Nivedita Kumari – Technical Anchor Mentor, Accelerator Program, and Prabhu Thiagarajan – Accelerator Success Mentor, Accelerator Program</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJ40Pks5eiwKxz71xLFtSypz_1CYtPiYLUV_eexLbgQBmnkyTxtO44FYWp2yYtmF_RN0C99BM2M629jF2Sx7TtoidztAPZgtKrKRaWeyVLNTSqZAhVSYbkIjPM2M4D6JtYGqbAPsMI0n3kCLTXj7VNboJwsCHhdrgdLqNRN7PBMDzOPm49GJ9cxWgEaF8/s1600/image1.png"><img alt="Google for Startups Accelerator: Sustainable Development Goals" border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJ40Pks5eiwKxz71xLFtSypz_1CYtPiYLUV_eexLbgQBmnkyTxtO44FYWp2yYtmF_RN0C99BM2M629jF2Sx7TtoidztAPZgtKrKRaWeyVLNTSqZAhVSYbkIjPM2M4D6JtYGqbAPsMI0n3kCLTXj7VNboJwsCHhdrgdLqNRN7PBMDzOPm49GJ9cxWgEaF8/s1600/image1.png" /></a>
<a name='more'></a><p></p>
<p>This International Mentoring Day, we recognize that mentorship is a critical part of the startup journey. <a href="https://startup.google.com/programs/accelerator/" target="_blank">Google for Startups Accelerator programs</a> provide founders and teams with the technology, expertise, and mentorship they need to grow and succeed. As program mentors, we had the opportunity to engage with and empower many early stage startups helping them scale and grow.</p><br />
<h2>The Startup Challenge </h2>
<p>Although the startup ecosystem is rapidly expanding, success is rarely a smooth journey. On average, it takes startups two to three years to turn a profit, and fewer than 10% of startups that raise a seed round successfully raise a Series A investment. Even those that manage to secure funding still face other hurdles, like driving organic growth, fundraising, building a brand, and expanding into new markets. <a href="https://www.forbes.com/sites/abdoriani/2022/08/29/why-mentors-are-vital-for-new-startup-founders/?sh=366b6e0b4b5e">Mentorship and access to networks have been proven time and again to make the critical difference for successful founders.</a> </p>
<p>To level the playing field for startup success, Google for Startups connects founders to the people, programs, and best practices they need to grow and scale their companies. Google for Startups Accelerator programs provide participants with hands-on mentorship and support from Googlers as well as experienced entrepreneurs and investors. These experts work directly with startups over the course of 10 weeks to provide tailored support across technology, product development, marketing, sales, and fundraising. </p><br />
<h2>Success Story</h2>
<p>As Google for Startups Accelerator program mentors, we had the opportunity to partner closely with founders from around the world through the 2023 accelerator cohorts, including those focused on <a href="https://startup.google.com/cloud/" target="_blank">Cloud</a> and <a href="https://startup.google.com/programs/accelerator/climate-change/north-america/" target="_blank">Climate Change</a>. One of these startups, from the Cloud cohort, was <a href="https://realkey.com/" target="_blank">RealKey</a>, an Automated Loan Processing (ALP) SaaS platform.</p>
<p><a href="https://realkey.com/" target="_blank">RealKey</a> automates document collection and review processes and centralizes communication to reduce touch points with underwriting. Through the Google for Startups Cloud Program, RealKey was able to accelerate Google AI-based document processing and loan process automation to help create a clean loan submission process. Google for Startups helped RealKey reduce loan processing time and frustration for all parties involved. </p><p></p><p style="text-align: left;"><span style="font-size: large;"><i></i></span></p><blockquote><span style="font-size: large;"><i></i></span><blockquote><span style="font-size: large;"><i>“Google's Lending DocAI service enabled our platform to include document processing where we classify documents and run complex validation algorithms to ensure that a loan package meets all lending criteria. This is typically a manual process and our platform saves our clients valuable time and labor.”</i></span> </blockquote></blockquote><blockquote><blockquote><span style="font-size: large;"><i>–</i></span> <b>Christopher Hussain, Founder & CEO, RealKey.</b></blockquote></blockquote><div style="text-align: left;"> </div><div style="text-align: left;">Over the course of the 10-week program, the RealKey team worked closely with us to develop and track their program Objectives and Key Results (OKRs). Through a series of tailored technical deep dives, mentor-led product and program workshops, and pairing with relevant experts from Google and the industry, RealKey was able to solve several business and technical challenges and accelerate their results. “With Google for Startups mentor support, we accomplished goals that we budgeted would take substantially longer,” says Christopher.</div>
<p>The Google for Startups Accelerator program provides startups with the resources and support they need to thrive in a competitive market. Through its comprehensive program, startups gain access to funding, technical expertise, networking opportunities, and mentorship from Google AI experts, enabling them to overcome technical challenges, develop effective go-to-market strategies, and accelerate their growth. With this guidance and support, startups can navigate the complexities of developing and commercializing their products, effectively reach their target audience, and establish themselves as leaders in the field of machine learning.</p><br />
<h2>Next Steps for Founders</h2>
<p>If you're a startup founder, Google for Startups Accelerator programs are a great way to get the help you need to grow your business and achieve your goals. Applications are now open for <b>Google for Startups Accelerator: <a href="https://startup.google.com/accelerator/women-founders/" target="_blank">Women Founders</a> and <a href="https://startup.google.com/accelerator/black-founders/" target="_blank">Black Founders</a></b> cohorts in North America. We encourage applications from U.S. and Canadian headquartered technology startups until <b>February 1, 2024</b>, with the 10-week programs commencing in March. Learn more and register <a href="https://rsvp.withgoogle.com/events/gfsaalumnipanel_6fccc7" target="_blank">here</a>.</p>
<p>In addition to accelerator programs, <a href="https://startup.google.com/programs/?_gl=1*xovw58*_up*MQ..*_ga*MTYyNjQ2OTQ4MC4xNzA1NTA1OTg4*_ga_GCB35PQ9X3*MTcwNTUwNTk4Ny4xLjAuMTcwNTUwNTk4Ny4wLjAuMA.." target="_blank">Google for Startups</a> offers a wide range of programs and initiatives to help startups at every stage of their journey. Whether you're just starting out or ready to scale, Google for Startups can help you connect with the right technology, expertise, and community to grow your business. Explore the best Google for Startups offerings for you and your team <a href="https://startup.google.com" target="_blank">here</a>.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://startup.google.com" target="_blank"><img alt="Google for Startups Accelerator Applications open Black Founders and Women Founders programs Go to startup.google.com" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh836qeA5fvGtf7d_FMwIg-mGpyLtZqtH96cOoxcOgBqVkk_8Hj983vjQMJVsL7N3man3Wcc0xd9KjxEURk_rbaK5cP5Q_6T3O-ijxzmTc4dPQk2tC899cgGNpIiK6Vg6eS0imSb3PB5h9bZpvVPRqpCXkxPBsT0l-URNpvc5umA6jb6ZLj7i7meYFpxQk/s1600/GFSA_%20Women%20Founders%20and%20Black%20Founders%20-%20Twitter%20-%201024x512.png" style="margins: auto; width: auto;" /></a></td></tr></tbody></table></image>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-73324580478427878572024-01-17T09:00:00.000-08:002024-01-17T09:11:20.418-08:00#WeArePlay | Learn how a childhood experience with an earthquake shaped Álvaro's entrepreneurial journey<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu218W2RyLDMAvOEcTDjLxA2tjNEFk6822pJCEX1uklhlS4xpUn4A5pQbmDLsnUgOC_Vfzy1stegKYeBgy0jdWOCimNm_I9eTkUg1l5lJRk0ASIaaZxda23M1MU6HvxT5yLhxzdzNMCRU3j457Tn6g7RLBLWs7sAvwCa3u0ytAAt_WUpmpGRM-HCmzGjA/s1600/%23WeArePlay%20Social%20.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu218W2RyLDMAvOEcTDjLxA2tjNEFk6822pJCEX1uklhlS4xpUn4A5pQbmDLsnUgOC_Vfzy1stegKYeBgy0jdWOCimNm_I9eTkUg1l5lJRk0ASIaaZxda23M1MU6HvxT5yLhxzdzNMCRU3j457Tn6g7RLBLWs7sAvwCa3u0ytAAt_WUpmpGRM-HCmzGjA/s1600/%23WeArePlay%20Social%20.png" style="display: none;" />
<em>Posted by Leticia Lago – Developer Marketing</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfkoyiV0Ohxbe89HXqSsLkj4WD8ADd5AmwhHRv4MyMdbrm11cE1wn-SnjSYjsmWupfwZDt6FhaV_-kWQJ1AdObLCTglDqkO1V_Uo6z7S14NpYbQyWzxaDNOMK3fEa2_k25zNk-kQIkDI21X8YjGyoKu-Nnuptx1YYyd6nXmIxC4bPJQl0iHBj2dMD2nI4/s1600/%23WeArePlay%20Banner.png"><img border="0" data-original-height="800" data-original-width="100%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfkoyiV0Ohxbe89HXqSsLkj4WD8ADd5AmwhHRv4MyMdbrm11cE1wn-SnjSYjsmWupfwZDt6FhaV_-kWQJ1AdObLCTglDqkO1V_Uo6z7S14NpYbQyWzxaDNOMK3fEa2_k25zNk-kQIkDI21X8YjGyoKu-Nnuptx1YYyd6nXmIxC4bPJQl0iHBj2dMD2nI4/s1600/%23WeArePlay%20Banner.png" /></a>
<a name='more'></a><p></p>
<iframe class="BLOG_video_class" allowfullscreen="" youtube-src-id="HNzCCiUzryo" width="100%" height="400" src="https://www.youtube.com/embed/HNzCCiUzryo"></iframe>
<p>Being trapped inside a house following a major earthquake as a child motivated Álvaro to research and improve the outcomes of destructive, large-scale quakes in Mexico. Using <a href="https://skyalert.mx" target="_blank">SkyAlert</a> technology, sensors detect and report warnings of incoming earthquakes, giving people valuable time to prepare and get to safety.</p>
<p>Álvaro shared his story in our latest film for <a href="https://play.google.com/console/about/weareplay/" target="_blank">#WeArePlay</a>, which spotlights the founders and creatives behind inspiring apps and games on Google Play. We caught up with him to find out his motivations for <a href="https://play.google.com/store/apps/details?id=com.disappster.skyalert" target="_blank">SkyAlert</a>, the impact the app’s had and what his future plans are.</p><br/>
<h4><span style="font-size: large;">What was the inspiration behind SkyAlert?</span></h4>
<p>Being in Colima near the epicenter of a massive earthquake as a kid had a huge impact on me. I remember feeling powerless to nature and very vulnerable watching everything falling apart around me. I was struck by how quick and smart you had to be to get to a safe place in time. I remember hugging my family once it was over and looking towards the sea to watch out for an impending tsunami – which fortunately didn’t hit my region badly. It was at this moment that I became determined to find out what had caused this catastrophe and what could be done to prevent it being so destructive another time.</p>
<p>Through my research, I learned that Mexico sits on five tectonic plates and, as a result, it is particularly prone to earthquakes. In fact, there've been seven major quakes in the last seven years, with hundreds losing their lives. Reducing the threat of earthquakes is my number one goal and the motivation behind <a href="https://play.google.com/store/apps/details?id=com.disappster.skyalert" target="_blank">SkyAlert</a>. The technology we’ve developed can detect the warning signs of an earthquake early on, deliver alerts to vulnerable people and hopefully save lives.</p><br/>
<h4><span style="font-size: large;">How does SkyAlert work exactly?</span></h4>
<p><a href="https://play.google.com/store/apps/details?id=com.disappster.skyalert" target="_blank">SkyAlert</a> collects data from a network of sensors and translates that information into alerts. People can put their zip code in order to filter updates for their locality. We’re constantly investing in getting the most reliable and fast technology available so we can make the service as timely and effective as possible.</p><br/>
<h4><span style="font-size: large;">Did you always imagine you’d be an entrepreneur?</span></h4>
<p>Since I was a kid I knew I wanted to be an entrepreneur. This was inspired by my grandfather who ran a large candy company with factories all over Mexico. However, what I really wanted, beyond just running my own company, was to have a positive social impact and change lives for the better: a feat I feel proud to have achieved with <a href="https://play.google.com/store/apps/details?id=com.disappster.skyalert" target="_blank">SkyAlert</a>.</p><br/>
<h4><span style="font-size: large;">How is Google Play helping your app to grow?</span></h4>
<p>Being on Google Play helps us to reach the maximum number of people. We’ve achieved some amazing numbers in the last 10 years through Google Play, with over 7 million downloads. With 35% of our income coming from Google Play, this reach has helped us invest in new technologies and sensors.</p>
<p>We also often receive advice from Google Play and they invite us to meetings to tell us how to do better and how to make the most of the platform. Google Play is a close partner that we feel really takes care of us.</p><br/>
<h4><span style="font-size: large;">What impact has SkyAlert had on the people of Mexico?</span></h4>
<p>The biggest advantage of <a href="https://play.google.com/store/apps/details?id=com.disappster.skyalert" target="_blank">SkyAlert</a> is that it helps people prepare for an earthquake. In 2017, we were able to notify people of a massive quake 12 seconds before it hit Mexico City. At least with those few seconds, many were able to get themselves to a safe place. Similarly, with a large earthquake in Oaxaca, we were able to give a warning of over a minute, allowing teachers to get students in schools away from infrastructure – saving kids’ lives.</p>
<p>Also, many find having <a href="https://play.google.com/store/apps/details?id=com.disappster.skyalert" target="_blank">SkyAlert</a> on their phone gives them peace of mind, knowing they’ll have some warning before an earthquake strikes. This can be very reassuring. </p><br/>
<h4><span style="font-size: large;">What does the future look like for SkyAlert?</span></h4>
<p>We’re working hard to expand our services into new risk areas like flooding, storms and wildfires. The hope is to become a global company that can deliver alerts on a variety of natural phenomena in countries around the world. </p>
<br/><p>Read more about Álvaro and other inspiring app and game founders featured in <a href="https://play.google.com/console/about/weareplay/" target="_blank">#WeArePlay</a>.</p>
<br /><br />
<p></p><center>
How useful did you find this blog post?
</center><p></p>
<p></p><center>
<a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=1%E2%98%85+%E2%80%93+Not+at+all&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a> <a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=2%E2%98%85+%E2%80%93+Not+very&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a> <a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=3%E2%98%85+%E2%80%93+Somewhat&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a> <a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=4%E2%98%85+%E2%80%93+Very&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a> <a href="https://docs.google.com/forms/d/e/1FAIpQLScLTlzFd_aV-3rAdBqO1QxwCsuAcDCIM6fJFXyNcyf7zElVXg/viewform?usp=pp_url&entry.753333049=5%E2%98%85+%E2%80%93+Extremely&entry.2056663615&entry.646747778=changeme-mm/yy" target="_blank">★</a>
</center><p></p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxnvf5XzyebGljqgi8J7by9XsvaL5WWLNamSyTg4w_c0kPHjY1B1no1FQsdSfUHUwNw0UuYfXuo6i7K6VpdyV2-K8ms4VFmZA438ujm6qwtZMt5rJfMGymhQ_iCX2Z8h0MuJo7gXUQ4Hb-hjKYK1isibMZZxCyX_vo3kGoP5tieK2PI2ZhYdGh4gsqhwA/s260/lockup_ic_Google_Play_H_260x53px_clr.png" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="53" data-original-width="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxnvf5XzyebGljqgi8J7by9XsvaL5WWLNamSyTg4w_c0kPHjY1B1no1FQsdSfUHUwNw0UuYfXuo6i7K6VpdyV2-K8ms4VFmZA438ujm6qwtZMt5rJfMGymhQ_iCX2Z8h0MuJo7gXUQ4Hb-hjKYK1isibMZZxCyX_vo3kGoP5tieK2PI2ZhYdGh4gsqhwA/s200/lockup_ic_Google_Play_H_260x53px_clr.png" width="200" /></a></div>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-63812388772775581892024-01-16T12:00:00.000-08:002024-01-30T09:18:58.083-08:00YouTube Ads Creative Analysis<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTTmoAcWFFyu228lNH5yk_G2AYZ-B-rdGwGy1wZnNcNcFMw0Yv-zWgqe3fT2TeieJFALaU_7Rqm4eLfEyXiJq4UIg7RcBZmMkQqQPy0SMfjCYZubg716BUYfUyyKJVBbuchC-BauJoHEQ9vusfll9JhBvpyht7kQaEtprrba0kbEzBRIrxEfW8v7pUSrw/s1600/social-YouTube-Ads-Creative-Analysis.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTTmoAcWFFyu228lNH5yk_G2AYZ-B-rdGwGy1wZnNcNcFMw0Yv-zWgqe3fT2TeieJFALaU_7Rqm4eLfEyXiJq4UIg7RcBZmMkQqQPy0SMfjCYZubg716BUYfUyyKJVBbuchC-BauJoHEQ9vusfll9JhBvpyht7kQaEtprrba0kbEzBRIrxEfW8v7pUSrw/s1600/social-YouTube-Ads-Creative-Analysis.png" style="display: none;" />
<em>Posted by Brian Craft, Satish Shreenivasa, Huikun Zhang, Manisha Arora and Paul Cubre – gTech Data Science Team</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAQ7YT9_VoORGLUTftk1BJxjS818Z33NaDsiwMA2DWO-RfCE_WcIlyy0NbnCDniF1odWRYuF6hhQvUh6Np2B0RS1NEUiBLlLYdO18DPjZVW8cgkE8uLtVVHFI_8-aQ4Ya5yZ1a6sI6IY3NC1Xf7xhJDJQ1ZYTU8uSzHUQIL8SUswjp6YULbmgDbK1eBU8/s1600/header-YouTube-Ads-Creative-Analysis%20%281%29.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAQ7YT9_VoORGLUTftk1BJxjS818Z33NaDsiwMA2DWO-RfCE_WcIlyy0NbnCDniF1odWRYuF6hhQvUh6Np2B0RS1NEUiBLlLYdO18DPjZVW8cgkE8uLtVVHFI_8-aQ4Ya5yZ1a6sI6IY3NC1Xf7xhJDJQ1ZYTU8uSzHUQIL8SUswjp6YULbmgDbK1eBU8/s1600/header-YouTube-Ads-Creative-Analysis%20%281%29.png" /></a>
<a name='more'></a><p></p>
<br /><h2>Introduction</h2><br />
<h4>Why analyze YouTube ads?</h4>
<p>YouTube has billions of monthly logged-in users and every day people watch billions of hours of video and generate billions of views. Businesses can connect with YouTube users using YouTube ads, which are promotional videos that appear on YouTube's website and app, with a variety of <a href="https://support.google.com/google-ads/answer/2375464?hl=en&ref_topic=10275346&sjid=12248947479419748728-NA" target="_blank">video ad formats</a> and goals.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Image of a sample YouTube in-stream skippable video ad" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwUIUxAKlWejikd0pZsMuHBnxJXkfF3qESngImN66-EaxxKwe0JkVhvVG50dpCmt2r5JL0D9HnfTC81BKibVfmyAYi2ry26QFdjDefRGa-mo65R1f78l8nu-qC0nxXgcA5jkNRVFTheYqp_FsLhBOzJuCQm88KUEPprI6l73nZts9LayRVg7VUjUp7Das/s1600/image2.png" style="margins: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>A sample YouTube in-stream skippable video ad</em></td></tr></tbody></table></image><br />
<h4>The Challenge</h4>
<p>An effective video ad focuses on the <a href="https://www.youtube.com/intl/en_us/ads/abcds-of-effective-video-ads/" target="_blank">ABCD</a>s.</p>
<ul><blockquote>
<li>Attention: Capturing the viewer's attention until the end.</li>
<li>Branding: Helping them hear or visualize the brand.</li>
<li>Connection: Making them feel something about the brand.</li>
<li>Direction: Encouraging them to take action.</li>
</blockquote></ul>
<p>But each YouTube ad has a varying number of components, for instance objects, background music, or a logo. Each of these components affects the <a href="https://support.google.com/authorizedbuyers/answer/2961338?hl=en" target="_blank">view-through rate</a> (referred to as VTR for the remainder of the post) of the video ad. Therefore, analyzing video ads through the lens of their components helps businesses understand what about an ad improves VTR. The insights from these analyses can be used to inform the creation of new creatives and to optimize existing creatives to improve VTR.</p><br />
<h4>The Proposal</h4>
<p>We propose a machine learning-based approach for analyzing a company’s YouTube ads to assess which components affect VTR, for the purpose of optimizing a video ad’s performance. We illustrate how to:</p>
<ul><blockquote>
<li>Use <a href="https://cloud.google.com/video-intelligence/docs/how-to" target="_blank">Google Cloud Video Intelligence API</a> to extract the components of each video ad, using the underlying video files.</li>
<li>Transform that extracted data into engineered features that map to actionable business questions.</li>
<li>Use a machine learning model to isolate the effect on VTR of each engineered feature.</li>
<li>Interpret and act on those insights to improve video ad performance, for instance by altering existing creatives or creating new creatives to be used in an A/B test.</li>
</blockquote></ul>
<br /><h2>Approach</h2><br />
<h4>The Process</h4>
<p>The proposed analysis has 5 steps, discussed below.</p>
<b>1. Define Business Questions</b>
<div style="text-align: left;">Align on a list of business questions that are actionable, for instance “does having a logo in the opening shot affect VTR?” We suggest taking feasibility into account ahead of time; for instance, if a product disclaimer is required for legal reasons, there is no reason to assess the impact the disclaimer has on VTR.</div><div style="text-align: left;"><br />
<b>2. Raw Component Extraction</b>
<div style="text-align: left;">Use Google Cloud technologies, such as the Google Cloud Video Intelligence API, and the underlying video files to extract raw components from each video ad: for instance, objects appearing in the video at a particular timestamp, the presence of text and its location on the screen, or the presence of specific sounds.</div><div style="text-align: left;"><br />
<b>3. Feature Engineering</b>
<div style="text-align: left;">Using the raw components extracted in step 2, engineer features that align to the business questions defined in step 1. For example, if the business question is “does having a logo in the opening shot affect VTR”, create a feature that labels each video as either 1, having a logo in the opening shot or 0, not having a logo in the opening shot. Repeat this for each feature.</div><div style="text-align: left;"><br />
<b>4. Modeling</b>
<div style="text-align: left;">Create an ML model using the engineered features from step 3, using VTR as the target in the model.</div><div style="text-align: left;"><br />
<b>5. Interpretation</b>
<div style="text-align: left;">Extract statistically significant features from the ML model and interpret their effect on VTR. For example, “there is an xx% observed uplift in VTR when there is a logo in the opening shot.”</div><div style="text-align: left;"><br />
<br /><h2>Feature Engineering</h2><br />
<h4>Data Extraction</h4>
<p>Consider two different YouTube video ads for a web browser, each highlighting a different product feature. Ad A has text that says “Built In Virus Protection”, while Ad B has text that says “Automatic Password Saving”.</p>
<p>The raw text can be extracted from each video ad and used to create tabular datasets, such as the one below. For brevity and simplicity, the example carried forward deals with text features only and forgoes the timestamp dimension.</p>
<div align="left">
<table style="border-collapse: collapse; border: none; margin-right: calc(0%); width: 100%;">
<tbody>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 16.321%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Ad</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 83.4281%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Detected Raw Text</span></strong></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 16.321%;">
<p><span style="color: #202124; font-size: 12pt;"> Ad A</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 83.4281%;">
<p><span style="color: #202124; font-size: 12pt;"> Built In Virus Protection</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 16.321%;">
<p><span style="color: #202124; font-size: 12pt;"> Ad B</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 83.4281%;">
<p><span style="color: #202124; font-size: 12pt;"> Automatic Password Saving</span></p>
</td>
</tr>
</tbody>
</table>
</div><br />
<h4>Preprocessing</h4>
<p>After extracting the raw components from each ad, preprocessing may need to be applied, such as lowercasing the text and removing punctuation.</p>
<div align="left">
<table style="border-collapse: collapse; border: none; margin-right: calc(0%); width: 100%;">
<tbody>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2338%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Ad</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 31.7695%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Detected Raw Text</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 39.8742%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Processed Text</span></strong></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2338%;">
<p><span style="color: #202124; font-size: 12pt;"> Ad A</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 31.7695%;">
<p><strong><span style="color: red; font-size: 12pt;"> B</span></strong><span style="color: #202124; font-size: 12pt;">uilt </span><strong><span style="color: red; font-size: 12pt;">I</span></strong><span style="color: #202124; font-size: 12pt;">n </span><strong><span style="color: red; font-size: 12pt;">V</span></strong><span style="color: #202124; font-size: 12pt;">irus </span><strong><span style="color: red; font-size: 12pt;">P</span></strong><span style="color: #202124; font-size: 12pt;">rotection</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 39.8742%;">
<p><strong><span style="color: #38761d; font-size: 12pt;"> b</span></strong><span style="color: #202124; font-size: 12pt;">uilt </span><strong><span style="color: #38761d; font-size: 12pt;">i</span></strong><span style="color: #202124; font-size: 12pt;">n </span><strong><span style="color: #38761d; font-size: 12pt;">v</span></strong><span style="color: #202124; font-size: 12pt;">irus </span><strong><span style="color: #38761d; font-size: 12pt;">p</span></strong><span style="color: #202124; font-size: 12pt;">rotection</span></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2338%;">
<p><span style="color: #202124; font-size: 12pt;"> Ad B</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 31.7695%;">
<p><strong><span style="color: red; font-size: 12pt;"> A</span></strong><span style="color: #202124; font-size: 12pt;">utomatic </span><strong><span style="color: red; font-size: 12pt;">P</span></strong><span style="color: #202124; font-size: 12pt;">assword </span><strong><span style="color: red; font-size: 12pt;">S</span></strong><span style="color: #202124; font-size: 12pt;">aving</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 39.8742%;">
<p><strong><span style="color: #38761d; font-size: 12pt;"> a</span></strong><span style="color: #202124; font-size: 12pt;">utomatic </span><strong><span style="color: #38761d; font-size: 12pt;">p</span></strong><span style="color: #202124; font-size: 12pt;">assword </span><strong><span style="color: #38761d; font-size: 12pt;">s</span></strong><span style="color: #202124; font-size: 12pt;">aving</span></p>
</td>
</tr>
</tbody>
</table>
</div><br />
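<p>One way to implement this preprocessing in Python, assuming only case and punctuation need normalizing:</p>

```python
import string

def preprocess(raw_text: str) -> str:
    """Lowercase and strip punctuation so tokens compare consistently."""
    return raw_text.lower().translate(str.maketrans("", "", string.punctuation))

print(preprocess("Built In Virus Protection"))   # built in virus protection
print(preprocess("Automatic Password Saving"))   # automatic password saving
```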
<h4>Manual Feature Engineering</h4>
<p>Consider a scenario where the goal is to answer the business question, “does having a textual reference to a product feature affect VTR?” </p>
<p>This feature could be built manually by exploring all the text in all the videos in the sample and creating a list of tokens or phrases that indicate a textual reference to a product feature. However, this approach is time-consuming and limits scaling.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Image of pseudo code for manual feature engineering" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-hqG6Cp82Pm2ALhOZ0iWMCgI7ksDiuOQ829mMGs3BO6Cw7nmsPS7t_5-8mrsQhzSQaRkq_KYHlMHh2nNBy_mov7Kn1JGk-9ReI1RH8nSeuk-VfqWSiFn4lsogfpUqOxw1MyE92KTPOlc3vpttI9le5-vHEQKe1Cbfu1Ej48bfmfuO-5tB7r3sZ5pVeN4/s1600/image1.png" style="width: 75%;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Pseudo code for manual feature engineering</em></td></tr></tbody></table></image><br />
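<p>A runnable version of the pseudo code above might look like this; the token list is purely illustrative and would need to be curated per campaign:</p>

```python
# Hypothetical curated tokens indicating a product-feature reference.
FEATURE_TOKENS = {"protection", "saving", "built in"}

def has_feature_reference(processed_text: str) -> int:
    """Return 1 if any curated token appears in the ad's text, else 0."""
    return int(any(token in processed_text for token in FEATURE_TOKENS))

print(has_feature_reference("built in virus protection"))  # 1
print(has_feature_reference("limited time offer"))         # 0
```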
<h4>AI Based Feature Engineering</h4>
<p>Instead of manual feature engineering as described above, the text detected in each video ad creative can be passed to an <a href="https://cloud.google.com/ai/llms" target="_blank">LLM</a> along with a prompt that performs the feature engineering automatically.</p>
<p>For example, if the goal is to explore the value of highlighting a product feature in a video ad, ask an LLM whether the text “built in virus protection” is a feature callout, followed by asking whether the text “automatic password saving” is a feature callout.</p>
<p>The answers can be extracted and transformed to a 0 or 1, to later be passed to a machine learning model.</p>
<div align="left">
<table style="border-collapse: collapse; border: none; margin-right: calc(0%); width: 100%;">
<tbody>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 7.8537%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Ad</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 29.2866%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Raw Text</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2308%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Processed Text</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 34.5045%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Has Textual Reference to Feature</span></strong></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 7.8537%;">
<p><span style="color: #202124; font-size: 12pt;"> Ad A</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 29.2866%;">
<p><strong><span style="color: red; font-size: 12pt;"> B</span></strong><span style="color: #202124; font-size: 12pt;">uilt </span><strong><span style="color: red; font-size: 12pt;">I</span></strong><span style="color: #202124; font-size: 12pt;">n </span><strong><span style="color: red; font-size: 12pt;">V</span></strong><span style="color: #202124; font-size: 12pt;">irus </span><strong><span style="color: red; font-size: 12pt;">P</span></strong><span style="color: #202124; font-size: 12pt;">rotection</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2308%;">
<p><strong><span style="color: #38761d; font-size: 12pt;"> b</span></strong><span style="color: #202124; font-size: 12pt;">uilt </span><strong><span style="color: #38761d; font-size: 12pt;">i</span></strong><span style="color: #202124; font-size: 12pt;">n </span><strong><span style="color: #38761d; font-size: 12pt;">v</span></strong><span style="color: #202124; font-size: 12pt;">irus </span><strong><span style="color: #38761d; font-size: 12pt;">p</span></strong><span style="color: #202124; font-size: 12pt;">rotection</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 34.5045%;">
<p>
<font color="#202124"> Yes</font>
</p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0); width: 7.8537%;">
<p><span style="color: #202124; font-size: 12pt;"> Ad B</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 29.2866%;">
<p><strong><span style="color: red; font-size: 12pt;"> A</span></strong><span style="color: #202124; font-size: 12pt;">utomatic </span><strong><span style="color: red; font-size: 12pt;">P</span></strong><span style="color: #202124; font-size: 12pt;">assword </span><strong><span style="color: red; font-size: 12pt;">S</span></strong><span style="color: #202124; font-size: 12pt;">aving</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 28.2308%;">
<p><strong><span style="color: #38761d; font-size: 12pt;"> a</span></strong><span style="color: #202124; font-size: 12pt;">utomatic </span><strong><span style="color: #38761d; font-size: 12pt;">p</span></strong><span style="color: #202124; font-size: 12pt;">assword </span><strong><span style="color: #38761d; font-size: 12pt;">s</span></strong><span style="color: #202124; font-size: 12pt;">aving</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 34.5045%;">
<p>
<font color="#202124"> Yes</font>
</p>
</td>
</tr>
</tbody>
</table>
</div><br />
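<p>The LLM call itself is provider-specific, so it is stubbed out below; <code>ask_llm</code> is a hypothetical client call (for example, to a Vertex AI text model). What the sketch does show is building the yes/no prompt and mapping the free-text answer to a 0/1 feature:</p>

```python
def build_prompt(ad_text: str) -> str:
    """Frame the business question as a yes/no prompt for an LLM."""
    return (
        f"Answer Yes or No: is the text '{ad_text}' a callout "
        "of a product feature?"
    )

def to_binary(llm_answer: str) -> int:
    """Map the LLM's free-text answer to a model-ready 0/1 feature."""
    return int(llm_answer.strip().lower().startswith("yes"))

# ask_llm is a hypothetical stand-in for a real LLM client call:
# feature = to_binary(ask_llm(build_prompt("built in virus protection")))

print(to_binary("Yes, this highlights a product feature."))  # 1
print(to_binary("No."))                                      # 0
```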
<br /><h2>Modeling</h2><br />
<h4>Training Data</h4>
<p>The result of the feature engineering step is a dataframe with columns that align to the initial business questions, which can be joined to a dataframe that has the VTR for each video ad in the sample.</p>
<div align="left">
<table style="border-collapse: collapse; border: none; margin-right: calc(0%); width: 100%;">
<thead>
<tr>
<th scope="col" style="border: 1pt solid rgb(0, 0, 0); text-align: left; width: 27.1429%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Ad</span></strong></p>
</th>
<th scope="col" style="border: 1pt solid rgb(0, 0, 0); text-align: left; width: 45.7448%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Has Textual Reference to Feature</span></strong></p>
</th>
<th scope="col" style="border: 1pt solid rgb(0, 0, 0); text-align: left; width: 26.9026%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> VTR*</span></strong></p>
</th>
</tr>
<tr>
<th scope="col" style="border: 1pt solid rgb(0, 0, 0); text-align: left; width: 27.1429%;">
<p><span style="color: #202124; font-size: 12pt;"> <span style="font-weight: normal;">Ad A</span></span></p>
</th>
<th scope="col" style="border: 1pt solid rgb(0, 0, 0); text-align: left; width: 45.7448%;">
<p><span style="color: #202124; font-size: 12pt; font-weight: normal;"> Yes</span></p>
</th>
<th scope="col" style="border: 1pt solid rgb(0, 0, 0); text-align: left; width: 26.9026%;">
<p><span style="color: #202124; font-size: 12pt; font-weight: normal;"> 10%</span></p>
</th>
</tr>
<tr>
<th scope="col" style="border: 1pt solid rgb(0, 0, 0); text-align: left; width: 27.1429%;">
<p><span style="color: #202124; font-size: 12pt; font-weight: normal;"> Ad B</span></p>
</th>
<th scope="col" style="border: 1pt solid rgb(0, 0, 0); text-align: left; width: 45.7448%;">
<p>
<span style="font-weight: normal;"><font color="#202124"> Yes</font>
</span></p>
</th>
<th scope="col" style="border: 1pt solid rgb(0, 0, 0); text-align: left; width: 26.9026%;">
<p><span style="color: #202124; font-size: 12pt; font-weight: normal;"> 50%</span></p>
</th>
</tr>
</thead>
</table>
</div><br />
<span style="font-size: small;"><b><i>*Values are random and not to be interpreted in any way.</i></b></span>
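<p>Assembling that training data amounts to a join on the ad identifier; a minimal sketch with illustrative values:</p>

```python
# Engineered features and observed VTR, keyed by ad ID (values illustrative).
features = {"Ad A": {"has_feature_reference": 1},
            "Ad B": {"has_feature_reference": 1}}
vtr = {"Ad A": 0.10, "Ad B": 0.50}

# Join the two on the ad ID to produce one row per ad.
training_data = [
    {"ad": ad, **cols, "vtr": vtr[ad]} for ad, cols in features.items()
]
print(training_data)
```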
<p>Modeling is done using fixed effects, bootstrapping, and ElasticNet. More information can be found in the post <a href="https://developers.googleblog.com/2022/09/introducing-discovery-ad-performance-analysis.html" target="_blank">Introducing Discovery Ad Performance Analysis</a>, written by Manisha Arora and Nithya Mahadevan.</p>
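<p>As a simplified sketch of the modeling step (fixed effects omitted for brevity, and the regularization strength chosen arbitrarily), ElasticNet can be fit on bootstrap resamples so that each feature's coefficient gets a distribution, from which a mean and standard deviation follow. scikit-learn is assumed:</p>

```python
import random

def bootstrap_coefficients(X, y, n_boot=200, seed=0):
    """Fit ElasticNet on bootstrap resamples of (X, y) and collect the
    coefficient vectors, giving each feature a sampling distribution."""
    from sklearn.linear_model import ElasticNet  # deferred; needs scikit-learn

    rng = random.Random(seed)
    n = len(y)
    coefs = []
    for _ in range(n_boot):
        # Resample rows with replacement and refit the model on the sample.
        idx = [rng.randrange(n) for _ in range(n)]
        model = ElasticNet(alpha=0.01).fit([X[i] for i in idx],
                                           [y[i] for i in idx])
        coefs.append(list(model.coef_))
    return coefs
```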
<h4>Interpretation</h4>
<p>The model output can be used to extract significant features, coefficient values, and standard deviation.</p>
<u>Coefficient Value (+/- X%)</u>
<div style="text-align: left;">Represents the absolute percentage uplift in VTR. A positive value indicates a positive impact on VTR, and a negative value indicates a negative impact.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">
<u>Significant Value (True/False)</u>
<div style="text-align: left;">Represents whether the feature has a statistically significant impact on VTR.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">
<div align="left">
<table style="border-collapse: collapse; border: none; margin-right: calc(0%); width: 100%;">
<tbody>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><strong><span style="color: #202124; font-size: 12pt;"> Feature</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 17.6864%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Coefficient*</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 24.8481%;">
<p><strong><span style="color: #202124; font-size: 12pt;"> Standard Deviation*</span></strong></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><strong><span style="color: #202124; font-size: 12pt;"> Significant?*</span></strong></p>
</td>
</tr>
<tr>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p><span style="color: #202124; font-size: 12pt;"> Has Textual Reference to Feature</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 17.6864%;">
<p style="text-align: right;"><span style="color: #202124; font-size: 12pt;">0.0222</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0); width: 24.8481%;">
<p style="text-align: right;"><span style="color: #202124; font-size: 12pt;">0.000033</span></p>
</td>
<td style="border: 1pt solid rgb(0, 0, 0);">
<p style="text-align: right;"><span style="color: #202124; font-size: 12pt;">True</span></p>
</td>
</tr>
</tbody>
</table>
</div><br />
<span style="font-size: small;"><b><i>*Values are random and not to be interpreted in any way.</i></b></span>
<p>In the above hypothetical example, the feature “Has Textual Reference to Feature” has a statistically significant, positive impact on VTR. This can be interpreted as “there is an observed 2.22% absolute uplift in VTR when an ad has a textual reference to a product feature.”</p>
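<p>Turning model output into statements like the one above can be automated; the sketch below uses a simple z-test against the bootstrap standard deviation as a stand-in for the post's actual significance criterion:</p>

```python
def interpret(feature, coef, std, z=1.96):
    """Translate a coefficient into a plain-language finding; significance
    here is a simplified z-test against the bootstrap standard deviation."""
    significant = abs(coef) > z * std
    direction = "uplift" if coef >= 0 else "drop"
    return (f"{feature}: {abs(coef) * 100:.2f}% absolute {direction} in VTR "
            f"(significant: {significant})")

print(interpret("Has Textual Reference to Feature", 0.0222, 0.000033))
```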
<h4>Challenges</h4>
<p>Challenges of the above approach are:</p>
<ul><blockquote>
<li>Interactions among the individual features input into the model are not considered. For example, if “has logo” and “has logo in the lower left” are individual features in the model, their interaction will not be assessed. However, a third feature can be engineered to combine the two, such as “has logo and logo is in the lower left”.</li>
<li>Inferences are based on historical data and not necessarily representative of future ad creative performance. There is no guarantee that insights will improve VTR.</li>
<li>Dimensionality can be a concern, given the number of components in a video ad.</li>
</blockquote></ul>
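<p>The interaction workaround in the first point can be as simple as engineering a combined 0/1 column by hand (the feature names here are hypothetical):</p>

```python
# Two individual 0/1 features per ad, combined into an interaction feature
# so the model can see the combination as its own column.
ads = [{"has_logo": 1, "logo_lower_left": 1},
       {"has_logo": 1, "logo_lower_left": 0}]

for ad in ads:
    ad["has_logo_in_lower_left"] = ad["has_logo"] & ad["logo_lower_left"]

print([ad["has_logo_in_lower_left"] for ad in ads])  # [1, 0]
```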
<br /><h2>Activation Strategies</h2><br />
<h4>Ads Creative Studio</h4>
<p><a href="https://adscreativestudio.google.com/home/" target="_blank">Ads Creative Studio</a> is an effective tool for businesses to create multiple versions of a video by quickly combining text, images, video clips or audio. Use this tool to create new videos quickly by adding/removing features in accordance with model output.</p>
<image><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Image of sample video creation features in Ads Creative Studio" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEis6E6U8copw1ohGkSxlVgTSyHGl7BsEMFWHvMBZ50NkBojjUaoXiWDstDmUDX76F6wvCAteF0tN11zdbQ86MJ8BPjf8OirV0YrYaZnbUZSioZ8oKsoTtQGtXCph8_K-9AS3tEHYCaPFGya6o_ExrX7ibjcZUx8RnSXKamdGB9Fj-QfYxTdxgx05aKhs1Q/s1600/image3.png" style="margin: auto; width: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"><em>Sample video creation features in Ads Creative Studio</em></td></tr></tbody></table></image><br />
<h4>Video Experiments</h4>
<p>Design a new creative, varying a component based on the insights from the analysis, and run an A/B test. For example, change the size of the logo and set up an experiment using Video Experiments.</p>
<br /><h2>Summary</h2><br />
<p>Identifying which components of a YouTube ad affect VTR is difficult due to the number of components contained in the ad, but there is an incentive for advertisers to optimize their creatives to improve VTR. Google Cloud technologies, GenAI models, and ML can be used to answer creative-centric business questions in a scalable and actionable way. The resulting insights can be used to optimize YouTube ads and achieve business outcomes.</p><br/>
<h3>Acknowledgements</h3>
<p>We would like to thank our collaborators at Google, specifically Luyang Yu, Vijai Kasthuri Rangan, Ahmad Emad, Chuyi Wang, Kun Chang, Mike Anderson, Yan Sun, Nithya Mahadevan, Tommy Mulc, David Letts, Tony Coconate, Akash Roy Choudhury, Alex Pronin, Toby Yang, Felix Abreu and Anthony Lui.</p>Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-58916927861872888232024-01-10T12:00:00.000-08:002024-02-09T14:33:32.991-08:00Solution Challenge 2024 – Using Google Technology to Address UN Sustainable Development Goals<meta name="twitter:image" content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRavhCZYTo-_i8WSXskxbCHZQ_aqdpgAD0imwMp-BLrni3Ea3ICdlTmNLQl3NX-fW8O1DqhN8pOgSB5YllqlPqbua6AkG0OVVyEBndseFxN2vcvOIQzARHcAMGdKDYPQmLkJV1EjObWEJkTxQF30Q55qA2ZWMim-JIG9CPDcUKkmkpondMe4_XKGj1Y-o/s1600/Social-GDSC-Solutions-Challenge.png">
<img style="display:none" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRavhCZYTo-_i8WSXskxbCHZQ_aqdpgAD0imwMp-BLrni3Ea3ICdlTmNLQl3NX-fW8O1DqhN8pOgSB5YllqlPqbua6AkG0OVVyEBndseFxN2vcvOIQzARHcAMGdKDYPQmLkJV1EjObWEJkTxQF30Q55qA2ZWMim-JIG9CPDcUKkmkpondMe4_XKGj1Y-o/s1600/Social-GDSC-Solutions-Challenge.png">
<em>Posted by Rachel Francois, Global Program Manager, Google Developer Student Clubs</em>
<a name='more'></a><p></p>
<h2>Google Developer Student Clubs celebrates 5 years of innovative solutions built by university students</h2><br/>
<iframe class="BLOG_video_class" allowfullscreen="" youtube-src-id="ljFnld8GY40" width="100%" height="400" src="https://www.youtube.com/embed/ljFnld8GY40"></iframe>
<p>This year marks the 5-year anniversary of the Google Developer Student Clubs Solution Challenge! For the past five years, the Solution Challenge has invited university students to use Google technologies to develop solutions for real-world problems.</p>
<h3>Since 2020:</h3>
<ul><blockquote>
<li>110+ countries have participated</li>
<li>4,000+ projects have been submitted</li>
<li>1,000+ chapters have participated</li>
</blockquote></ul>
<p>The project solutions address one or more of the United Nations 17 Sustainable Development Goals, which aim to end poverty, ensure prosperity, and protect the planet by 2030. The goals were agreed upon by all 193 United Nations Member States in 2015. </p>
<p>If you have an idea for how you could use Android, Firebase, TensorFlow, Google Cloud, Flutter, or another Google product to promote employment for all, economic growth, and climate action, enter the 2024 GDSC Solution Challenge and share your ideas!</p>
<h3>Solution Challenge prizes</h3>
<p>Check out the many great prizes you can win by participating:</p>
<ul><blockquote>
<li><b>Top 100</b> teams receive branded swag, a certificate, and personalized mentorship from Google and experts to help further their solution ideas.</li>
<li><b>Final 10</b> teams receive a swag box, additional mentorship, and the opportunity to showcase their project solutions to Google teams and developers worldwide during the virtual 2024 Solution Challenge Demo Day, live on YouTube. Additional cash prize of $1,000 per student. Winnings for each qualifying team will not exceed $4,000.</li>
<li><b>Winning 3</b> teams receive a swag box, and each individual receives a cash prize of $3,000 and a feature on the Google Developers Blog. Winnings for each qualifying team will not exceed $12,000.</li>
</blockquote></ul>
<h3>Joining the Solution Challenge</h3>
<p>To join the Solution Challenge and get started on your project:</p>
<ul><blockquote>
<li>Register at <a href="https://goo.gle/solutionchallenge" target="_blank">goo.gle/SolutionChallenge</a> and join a <a href="https://gdsc.community.dev/" target="_blank">Google Developer Student Club</a> at your college or university. If there is no club at your university, visit our <a href="https://gdsc.community.dev/" target="_blank">event platform</a> to find the nearest one.</li>
<li>Select which of the <a href="https://developers.google.com/community/dsc-solution-challenge/UN-goals" target="_blank">United Nations 17 Sustainable Development Goals</a> you want to address.</li>
<li>Build a solution using Google technology.</li>
<li>Create a demo video and submit your project by February 22, 2024.</li>
</blockquote></ul>
<h3>Google resources for Solution Challenge participants</h3>
<p>Google supports Solution Challenge participants with resources to build strong projects, including:</p>
<ul><blockquote>
<li>Live online Q&A sessions</li>
<li>Mentorship from Googlers, Google Developer Experts, and the Google Developer Student Club community</li>
<li>Curated codelabs designed by Google for Developers</li>
<li>Access to Design Sprint guidelines developed by Google Ventures</li>
</blockquote></ul>
<p>and so much more!</p>
<h3>Winner announcement dates</h3>
<p>Once all projects are submitted, our panel of judges will evaluate and score each submission using specific criteria. After that, winners will be announced in three rounds:</p>
<ul><blockquote>
<li>Round 1 (April): Top 100 teams will be announced.</li>
<li>Round 2 (May): Final 10 teams will be announced.</li>
<li>Round 3 (June): The Winning 3 grand prize teams will be announced live on YouTube during the 2024 Solution Challenge Demo Day.</li>
</blockquote></ul>
<p>We're looking forward to seeing the solutions you create when you combine your enthusiasm for building a better world, coding skills, and help from Google technologies.</p>
<p>Learn more and sign up for the 2024 Solution Challenge <a href="http://goo.gle/solutionchallenge" target="_blank">here</a>.</p>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-51991754634793775912023-12-20T09:00:00.000-08:002023-12-20T09:00:00.121-08:00Navigating AI Safety & Compliance: A guide for CTOs<meta name="twitter:image" content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKWRvKr23JvK09cIPLDC3HOZZAghGmsyNLtw3GK3GgUFGjgfNSuemwWvB4aVtUrFAcHqT_Rjg5mzNFjYkSGZg7wHkfFdy92R5zAnydmJg95cLQGkftB6waAEDKxcEy5Z0JMjWKetSVsVhbHLoWyQbjQGuas49b4B5UBXvGuxruTelf0lqAkkm_5HWGQOg/s1600/Social-CTO-Guide-G4D%20%281%29.png">
<img style="display:none" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKWRvKr23JvK09cIPLDC3HOZZAghGmsyNLtw3GK3GgUFGjgfNSuemwWvB4aVtUrFAcHqT_Rjg5mzNFjYkSGZg7wHkfFdy92R5zAnydmJg95cLQGkftB6waAEDKxcEy5Z0JMjWKetSVsVhbHLoWyQbjQGuas49b4B5UBXvGuxruTelf0lqAkkm_5HWGQOg/s1600/Social-CTO-Guide-G4D%20%281%29.png">
<em>Posted by Fergus Hurley – Co-Founder & GM, <a href="https://checks.google.com" target="_blank">Checks</a>, and Pedro Rodriguez – Head of Engineering, <a href="https://checks.google.com" target="_blank">Checks</a></em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEht0ytc3eun_HJ9erVw_CBAhlBioBfjxtPAukXXjAfJ0_7yGNVvZTQoVtKapdXUDEmZ_9fsNt6WjdjJT-n7DdiI_PGQ6AjP4Zg-gWFr7_MpONwVtP4MN2PLuWHf6Q-K2PFhXiFRTnHshMRU9jveKr6ksQHXUNUjQNkJFyWapQ804efjRKy0KxgfoGUhxzY/s1600/Header%20%282%29.png" imageanchor="1" ><img style="100%" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEht0ytc3eun_HJ9erVw_CBAhlBioBfjxtPAukXXjAfJ0_7yGNVvZTQoVtKapdXUDEmZ_9fsNt6WjdjJT-n7DdiI_PGQ6AjP4Zg-gWFr7_MpONwVtP4MN2PLuWHf6Q-K2PFhXiFRTnHshMRU9jveKr6ksQHXUNUjQNkJFyWapQ804efjRKy0KxgfoGUhxzY/s1600/Header%20%282%29.png" data-original-width="1058" data-original-height="800" /></a>
<a name='more'></a><p></p>
<p>The rapid advances in generative artificial intelligence (GenAI) have brought about transformative opportunities across many industries. However, these advances have raised concerns about risks, such as privacy, misuse, bias, and unfairness. Responsible development and deployment is, therefore, a must.</p>
<p>AI applications are becoming more sophisticated, and developers are integrating them into critical systems. Therefore, the onus is on technology leaders, particularly CTOs and Heads of Engineering and AI – those responsible for leading the adoption of AI across their products and stacks – to ensure they use AI safely, ethically, and in compliance with relevant policies, regulations, and laws.</p>
<p>While comprehensive AI safety regulations are nascent, CTOs cannot wait for regulatory mandates before they act. Instead, they must adopt a forward-thinking approach to AI governance, incorporating safety and compliance considerations into the entire product development cycle.</p>
<p>This article is the first in a series to explore these challenges. To start, this article presents four key proposals for integrating AI safety and compliance practices into the product development lifecycle:</p><div style="text-align: left;"><br /></div>
<h3>1. <span> </span>Establish a robust AI governance framework</h3>
<p>Formulate a comprehensive AI governance framework that clearly defines the organization’s principles, policies, and procedures for developing, deploying, and operating AI systems. This framework should establish clear roles, responsibilities, accountability mechanisms, and risk assessment protocols.</p>
<p>Examples of emerging frameworks include the US National Institute of Standards and Technologies’ <a href="https://www.nist.gov/itl/ai-risk-management-framework" target="_blank">AI Risk Management Framework</a>, the <a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/" target="_blank">OSTP Blueprint for an AI Bill of Rights</a>, the <a href="https://artificialintelligenceact.eu/documents/" target="_blank">EU AI Act</a>, as well as <a href="https://safety.google/cybersecurity-advancements/saif/" target="_blank">Google’s Secure AI Framework (SAIF)</a>.</p>
<p>As your organization adopts an AI governance framework, it is crucial to consider the implications of relying on third-party foundation models. These considerations include the data from your app that the foundation model uses and your obligations based on the foundation model provider's terms of service.</p><div style="text-align: left;"><br /></div>
<h3>2. <span> </span>Embed AI safety principles into the design phase</h3>
<p>Incorporate AI safety principles, such as <a href="https://ai.google/responsibility/principles/" target="_blank">Google’s responsible AI principles</a>, into the design process from the outset.</p>
<p>AI safety principles involve identifying and mitigating potential risks and challenges early in the development cycle. For example, mitigate bias in training data or model inferences and ensure the explainability of model behavior. Use techniques such as adversarial training – red team testing of LLMs using prompts that look for unsafe outputs – to help ensure that AI models operate in a fair, unbiased, and robust manner.</p><div style="text-align: left;"><br /></div>
<h3>3. <span> </span>Implement continuous monitoring and auditing</h3>
<p>Track the performance and behavior of AI systems in real time with continuous monitoring and auditing. The goal is to identify and address potential safety issues or anomalies before they escalate into larger problems.</p>
<p>Look for key metrics like model accuracy, fairness, and explainability, and establish a baseline for your app and its monitoring. Beyond traditional metrics, look for unexpected changes in user behavior and AI model drift using a tool such as <a href="https://cloud.google.com/vertex-ai/docs/model-monitoring/overview" target="_blank">Vertex AI Model Monitoring</a>. Do this using data logging, anomaly detection, and human-in-the-loop mechanisms to ensure ongoing oversight.</p><div style="text-align: left;"><br /></div>
<h3>4. <span> </span>Foster a culture of transparency and explainability</h3>
<p>Drive AI decision-making through a culture of transparency and explainability. Encourage this culture by defining clear documentation guidelines, metrics, and roles so that all the team members developing AI systems participate in the design, training, deployment, and operations.</p>
<p>Also, provide clear and accessible explanations to cross-functional stakeholders about how AI systems operate, their limitations, and the available rationale behind their decisions. This information fosters trust among users, regulators, and stakeholders.</p><div style="text-align: left;"><br /></div>
<h2>Final word</h2>
<p>As AI's role in core and critical systems grows, proper governance is essential for its success and that of the systems and organizations using AI. The four proposals in this article should be a good start in that direction.</p>
<p>However, this is a broad and complex domain, which is what this series of articles is about. So, look out for deeper dives into the tools, techniques, and processes you need to safely integrate AI into your development and the apps you create.</p>
Google Developershttp://www.blogger.com/profile/11166847513477443177noreply@blogger.com0tag:blogger.com,1999:blog-596098824972435195.post-92179518192437694182023-12-19T14:00:00.000-08:002023-12-19T14:00:00.128-08:00Create smart chips for link previewing in Google Docs<meta content="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtgE1FsS2QL-dW-z9dt3fEyJXVwF0CJVm3YGcBExuyqbN1QSdu9Co-z2N9cBAhBIkz9H8g2Y6JsySioM2X99cMe9GBN5u_YunTVfaNBW2KRxnPDuEQo85Tu6qu6bI6955p9c_HoretZ2RXUVrKvZUAmRaG4mwqjQdiOrHHN_YF443uoyVgA_pRY-sNbbg/s320/social-Create-smart-chips-for-link-previewing-in-Google-Docs.png" name="twitter:image"></meta>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtgE1FsS2QL-dW-z9dt3fEyJXVwF0CJVm3YGcBExuyqbN1QSdu9Co-z2N9cBAhBIkz9H8g2Y6JsySioM2X99cMe9GBN5u_YunTVfaNBW2KRxnPDuEQo85Tu6qu6bI6955p9c_HoretZ2RXUVrKvZUAmRaG4mwqjQdiOrHHN_YF443uoyVgA_pRY-sNbbg/s320/social-Create-smart-chips-for-link-previewing-in-Google-Docs.png" style="display: none;" />
<em>Posted by <a href="https://twitter.com/ChanelGreco" target="_blank">Chanel Greco</a>, Developer Advocate</em>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbPzSwePlzrKW3r_hgjz0v_jNsNxjuW0Cv_NKvOhl7YIBJ7lZM6iHlOH_BLjsbeZEHEpBSMo9ouvZBzTbKSuLxzX1jTxKPrACTMW56cIkbtp2oO59Ai1XaIEkWTcApWYmVR9o6u21d53ksDNbj-s4wPeKrHhX6USkI-eFyjQBRSDPizR-u3iBCOHb4g9Y/s1600/header-Create-smart-chips-for-link-previewing-in-Google-Docs.png"><img border="0" data-original-height="800" data-original-width="1058" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbPzSwePlzrKW3r_hgjz0v_jNsNxjuW0Cv_NKvOhl7YIBJ7lZM6iHlOH_BLjsbeZEHEpBSMo9ouvZBzTbKSuLxzX1jTxKPrACTMW56cIkbtp2oO59Ai1XaIEkWTcApWYmVR9o6u21d53ksDNbj-s4wPeKrHhX6USkI-eFyjQBRSDPizR-u3iBCOHb4g9Y/s1600/header-Create-smart-chips-for-link-previewing-in-Google-Docs.png" /></a>
<a name='more'></a><p></p>
<p><a href="https://youtu.be/P69uyJYifMw" target="_blank">Earlier this year</a>, we announced the general availability of third-party smart chips in Google Docs. This new feature lets you add, view, and engage with critical information from third party apps directly in Google Docs. Several partners, including <a href="https://workspace.google.com/marketplace/app/asana/923474483785" target="_blank">Asana</a>, <a href="https://workspace.google.com/marketplace/app/atlassian_cloud/331162826608" target="_blank">Atlassian</a>, <a href="https://workspace.google.com/marketplace/app/figma/1035183305161" target="_blank">Figma</a>, <a href="https://workspace.google.com/marketplace/app/loom/574092966097" target="_blank">Loom</a>, <a href="https://workspace.google.com/marketplace/app/miro/1062019541050" target="_blank">Miro</a>, <a href="https://workspace.google.com/marketplace/app/tableau_cloud/1044537977580" target="_blank">Tableau</a>, and <a href="https://workspace.google.com/marketplace/app/whimsical/287554556504" target="_blank">Whimsical</a>, have already created smart chips so users can start embedding content from their apps directly into Docs. <a href="https://twitter.com/choraria" target="_blank">Sourabh Choraria</a>, a Google Developer Expert for Google Workspace and hobby developer, published a third-party smart chip solution called <a href="https://workspace.google.com/marketplace/app/link_previews/215545697761" target="_blank">“Link Previews”</a> to the Google Workspace Marketplace. This app adds information to Google Docs from multiple commonly used SaaS tools. </p>
<p>In this blog post, you'll learn how you can create your own smart chips for Google Docs.</p>
<image><div style="text-align: center;"><img alt="Example of a smart chip that was created to preview information from an event management system" border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPLgQpOKg6rl8fk8r3Mao8A-CIwc1NMy88PQC96dOUqfkTVd-trmSWke60LCJAhy0NwEH9ckN1dV5p3Ny7qgw-IFiOuyzm8dlZ3CnQXKrTSxCXP4tQWhQFb3jcHTtGaYTsvybu0xt-TTD3FFDsLbm_Z2vCsVvU9-sbp6yEEQYOcd9DMB1tue7cfBadOIY/s1035/image1.png" width="100%" /></div><imgcaption><center><em>Example of a smart chip that was created to preview information from an event management system</em></center></imgcaption></image>
<div style="text-align: left;"><br /></div>
<div style="text-align: left;"><br /></div>
<h2>Understanding how smart chips for third-party services work</h2>
<p>Third-party smart chips are powered by Google Workspace Add-ons and can be published to the Google Workspace Marketplace. From there, an admin or user can install the add-on, and it will appear in the sidebar on the right-hand side of Google Docs. </p>
<p>The Google Workspace Add-on detects a service's links and prompts Google Docs users to preview them. This means that you can create smart chips for any service that has a publicly accessible URL. You can configure an add-on to preview multiple URL patterns, such as links to support cases, sales leads, employee profiles, and more. This configuration is done in the add-on’s manifest file.</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;">{
<span style="color: #0d904f;">"timeZone"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"America/Los_Angeles"</span>,
<span style="color: #0d904f;">"exceptionLogging"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"STACKDRIVER"</span>,
<span style="color: #0d904f;">"runtimeVersion"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"V8"</span>,
<span style="color: #0d904f;">"oauthScopes"</span><span style="color: #666666;">:</span> [
<span style="color: #0d904f;">"https://www.googleapis.com/auth/workspace.linkpreview"</span>,
<span style="color: #0d904f;">"https://www.googleapis.com/auth/script.external_request"</span>
],
<span style="color: #0d904f;">"addOns"</span><span style="color: #666666;">:</span> {
<span style="color: #0d904f;">"common"</span><span style="color: #666666;">:</span> {
<span style="color: #0d904f;">"name"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"Preview Books Add-on"</span>,
<span style="color: #0d904f;">"logoUrl"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"https://developers.google.com/workspace/add-ons/images/library-icon.png"</span>,
<span style="color: #0d904f;">"layoutProperties"</span><span style="color: #666666;">:</span> {
<span style="color: #0d904f;">"primaryColor"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"#dd4b39"</span>
}
},
<span style="color: #0d904f;">"docs"</span><span style="color: #666666;">:</span> {
<span style="color: #0d904f;">"linkPreviewTriggers"</span><span style="color: #666666;">:</span> [
{
<span style="color: #0d904f;">"runFunction"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"bookLinkPreview"</span>,
<span style="color: #0d904f;">"patterns"</span><span style="color: #666666;">:</span> [
{
<span style="color: #0d904f;">"hostPattern"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"*.google.*"</span>,
<span style="color: #0d904f;">"pathPrefix"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"books"</span>
},
{
<span style="color: #0d904f;">"hostPattern"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"*.google.*"</span>,
<span style="color: #0d904f;">"pathPrefix"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"books/edition"</span>
}
],
<span style="color: #0d904f;">"labelText"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"Book"</span>,
<span style="color: #0d904f;">"logoUrl"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"https://developers.google.com/workspace/add-ons/images/book-icon.png"</span>,
<span style="color: #0d904f;">"localizedLabelText"</span><span style="color: #666666;">:</span> {
<span style="color: #0d904f;">"es"</span><span style="color: #666666;">:</span> <span style="color: #0d904f;">"Libros"</span>
}
}
]
}
}
}</span>
</pre></div>
<imgcaption><center><em>The manifest file contains the URL patterns for previewing Google Books links</em></center></imgcaption>
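<p>Together, <code>hostPattern</code> (which supports wildcards) and <code>pathPrefix</code> determine which pasted URLs trigger the preview. As a rough illustration only (this is not Google's actual matching implementation, and <code>matchesPattern</code> is a hypothetical helper), the patterns in the manifest above would behave roughly like this:</p>

```javascript
// Hypothetical sketch of how a linkPreviewTriggers pattern could match a
// pasted URL -- not Google's actual matching implementation.
function matchesPattern(url, hostPattern, pathPrefix) {
  const { hostname, pathname } = new URL(url);
  // Turn the wildcard host pattern (e.g. "*.google.*") into a regex.
  const escaped = hostPattern
    .split('*')
    .map(part => part.replace(/[.+?^${}()|[\]\\]/g, '\\$&'))
    .join('.*');
  const hostRegex = new RegExp('^' + escaped + '$');
  // Compare the path prefix without the leading slash.
  return hostRegex.test(hostname) && pathname.replace(/^\//, '').startsWith(pathPrefix);
}

// A Google Books link matches the "books" pattern from the manifest above:
matchesPattern('https://books.google.com/books/edition/_/abc', '*.google.*', 'books'); // true
// A link on another host does not:
matchesPattern('https://example.com/books/123', '*.google.*', 'books'); // false
```

<p>When a pasted link matches one of the configured patterns, Docs calls the trigger's <code>runFunction</code> to build the preview.</p>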
<div style="text-align: left;"><br /></div><p>The smart chip displays an icon and short title or description of the link's content. When the user hovers over the chip, they see a card interface that previews more information about the file or link. You can customize the card interface that appears when the user hovers over a smart chip. To create the card interface, you use widgets to display information about the link. You can also build actions that let users open the link or modify its contents. For a <a href="https://developers.google.com/apps-script/add-ons/editors/gsao/preview-links" target="_blank">list of all the supported components</a> for preview cards check the developer documentation.</p>
<div style="background: rgb(248, 248, 248); border: 0px; overflow: auto; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="font-family: courier;"><span style="color: green; font-weight: bold;">function</span> getBook(id) {
<span style="color: #408080; font-style: italic;">// Code to fetch the data from the Google Books API</span>
}
<span style="color: green; font-weight: bold;">function</span> bookLinkPreview(event) {
<span style="color: green; font-weight: bold;">if</span> (event.docs.matchedUrl.url) {
<span style="color: #408080; font-style: italic;">// Through getBook(id) the relevant data is fetched and used to build the smart chip and card</span>
<span style="color: green; font-weight: bold;">const</span> previewHeader <span style="color: #666666;">=</span> CardService.newCardHeader()
.setSubtitle(<span style="color: #ba2121;">'By '</span> <span style="color: #666666;">+</span> bookAuthors)
.setTitle(bookTitle);
<span style="color: green; font-weight: bold;">const</span> previewPages <span style="color: #666666;">=</span> CardService.newDecoratedText()
.setTopLabel(<span style="color: #ba2121;">'Page count'</span>)
.setText(bookPageCount);
<span style="color: green; font-weight: bold;">const</span> previewDescription <span style="color: #666666;">=</span> CardService.newDecoratedText()
.setTopLabel(<span style="color: #ba2121;">'About this book'</span>)
.setText(bookDescription).setWrapText(<span style="color: green; font-weight: bold;">true</span>);
<span style="color: green; font-weight: bold;">const</span> previewImage <span style="color: #666666;">=</span> CardService.newImage()
.setAltText(<span style="color: #ba2121;">'Image of book cover'</span>)
.setImageUrl(bookImage);
<span style="color: green; font-weight: bold;">const</span> buttonBook <span style="color: #666666;">=</span> CardService.newTextButton()
.setText(<span style="color: #ba2121;">'View book'</span>)
.setOpenLink(CardService.newOpenLink()
.setUrl(event.docs.matchedUrl.url));
<span style="color: green; font-weight: bold;">const</span> cardSectionBook <span style="color: #666666;">=</span> CardService.newCardSection()
.addWidget(previewImage)
.addWidget(previewPages)
.addWidget(CardService.newDivider())
.addWidget(previewDescription)
.addWidget(buttonBook);
<span style="color: green; font-weight: bold;">return</span> CardService.newCardBuilder()
.setHeader(previewHeader)
.addSection(cardSectionBook)
.build();
}
}</span>
</pre></div>
<imgcaption><center><em>This is the Apps Script code to create a smart chip.</em></center><center><em><br /></em></center></imgcaption>
<image><div style="text-align: center;"><img alt="A smart chip hovered state." border="0" id="imgCaption" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhG8vhgCBH-QzUfKG5ojqjvOxA0dSl7EijPq3s0Pgyaqt7zN3F54INIrfdBWTcoUlI3jl6dMWKDDBtnRGCVkIuen7VLq3jWaGqkIKMP2t5L0C_ukawpl6SZZGNVpoXc1vs9jpnjFZ26dj5gBCpNWqoyp8dEak_NCXcUhW3U_D8R7m4vGwWYSE6tY3cxJI/s1600/image2.png" width="50%" /></div><imgcaption><center><em>A smart chip hovered state. The data displayed is fetched from the Google for Developers blog post URL that was pasted by the user.</em></center></imgcaption></image>
<div style="text-align: left;"><br /></div>
<div style="text-align: left;"><br /></div>
<p>For a detailed walkthrough of the code used in this post, please check out the <a href="https://developers.google.com/apps-script/add-ons/preview-links-google-books" target="_blank">Preview links from Google Books with smart chips</a> sample tutorial.</p>
<div style="text-align: left;"><br /></div>
<div style="text-align: left;"><br /></div>
<h2>How to choose the technology for your add-on</h2>
<p>When creating smart chips for link previewing, you can choose between two technologies for your add-on: Google Apps Script or an alternate runtime. </p>
<p>Apps Script is a rapid application development platform that is built into Google Workspace. This makes Apps Script a good choice for prototyping and validating your smart chip solution, as it requires no pre-existing development environment. But Apps Script isn’t only for prototyping: some developers choose to create their Google Workspace Add-on with it and even publish it to the <a href="https://workspace.google.com/marketplace" target="_blank">Google Workspace Marketplace</a> for users to install. </p>
<p>If you want to create your smart chip with Apps Script, you can check out the video below, in which you'll learn how to build a smart chip for link previewing in Google Docs from A to Z. Want the code used in the video tutorial? Then have a look at the <a href="https://developers.google.com/apps-script/add-ons/preview-links-google-books" target="_blank">Preview links from Google Books with smart chips</a> sample page. </p>
<iframe allowfullscreen="" class="BLOG_video_class" height="355" src="https://www.youtube.com/embed/x3P4SCrRoKI" width="100%" youtube-src-id="x3P4SCrRoKI"></iframe>
<p>If you prefer to create your Google Workspace Add-on using your own development environment, programming language, hosting, packages, etc., then an alternate runtime is the right choice. You can choose from different programming languages like Node.js, Java, Python, and more. You can host the add-on runtime code on any cloud or on-premises infrastructure, as long as the runtime code can be exposed as a public HTTP(S) endpoint. You can learn more about how to create smart chips using alternate runtimes from the <a href="https://developers.google.com/apps-script/add-ons/editors/gsao/preview-links#node.js_2" target="_blank">developer documentation</a>.</p>
<div style="text-align: left;"><br /></div>
<div style="text-align: left;"><br /></div>
<h2>How to share your add-on with others</h2>
<p>You can share your add-on with others through the Google Workspace Marketplace. Let’s say you want to make your smart chip solution available to your team. In that case, you can publish the add-on to your Google Workspace organization, also known as a private app. On the other hand, if you want to share your add-on with anyone who has a Google Account, you can publish it as a public app.</p>
<p>To find out more about publishing to the Google Workspace Marketplace, you can watch <a href="https://youtu.be/OpGsJl6QQHQ" target="_blank">this video</a> that will walk you through the process.</p>
<div style="text-align: left;"><br /></div>
<div style="text-align: left;"><br /></div>
<h2>Getting started</h2>
<p>Learn more about creating smart chips for link previewing in the <a href="https://developers.google.com/apps-script/add-ons/editors/gsao/preview-links" target="_blank">developer documentation</a>. There you will find further information and code samples you can base your solution on. We can’t wait to see what smart chip solutions you will build. </p>