Google I/O 2018 Keynote and Highlights

Google I/O is being held from Tuesday, May 8, through Thursday, May 10, at the Shoreline Amphitheater in Mountain View, California.

AI is working hard across Google products to save you time. One of the best examples is the new Smart Compose feature in Gmail: by understanding the context of an email, it suggests phrases to help you write quickly and efficiently. In Photos, smart inline suggestions make it easy to share a photo instantly. Google is also rolling out new features that let you quickly brighten a photo, give it a color pop, or even colorize old black-and-white pictures.
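As a rough illustration of the idea, and not Gmail's actual model (which is a neural language model trained on real mail), here is a toy Kotlin sketch that suggests a continuation for the words just typed, based on phrases seen in earlier emails. The `PhraseSuggester` class and its behavior are invented for this example:
```kotlin
// Toy context-aware phrase suggestion: learn which phrases tend to follow
// a two-word prefix in past emails, then propose the most frequent one.
class PhraseSuggester {
    // Maps a two-word prefix to continuations and how often each was seen.
    private val continuations = mutableMapOf<String, MutableMap<String, Int>>()

    // Train on a finished email: record the phrase that followed each prefix.
    fun learn(sentence: String) {
        val words = sentence.lowercase().split(" ").filter { it.isNotBlank() }
        for (i in 0 until words.size - 2) {
            val prefix = "${words[i]} ${words[i + 1]}"
            val next = words.subList(i + 2, minOf(i + 5, words.size)).joinToString(" ")
            val counts = continuations.getOrPut(prefix) { mutableMapOf() }
            counts[next] = (counts[next] ?: 0) + 1
        }
    }

    // Suggest the most frequent continuation for what the user just typed.
    fun suggest(typed: String): String? {
        val words = typed.lowercase().split(" ").filter { it.isNotBlank() }
        if (words.size < 2) return null
        val prefix = "${words[words.size - 2]} ${words[words.size - 1]}"
        return continuations[prefix]?.maxByOrNull { it.value }?.key
    }
}

fun main() {
    val suggester = PhraseSuggester()
    suggester.learn("Looking forward to seeing you next week")
    suggester.learn("Looking forward to seeing you at the office")
    suggester.learn("Looking forward to the demo")
    println(suggester.suggest("I am looking forward")) // -> "to seeing you"
}
```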
Google will make the Google Assistant more visual, more naturally conversational, and more helpful. You’ll soon be able to have a natural back-and-forth conversation with the Assistant without repeating “Hey Google” for each follow-up request. Google is also adding half a dozen new voices to personalize your Assistant, plus one very recognizable one: John Legend’s.
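Conceptually, continued conversation just keeps the microphone open for a short follow-up window after each answer. The sketch below is a toy state machine built on assumptions; the states, the 8-second window, and all names are invented, not the Assistant's real design:
```kotlin
enum class AssistantState { IDLE, LISTENING, RESPONDING, FOLLOW_UP_WINDOW }

class ConversationSession(private val followUpWindowMs: Long = 8_000L) {
    var state = AssistantState.IDLE
        private set
    private var windowOpenedAt = 0L

    fun onHotword() { state = AssistantState.LISTENING }        // "Hey Google" heard
    fun onQueryFinished() { state = AssistantState.RESPONDING } // answering the request

    // After responding, keep the mic open for follow-ups instead of going idle.
    fun onResponseDone(now: Long) {
        state = AssistantState.FOLLOW_UP_WINDOW
        windowOpenedAt = now
    }

    // Speech inside the window is treated as a follow-up; no hotword needed.
    fun onSpeech(now: Long): Boolean =
        if (state == AssistantState.FOLLOW_UP_WINDOW && now - windowOpenedAt <= followUpWindowMs) {
            state = AssistantState.LISTENING
            true
        } else {
            false // window closed or never opened: the hotword is required again
        }
}

fun main() {
    val session = ConversationSession()
    session.onHotword(); session.onQueryFinished(); session.onResponseDone(now = 0L)
    println(session.onSpeech(now = 3_000L))  // true: follow-up accepted without "Hey Google"
    session.onQueryFinished(); session.onResponseDone(now = 4_000L)
    println(session.onSpeech(now = 20_000L)) // false: the follow-up window has lapsed
}
```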
Google is also working on a new system called Duplex, which brings together many of Google’s AI efforts. In a demo at I/O, Google showed how its AI voice could phone a hair salon or a restaurant and book an appointment. Duplex is still at an experimental stage.
Google is also rolling out an all-new Google News, which uses AI to organize what’s happening in the world and help you learn more about the stories that matter to you.

Google is unveiling a beta version of Android P, the next release of Android. Android P makes your smartphone smarter, helping it learn from and adapt to you. Take battery life, for instance: most of us wish we had more of it. For Android P, Google partnered with DeepMind to build Adaptive Battery, which prioritizes battery power for the apps and services you use the most, to help you squeeze the most out of a charge. Google also used machine learning to create Adaptive Brightness, which learns how you like to set the brightness slider given your surroundings.
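To make the Adaptive Brightness idea concrete, here is a minimal Kotlin sketch that assumes nothing about the real DeepMind-built model: it simply remembers the slider positions the user chose at various ambient light levels, then predicts a personalized brightness by weighting nearby samples more heavily:
```kotlin
import kotlin.math.abs

class PersonalBrightnessModel {
    private val samples = mutableListOf<Pair<Float, Float>>() // (ambient lux, brightness 0..1)

    // Called whenever the user manually adjusts the slider.
    fun recordAdjustment(ambientLux: Float, brightness: Float) {
        samples += ambientLux to brightness
    }

    // Predict brightness for the current light level: a distance-weighted
    // average over past samples, so nearby lux readings dominate.
    fun predict(ambientLux: Float, defaultBrightness: Float = 0.5f): Float {
        if (samples.isEmpty()) return defaultBrightness
        var weightSum = 0.0
        var value = 0.0
        for ((lux, b) in samples) {
            val w = 1.0 / (1.0 + abs(lux - ambientLux))
            weightSum += w
            value += w * b
        }
        return (value / weightSum).toFloat()
    }
}

fun main() {
    val model = PersonalBrightnessModel()
    model.recordAdjustment(ambientLux = 10f, brightness = 0.15f)   // dim room: user likes it low
    model.recordAdjustment(ambientLux = 5_000f, brightness = 0.9f) // sunlight: near maximum
    println(model.predict(ambientLux = 20f))    // close to 0.15
    println(model.predict(ambientLux = 4_000f)) // close to 0.9
}
```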
Across the platform, your phone will help you better navigate your day, using context to give you smart suggestions based on what you like to do most and to anticipate your next action. App Actions, for instance, help you get to your next task more quickly by predicting what you want to do next. Say you connect your headphones to your device; Android will surface an action to resume your favorite Spotify playlist. Actions show up throughout Android in places like the Launcher, Smart Text Selection, the Play Store, the Google Search app, and the Assistant.
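A toy version of that prediction idea: rank candidate actions by how often the user performed them under the same context signal. The event and action names below are made up, and the real system is an on-device learned model rather than simple counting:
```kotlin
class ActionPredictor {
    // (context event, action) -> how often the user did that action in that context
    private val history = mutableMapOf<Pair<String, String>, Int>()

    fun recordUse(contextEvent: String, action: String) {
        val key = contextEvent to action
        history[key] = (history[key] ?: 0) + 1
    }

    // Surface the top actions for this context, most frequent first.
    fun predict(contextEvent: String, limit: Int = 2): List<String> =
        history.entries
            .filter { it.key.first == contextEvent }
            .sortedByDescending { it.value }
            .take(limit)
            .map { it.key.second }
}

fun main() {
    val predictor = ActionPredictor()
    repeat(5) { predictor.recordUse("headphones_connected", "resume_spotify_playlist") }
    repeat(2) { predictor.recordUse("headphones_connected", "start_podcast_app") }
    println(predictor.predict("headphones_connected"))
    // -> [resume_spotify_playlist, start_podcast_app]
}
```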


Today at Google I/O, Google announced that Lens will now be available directly in the camera app on supported devices from LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus, and of course the Google Pixel. Google announced three updates that enable Lens to answer more questions, about more things, more quickly:
1. Smart text selection connects the words you see with the answers and actions you need. You can copy and paste text from the real world, like recipes, gift card codes, or Wi-Fi passwords, to your phone. Lens also helps you make sense of a page of words by showing you relevant information and photos. Say you’re at a restaurant and see the name of a dish you don’t recognize; Lens will show you a picture to give you a better idea. This requires recognizing not just the shapes of letters but also the meaning and context behind the words, which is where Google’s years of language understanding in Search help. (A toy version of the selection step is sketched after this list.)
2. Sometimes your question is not “what is that exact thing?” but “what are things like it?” Now, with style match, if an outfit or home decor item catches your eye, you can open Lens and not only get info on that specific item, such as reviews, but also see things in a similar style that fit the look you like. (A similarity-search sketch follows this list.)
3. Lens now works in real time. It proactively surfaces information instantly and anchors it to the things you see, so you can browse the world around you just by pointing your camera. This is only possible with state-of-the-art machine learning, using both on-device intelligence and Cloud TPUs, to identify billions of words, phrases, places, and things in a split second.
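Here is a hypothetical sketch of the smart text selection step from item 1: given OCR output with bounding boxes, pick the block under the user's tap and guess a fitting action. The `OcrBlock` type, the regexes, and the action names are all illustrative; Lens's real pipeline is far more sophisticated:
```kotlin
data class OcrBlock(val text: String, val left: Int, val top: Int, val right: Int, val bottom: Int)

// Find the recognized text block that contains the tap coordinates, if any.
fun blockAt(blocks: List<OcrBlock>, x: Int, y: Int): OcrBlock? =
    blocks.firstOrNull { x in it.left..it.right && y in it.top..it.bottom }

// Map recognized text to a likely action, e.g. join Wi-Fi vs. plain copy.
fun suggestAction(block: OcrBlock): String = when {
    Regex("""\bWPA2?:\s*\S+""").containsMatchIn(block.text) -> "join_wifi"
    Regex("""\b[A-Z0-9]{4}(-[A-Z0-9]{4}){2,}\b""").containsMatchIn(block.text) -> "redeem_gift_card"
    else -> "copy_text"
}

fun main() {
    val blocks = listOf(
        OcrBlock("Guest Wi-Fi  WPA2: sunflower42", left = 10, top = 10, right = 300, bottom = 60),
        OcrBlock("Open daily 9am-5pm", left = 10, top = 80, right = 300, bottom = 120),
    )
    val tapped = blockAt(blocks, x = 120, y = 30) // user taps the Wi-Fi line
    println(tapped?.let(::suggestAction))         // -> join_wifi
}
```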
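And a toy sketch of the retrieval behind style match from item 2: represent each item as a feature vector (a real system would get these embeddings from a trained vision model) and rank a catalog by cosine similarity to the item in view. The catalog, vectors, and function names are invented:
```kotlin
import kotlin.math.sqrt

fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (sqrt(na) * sqrt(nb))
}

// Rank catalog items by visual similarity to the query embedding.
fun styleMatch(query: FloatArray, catalog: Map<String, FloatArray>, topK: Int = 3): List<String> =
    catalog.entries
        .sortedByDescending { cosineSimilarity(query, it.value) }
        .take(topK)
        .map { it.key }

fun main() {
    // Pretend embeddings; in practice these come from an image model.
    val catalog = mapOf(
        "floral summer dress" to floatArrayOf(0.9f, 0.1f, 0.2f),
        "striped office shirt" to floatArrayOf(0.1f, 0.9f, 0.3f),
        "floral maxi dress" to floatArrayOf(0.85f, 0.15f, 0.25f),
    )
    val cameraItem = floatArrayOf(0.88f, 0.12f, 0.22f) // embedding of the outfit in view
    println(styleMatch(cameraItem, catalog, topK = 2)) // the two floral dresses
}
```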



The redesigned Explore tab will be your hub for everything new and interesting nearby. When you check out a particular area on the map, you’ll see dining, event, and activity options based on the area you’re looking at. Top trending lists like the Foodie List show you where the tastemakers are going, and help you find new restaurants based on information from local experts, Google’s algorithms, and trusted publishers like The Infatuation and others.
Google also previewed an augmented reality mode for Google Maps that combines the camera view with Street View data to overlay directions on the world in front of you. It is powered by a visual positioning system (VPS) that recognizes the buildings and landmarks around you.
AI Chips
Google announced its new AI processor: the Tensor Processing Unit 3.0. This kind of processor allows the firm to process the large amounts of data needed for machine learning and AI. “These chips are so powerful we’ve had to introduce liquid cooling in our data centers,” Pichai said.
