Every year, technology enthusiasts, computer scientists, developers, and anyone remotely involved in the tech industry eagerly await Google I/O to learn what the Mountain View giant has been brewing over the past year.
Like previous editions, this year's annual developer conference brought several key announcements. Google introduced some of its most innovative updates to existing products, applying groundbreaking research in Artificial Intelligence (AI), Machine Learning (ML), and data science across Search, Pixel, Assistant, and other products that touch everything from your job to your home and many other aspects of daily life.
Let’s take a look at the latest updates Google revealed during I/O 2019:
Google Search Updates
Full Coverage, a feature launched in Google News last year, is now part of overall Google Search, better organising news-related topics. Now, when you search for something, you’ll also see a timeline of events for that topic. For instance, if you search for information about black holes, you’ll see the complete story timeline.
Podcasts, one of the most popular mediums for content consumption, are also getting an important upgrade. When you search for a specific topic, podcasts associated with it will appear directly in the search results, and you can tap to listen to them right there or save them for later.
Google is also bringing visual information directly into Search, recognising that seeing is often understanding. It is adding a new dimension to search by bringing the camera into it. For instance, if you are a medical student searching for muscle flexion, you can see a 3D model of it and place it in your own space, bringing camera and Augmented Reality (AR) capabilities to Google Search.
Google Lens Update
Google Lens has already been used over a billion times by people asking instantaneous questions about whatever they see. It identifies objects using image recognition technology and is essentially indexing the physical world.
As per the latest update, which will roll out later this year, Google has found new ways to make Lens more helpful by integrating it with Google Maps. For instance, if you are dining at a restaurant and want to know more about the dishes it serves, you can simply point your camera at the menu: Lens will automatically highlight the restaurant’s most popular dishes on the menu itself, and you can tap a particular dish to see what it looks like.
Google Lens can also help you pay for your meal, calculating the tip and splitting the total when you point the camera at the bill.
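The arithmetic behind this feature is straightforward. A minimal sketch in Python (the function name and inputs are purely illustrative, not Google's implementation):

```python
def split_bill(bill_total: float, tip_percent: float, people: int) -> float:
    """Add a percentage tip to the bill and split the total evenly."""
    total_with_tip = bill_total * (1 + tip_percent / 100)
    return round(total_with_tip / people, 2)

# A 42.00 bill with an 18% tip, split three ways:
share = split_bill(42.00, 18, 3)  # 42 * 1.18 = 49.56; 49.56 / 3 = 16.52
```

Lens layers this simple calculation on top of the harder problem: reading the total off the receipt with computer vision.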
Another important update, and one that will definitely make lives easier, is the launch of a new camera in Google Go, the search app for entry-level devices. The new camera capabilities make it possible for millions of people who cannot read to have the text around them read aloud, using text-to-speech and computer vision. The feature can also translate text into one’s native language for better understanding. Right now, it works in more than a dozen languages.
Google Duplex Update
Google Duplex launched as a voice assistant trained on simple, time-saving tasks, such as booking restaurant reservations over the phone. Now, Google is extending Duplex beyond voice to web-based tasks as well.
Expanding beyond restaurant reservations, Duplex will now also be able to book your movie tickets and car rentals. It aims to take over the tedious task of filling out multiple forms in a long workflow, enhancing the user experience and helping businesses retain customers through fast, efficient service.
Duplex works in integration with the Google Assistant. For instance, if you have an upcoming trip, you can simply ask your Assistant to book a car rental, and it will automatically go to your preferred website and fill out the forms on your behalf, pulling information from your trip confirmation in Gmail or Calendar. You always remain in control of the flow: the Assistant saves you time by making selections on your behalf and only asking you to confirm, offering an impressive level of personalisation.
The New Google Assistant
The Assistant uses complex algorithms and multiple machine learning models to process speech, requiring a massive 100 GB of storage and a network connection. As Sundar Pichai said during his talk, ‘bringing these models to your phone - bringing the power of a Google data centre in your pocket is an incredibly challenging computer science problem’.
The new Google Assistant has changed the game, though. Through extensive research in deep learning, these 100 GB models have been downsized to a meagre 0.5 GB, a massive milestone in terms of bringing advanced tech to smartphones. This means that the new Google Assistant is incredibly fast and smooth in its performance.
The new-generation Assistant responds to requests in real time and delivers results up to 10 times faster; typing on your phone might actually feel slower in comparison. The Assistant is integrated across apps: you can send messages, reply to emails, and even share pictures with a simple voice command. Alarms can be stopped just by saying ‘Stop’ instead of tapping the phone - whether that is a bane or a boon is for you to decide!
The new Assistant also multi-tasks across different apps, saving time and effort by handling complex speech scenarios and personalising the experience to each individual’s preferences, with features like Picks for You and Personal References. The update will launch on the new Pixel phones later this year.
Privacy Updates
Google has not been very popular among users when it comes to data privacy. With new updates across devices and apps, Google is endeavouring to rectify that. One such update is an incognito mode in Google Maps, which stops location tracking and keeps the user’s geographical timeline private.
Auto-deletion of saved data will also be introduced across Search, Photos, Maps, and other apps, meaning users will be able to delete old data from their privacy settings. Google is also enhancing security at the account sign-in level, making two-step verification more convenient by building the protection of security keys directly into Android devices. This means users can now sign in with just a tap.
The company is also applying its AI research to improve data privacy, not just to build products. For instance, Federated Learning, a new approach to ML developed by Google, allows products to improve without collecting raw data from your device. How? Instead of sending your data to the cloud, Google brings the ML models to your device, trains them locally, and sends back only aggregated model updates.
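The core idea can be illustrated with a toy federated-averaging round. This is a simplified sketch, not Google's implementation: each device runs a training step on its own private data, and only the resulting model weights (never the raw samples) are sent back and averaged.

```python
import random

def local_update(w, data, lr=0.1):
    """One gradient-descent step on a device's private data, fitting y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(global_w, devices):
    """Each device trains locally; only the updated weights are averaged.
    The raw (x, y) pairs never leave the devices."""
    updates = [local_update(global_w, data) for data in devices]
    return sum(updates) / len(updates)

# Toy example: two devices, each holding private samples of y = 2x.
random.seed(0)
devices = [
    [(x := random.uniform(-1, 1), 2 * x) for _ in range(20)]
    for _ in range(2)
]

w = 0.0
for _ in range(100):
    w = federated_average(w, devices)
# After enough rounds, w converges towards the true slope of 2.0,
# even though the server only ever saw averaged weights.
```

Production systems add further protections on top of this basic scheme, such as secure aggregation, so the server cannot inspect any single device's update.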
Google is fast moving from a company that helps you find answers to one that helps you get things done, through pathbreaking innovations in AI, machine learning, and data science. The updates are not limited to specific Google apps but extend to hardware, such as the Google Nest Hub and Pixel 3A, both launched during the annual keynote.
Given this remarkable use of technology by the industry leader, it is clear that almost every advancement in today’s digital world revolves around data. For anyone still on the fence about the relevance and scope of data science as a career or field to branch into, this is your cue!