#Digital
2018.06.29

Edition No. 20 The Future and Mobile Changed by Google

Columnist Nam Jaehyun
Amorepacific Digital IT Innovation Team


 Since Google released its mobile operating system Android in 2007, it has kept its mobile apps at the center of its service platforms and the Android ecosystem under the goal of 'mobile first'. At Google I/O 2017, Google announced that it would move from 'mobile first' to 'AI first', and this year it announced that it would provide mobile app services to achieve 'AI for everyone'. These announcements can be read as Google's intent to focus its R&D on core AI technologies and to apply AI in its mobile apps, delivering new value and better user experiences to as many users as possible.

 Through roughly a decade of continuous growth, Android led the shift from the personal computer (PC) era to the mobile era. As of 2017, the number of Android smartphones in use around the world had surpassed 2 billion, and Google's core mobile apps – the Play Store, Chrome, YouTube, Google Maps, Gmail and others – each had more than 1 billion monthly active users. Over that decade Google also stabilized the performance of the Android platform and reworked its user experience (UX) to improve convenience. Despite these changes, most apps still offer services built around multi-touch input, whether because of the fundamental limitations of touchscreen-based smartphones, the absence of a newly designed interface, or the difficulty of applying a new interface in apps when the underlying technology is not yet ready.

 With the recent development of AI technologies, however, user input methods are evolving. New types of services are being offered based on voice and vision (image and video) input, and both voice and vision technologies are now mature enough to be used. As of 2017, the word error rate of Google's speech recognition was down to 4.9%, while the error rate of its image recognition – for example, classifying an image as a dog or a cat – had fallen below roughly 5%, the error level of humans.
  • "Error rates of speech and image recognition",
    Source : Google, 2017. 05
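
 The word error rate cited above is typically computed as the word-level edit distance between the recognized transcript and a reference transcript, divided by the number of reference words. The short Python sketch below is only a minimal worked example of that calculation; the function name is my own and is not taken from Google's tooling.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of six gives a WER of about 16.7%.
print(word_error_rate("please book a table for two",
                      "please book a table for too"))
```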

 This column looks at the services Google provides through its mobile apps as the interface evolves to give users new experiences and value.

# Google Gmail : "AI suggesting personalized wording according to individual situation"

 It seems that the touchscreen as an input device won't be replaced any time soon. Instead, Google is strengthening user interaction by applying AI so that users can enter text more conveniently. This is well illustrated by Gmail's Smart Reply and Smart Compose.
  • "Smart Reply vs. Smart Compose",
    Source : Google AI Blog

 Both features use AI to suggest what to write in reply to a received email. Smart Reply lets the user choose from a few suggested sentences, so a reply can be sent in a single click. Smart Compose, on the other hand, completes the rest of the sentence in grey as the user types, and the user can accept the suggestion by pressing the Tab key.

 AI learns the expressions and sentences the user writes most often and suggests them, letting the user pick from the suggestions. This reduces spelling and grammatical mistakes and saves time composing a reply.
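
 As a rough illustration of the idea, here is a toy Python sketch that suggests a completion for the phrase a user is typing, based on sentences the same user has written before. It is only a frequency-based stand-in for the neural language model Google actually uses; the class and method names are hypothetical.

```python
from collections import Counter
from typing import Iterable, Optional

class ToyComposer:
    """Suggests sentence completions from a user's previously written sentences."""

    def __init__(self, past_sentences: Iterable[str]):
        self.history = Counter(s.strip().lower() for s in past_sentences)

    def suggest(self, typed_prefix: str) -> Optional[str]:
        """Return the rest of the most frequent past sentence matching the typed text."""
        prefix = typed_prefix.strip().lower()
        candidates = [(count, s) for s, count in self.history.items()
                      if s.startswith(prefix) and s != prefix]
        if not candidates:
            return None
        _, best = max(candidates)
        # Only the part the user has not typed yet would be shown in grey.
        return best[len(prefix):]

composer = ToyComposer([
    "thanks for the update, talk to you soon",
    "thanks for the update, talk to you soon",
    "thanks for sending the report",
])
print(composer.suggest("Thanks for the up"))  # -> "date, talk to you soon"
```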

# Google Duplex : "AI scheduling reservations by making the call for you"

 Have you ever imagined AI calling a hair salon or a restaurant on your behalf to make a reservation? At Google I/O 2018, Google previewed Google Duplex for the first time: a service that uses natural conversation technology to carry out specific tasks such as calling to make reservations or schedule appointments.
  • Source : https://www.youtube.com/watch?v=lXUQ-DdSDoE

 Google Duplex was developed to handle reservations over the phone, based on the observation that 60% of small businesses in the U.S. do not have an online reservation system. It has the potential to benefit both sides: users save time by having Google Duplex call and make reservations for them, while businesses can reduce the cost of handling those calls with staff.

 Google Duplex combines voice technologies – Automatic Speech Recognition (ASR), which converts spoken words into text, and Text-to-Speech (TTS), which converts text into spoken words – with deep learning that analyzes intent. (Further detail on these technologies can be found on the Google AI Blog.) Hundreds of thousands of phone conversations about making reservations were used to train the system to hold a natural conversation.

 Incoming speech is converted to text by the ASR system, a reply with the right intent is generated by a deep learning model that takes the previous context into account, and the reply is then converted back into speech by TTS. To make the conversation sound more natural, speech disfluencies such as 'hmm' are inserted along the way, and a timing function adjusts how quickly the system responds.
  • Source : Google AI Blog
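
 The flow described above can be summarized as a simple pipeline: speech in, text via ASR, a reply decided against the conversation context, then speech out via TTS with a disfluency and a short pause for natural timing. The Python sketch below is only a schematic of that flow under those assumptions; the ASR, intent and TTS parts are toy stand-ins, and every function name is hypothetical rather than part of Google Duplex.

```python
import random
import time
from typing import List

def recognize_speech(audio: bytes) -> str:
    """Toy ASR stand-in: a real system would transcribe the other party's audio."""
    return "sure, for how many people and what time"

def decide_reply(utterance: str, context: List[str]) -> str:
    """Toy intent handling: choose a reply from keywords, keeping prior turns."""
    context.append(utterance)
    if "how many" in utterance or "time" in utterance:
        return "A table for two at 7 pm, please."
    return "Sorry, could you say that again?"

def speak(text: str) -> None:
    """Toy TTS stand-in: add a disfluency and a short pause for natural timing."""
    time.sleep(random.uniform(0.3, 0.8))   # response-timing function
    filler = random.choice(["", "Hmm, ", "Uh, "])
    print(filler + text)                   # a real system would synthesize audio

def handle_turn(audio: bytes, context: List[str]) -> None:
    """One conversational turn: listen (ASR), decide the reply, speak (TTS)."""
    speak(decide_reply(recognize_speech(audio), context))

handle_turn(b"<restaurant staff audio>", context=[])
```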

 The technology released so far is limited to making restaurant reservations and booking hair salon appointments, but it has the potential to expand into many areas of our lives. Won't we soon see the day when Google Duplex handles specific tasks for us, such as answering customers' questions or processing cancellations of products they ordered?

# Google Lens : "AI delivering information immediately by recognizing images captured by the camera"

 While AI in speech recognition is moving toward handling specific everyday tasks through natural conversation, image recognition is moving toward recognizing what the camera captures and letting users search for information or connect those images to services such as shopping. The technology is useful when users come across an object they want to know more about but can't search for it because they lack the right keywords.
  • "Providing various services through image recognition"
    Source : Google Play Store, Google Lens app

 When you point the phone camera at an object, Google Lens shows the name and details of the plant or animal you captured, or surfaces ratings, business hours and historical facts about popular places. If you capture a product such as clothing or furniture, Google Lens searches for similar products and connects you to shopping. It also offers Smart Text Selection, which lets users select and extract text from the camera image and save it to a memo.
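
 In code terms, this behaves like recognizing what is in the camera frame and routing the result to the right service: an encyclopedia lookup for plants and animals, place details for landmarks, a product search for shopping, and text extraction for notes. The Python sketch below only illustrates that routing idea; the recognizer and every function in it are hypothetical placeholders, not Google Lens APIs.

```python
def recognize(image: bytes) -> dict:
    """Hypothetical recognizer: a real one would return detected labels and any text."""
    return {"category": "product", "label": "armchair", "text": None}

def route(result: dict) -> str:
    """Route the recognition result to the service that matches its category."""
    category, label = result["category"], result["label"]
    if category in ("plant", "animal"):
        return f"Show the name and details of the {label}."
    if category == "place":
        return f"Show ratings, business hours and history for {label}."
    if category == "product":
        return f"Search shopping services for items similar to the {label}."
    if category == "text":
        return f"Offer to copy the extracted text: {result['text']!r}"
    return "Nothing recognized; keep scanning."

print(route(recognize(b"<camera frame>")))
# -> Search shopping services for items similar to the armchair.
```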

 Android phone users can enjoy many of Google Lens' features through the camera app already installed on their devices, and Google Lens also supports third-party camera apps – apparently part of Google's strategy to expand its AI ecosystem. Google is already collaborating with LG Electronics in Korea and with Sony, Xiaomi and Nokia worldwide.

# Epilogue

 Google unveiled Google Assistant, which uses a speech recognition interface, and Google Duplex, an extension of those capabilities, to ease situations where users cannot touch the smartphone screen. It also released Google Lens, which uses a vision interface to understand the objects and situations in front of our eyes in the real world. AI is at the core of these services, and Google keeps advancing it by learning from data such as text, speech and images.

 Even now, Google is identifying inconveniences in our lives and building services that apply AI to solve them across many areas. In the medical field, an image of a patient's eyeground can be used to predict the risk of a heart attack or stroke with high accuracy and to recommend regular checkups. Google's technology can also power a service for people with hearing impairments that separates the speech of each speaker on the TV screen and shows individual subtitles when several people talk at once. In these ways, Google continues to solve the problems users face in their lives.

 Can you imagine a future changed by AI and Google, one that improves the inconveniences of our everyday lives one problem at a time?


