Making it Real with Machine Learning
I recently wrote a post on LinkedIn about the reality of Digital Transformation. While the response was great, one important piece of feedback was to make it more concrete with a business use case. Another was to go deeper into the details, specifically around applying machine learning in the real world. As a result, this post describes a classic business use case that I encounter in customer service scenarios with many of our clients.
One pattern I see with several clients is the arduous handling of voice and image data in a contact center environment. The use cases range from cataloging, agent training, text transcription, and speech analytics to image recognition, pattern matching, feature extraction, and sentiment analysis. In several client scenarios, I found that:
- The agent training process currently entails archiving scores of call recordings as raw audio. A team is then entrusted to listen to these recordings and note opportunities either to train the agents or to identify unprofessional behavior. There are many inherent problems with this: (a) it is only practical to spot-check randomly picked recordings, (b) the feedback has a lag time, and (c) it is subject to the interpretation of the different individuals reviewing the recordings.
- In today's digital environment, rich media such as files, images, and video is often uploaded when a service center is contacted. For example, as a consumer, I may upload pictures of my damaged car while talking to the agent initiating my insurance case. Today, this rich media is either examined manually or not examined at all.
The volume and variety of the rich content collected in these environments is a goldmine for analytics, not only to provide the best experience in serving the customer, but also to counter fraud and gain insight into customer behavior in general.
As a solution to these problems, I am sharing some quick-and-dirty prototype code that I put together in Python using the Google Machine Learning APIs. I created it to help our clients better understand the benefits and value in their own context. All the code is available on my GitHub. In addition to the code, you can watch a short companion YouTube video if you want to see it in action along with a code walkthrough.
The following three examples in Python show how the Google Cloud Machine Learning APIs can be used in a contact center environment to extract information from rich media and run analytics on it, unlocking insights and value previously out of reach.
- Transcribe call recording audio to text rather than having teams take notes on raw audio files; the transcripts can then be archived and used to train agents. The first sample demonstrates transcribing an audio recording file in a few seconds, without any hardware or software infrastructure, entirely in Google Cloud using the Google ML Speech API.
- To automate and speed up the processing of a consumer's rich media files, it is important to extract the information without manual intervention. This is where the Google ML Vision API comes in: the second example demonstrates extracting text from an image of a handwritten note, and recognizing and extracting features of an image of an object, a car in this case.
- The most important aspect is being able to run analytics on all of this information. By doing so on non-textual rich media such as audio and images, businesses can unlock value that was previously out of reach. Once contact center media such as call audio and uploaded images are converted to text as shown above, sentiment analysis can be run on them to give a complete picture. The third example shows how the Google Natural Language API can be used not only to get a sentiment score, but also to extract entities and enrich them with metadata. Two text blocks are included in the code to demonstrate both positive and negative sentiment.
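To make the first bullet concrete, here is a minimal sketch (not the actual sample from my GitHub) of transcribing a call recording with the Cloud Speech-to-Text client library. It assumes `google-cloud-speech` is installed, `GOOGLE_APPLICATION_CREDENTIALS` points at a service-account key, and the `gs://` bucket path is a hypothetical placeholder.

```python
def keep_confident(segments, min_confidence=0.8):
    """Join (transcript, confidence) segments that meet a confidence floor,
    so noisy, low-quality audio does not pollute the archived transcript.
    The 0.8 floor is an illustrative choice, not an API default."""
    return " ".join(text for text, conf in segments if conf >= min_confidence)


def transcribe_call(gcs_uri):
    """Transcribe a call recording stored in a Cloud Storage bucket."""
    from google.cloud import speech  # pip install google-cloud-speech

    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(uri=gcs_uri)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=8000,  # typical telephony sample rate
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    return keep_confident(
        (r.alternatives[0].transcript, r.alternatives[0].confidence)
        for r in response.results
        if r.alternatives
    )
```

For recordings longer than about a minute, `long_running_recognize` would be the appropriate call instead of `recognize`; the short synchronous form keeps the sketch simple.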
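The second bullet's two Vision API calls, reading a handwritten note and labeling an uploaded object photo, might look roughly like this. Again a sketch under assumptions: `google-cloud-vision` is installed, credentials are configured, and the image paths are placeholders supplied by the caller.

```python
def top_labels(annotations, min_score=0.7):
    """Filter (description, score) label pairs down to the confident ones,
    e.g. to tag an uploaded photo as 'car' or 'bumper'. The 0.7 cutoff
    is an illustrative choice."""
    return [desc for desc, score in annotations if score >= min_score]


def read_handwritten_note(image_path):
    """Extract text from an image of a handwritten note."""
    from google.cloud import vision  # pip install google-cloud-vision

    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)
    return response.full_text_annotation.text


def describe_uploaded_image(image_path):
    """Recognize features of an uploaded object photo, such as a damaged car."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return top_labels((l.description, l.score) for l in response.label_annotations)
```

`document_text_detection` is the variant tuned for dense and handwritten text; plain `text_detection` works better for sparse text such as signs or license plates.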
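And the third bullet, sentiment plus entity extraction over the transcribed text, can be sketched with the Natural Language client library as follows. It assumes `google-cloud-language` is installed with credentials configured; the positive/negative bucketing helper and its threshold are my own illustrative additions, not part of the API.

```python
def label_sentiment(score, threshold=0.25):
    """Bucket a document sentiment score in [-1.0, 1.0] into a coarse label.
    The threshold is an illustrative choice, not an API default."""
    if score >= threshold:
        return "positive"
    if score <= -threshold:
        return "negative"
    return "neutral"


def analyze_feedback(text):
    """Score sentiment and extract entities from transcribed customer text."""
    from google.cloud import language_v1  # pip install google-cloud-language

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(document=document).document_sentiment
    entities = client.analyze_entities(document=document).entities
    return {
        "label": label_sentiment(sentiment.score),
        "score": sentiment.score,
        "magnitude": sentiment.magnitude,
        "entities": [(e.name, e.type_.name) for e in entities],
    }
```

Running `analyze_feedback` on both a complaint and a compliment, as the two text blocks in my sample do, shows the score swinging negative and positive while the entities (people, organizations, products) come back with type metadata either way.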
I hope this helps contextualize these emerging technology constructs in a business setting. As always, having been a practitioner leader for several years, passionate about technology and engineering talent transformation, I am eager to engage in dialog and hear other experiences and viewpoints.
Anil is a seasoned technology and business leader who started his career as a Software Engineer in semiconductor design software. True to his Silicon Valley roots, he worked at a couple of startups and then started his own, doing turnkey projects for companies such as HP, Fujitsu, and Siemens. In the early 2000s, he was part of a young, budding eBay, where he was instrumental in engineering the next-generation payment system from the ground up.
As Vice President at Visa, Anil has been instrumental in leading the development of ApplePay, AndroidPay, SamsungPay, Visa Checkout, Tokenization, and Visa Direct. He is equally passionate about people engagement and transformation. At Altimetrik, his focus is Digital and Talent Transformation, both with clients and internally.
Anil is always open to engaging in dialog and hearing other experiences and viewpoints.
Connect with him on LinkedIn.