Big Data, Speech Recognition and Artificial Intelligence

In a recent interview with Business Insider, Paul Krugman mentioned driverless cars and speech recognition as examples of technologies that use “big data” to accomplish something that we previously thought required human intelligence.

The science of making “intelligent machines”, or Artificial Intelligence (AI), has been a favorite topic of science fiction movies, but there are applications we use every day that fit that definition. Siri, the voice-activated personal assistant launched with the iPhone 4S, is a good example. The technology that allows computers to understand human speech took decades to develop, yet in a very short period of time Apple was able to “mainstream” an AI mobile app that changes the way we interact with machines.

What will be the next AI application to go mainstream? Rosalind Picard, director of the Affective Computing research group at the MIT Media Lab, believes that teaching computers to understand emotion could be the next step in making “intelligent machines”.

In the context of the enterprise, emotion adds a dimension to human–computer interaction (HCI) that has many practical applications, as shown by companies like AFFECTIVA and Sociometric Solutions. AFFECTIVA, the company co-founded by Rosalind Picard, has already launched a product that uses emotion recognition for marketing research. Sociometric Solutions, another startup that came out of the MIT Media Lab, is using social sensing technology for organizational design.

I don’t expect focus groups to go away anytime soon, or computers to take over decision-making in organizations. However, emotion recognition will change the way we interact with computers. At EMOSpeech, we are developing applications that use emotion recognition to automate and improve the call monitoring function at call centers. This is a time-consuming process that, until now, could only be done with human intervention. We are currently beta testing our first product and will be able to share results very soon. Stay tuned!


How to tell if you really like that Doritos Goat 4 Sale commercial

In the context of enterprise applications, there are interesting developments that show how computers can be trained to understand what we say and how we say it, and to return something of value from that information. Emotion recognition is already being used in industries like entertainment and healthcare. AFFECTIVA, a company that came out of the MIT Media Lab, has developed a platform that can identify emotional states from facial expressions. Their technology is being used to support traditional market research activities like surveys and focus groups.

This year, AFFECTIVA is once again using their AFFDEX demo to let the public test their response to the Super Bowl ads. If you want to have some fun testing your emotional response to this year’s ads while checking out their demo online, you can try it out here. To learn more about the research done on affective computing at the MIT Media Lab, check out their research projects here.


Thank you for calling, how are you today?

When I recently called my cable provider to report a problem, the help desk specialist took my name and politely asked, “Thank you for calling, Mr. Raul, how are you today?” I don’t think the specialist who handled my call was actually interested in finding out whether I was having a good day, but it is a polite way to greet a customer.

Finding out how customers feel is a good way to start a conversation, and when done systematically, can impact a company’s bottom line. But how do you find out how customers feel, and what can you do with this information?

Call centers monitor all kinds of information about calls, including delay times, hold times, abandon rates and service level. Measuring how effectively the call center is meeting customer needs, however, is a different matter. Call centers rely on surveys, complaints and call monitoring, but given the volume of calls, monitoring them is time-consuming and expensive.
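
To make those operational metrics concrete, here is a minimal sketch of how they are typically computed from a call log; the record format and the 20-second service-level threshold are hypothetical, chosen only to illustrate the arithmetic.

    # Minimal sketch: computing common call center metrics from a call log.
    # The record format and the 20-second service level threshold are
    # hypothetical, chosen only to illustrate the arithmetic.
    calls = [
        # (seconds_waiting, answered)
        (12, True),
        (45, True),
        (90, False),  # caller hung up before being answered
        (8, True),
    ]

    answered = [c for c in calls if c[1]]
    abandoned = [c for c in calls if not c[1]]

    abandon_rate = len(abandoned) / len(calls)
    average_delay = sum(wait for wait, _ in answered) / len(answered)
    # Service level: share of all calls answered within the target threshold.
    service_level = sum(1 for wait, _ in answered if wait <= 20) / len(calls)

    print(f"Abandon rate:  {abandon_rate:.0%}")
    print(f"Average delay: {average_delay:.1f}s")
    print(f"Service level: {service_level:.0%} answered within 20s")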

At EMOSpeech, we are developing applications that help call centers measure the emotional characteristics of customer interactions. In addition to providing valuable performance metrics, these applications can help reduce the time spent monitoring calls, making the auditing process more effective. This is done by analyzing 100% of recorded calls and providing the output to call center supervisors. Instead of randomly selecting calls, supervisors can identify which ones should be audited based on the type and level of emotion detected.
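
As an illustration of that prioritization idea (not the actual EMOSpeech implementation), the sketch below ranks recorded calls for auditing by weighting each detected emotion by its intensity; the classifier output format, labels and weights are hypothetical stand-ins for whatever a real emotion classifier would return.

    # Illustrative sketch only: rank recorded calls for auditing by the type
    # and level of detected emotion. The classifier output format, the labels
    # and the weights are hypothetical stand-ins, not the EMOSpeech product.
    EMOTION_WEIGHT = {"anger": 3.0, "frustration": 2.0, "sadness": 1.5, "neutral": 0.0}

    def audit_priority(call):
        """Score a call by weighting each detected emotion by its intensity."""
        return sum(EMOTION_WEIGHT.get(label, 0.0) * intensity
                   for label, intensity in call["emotions"].items())

    recorded_calls = [
        {"id": "call-001", "emotions": {"neutral": 0.9}},
        {"id": "call-002", "emotions": {"anger": 0.7, "frustration": 0.4}},
        {"id": "call-003", "emotions": {"frustration": 0.8}},
    ]

    # Supervisors review the highest-scoring calls first instead of
    # sampling at random.
    for call in sorted(recorded_calls, key=audit_priority, reverse=True):
        print(call["id"], round(audit_priority(call), 2))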

We are currently testing metrics that evaluate how agents handle different call scenarios. Handling customer complaints is probably the first scenario that comes to mind for most people, but there are other qualities that different companies might find relevant, such as assertiveness/hesitation, confidence/insecurity, and empathy/indifference. We will share the results of this evaluation once it is complete.
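
One plausible way to express such paired qualities, sketched below under the assumption of a classifier that scores each agent utterance on a bipolar scale, is to average per-utterance scores into a single value between -1 (e.g., hesitation) and +1 (e.g., assertiveness); the numbers shown are made up for illustration.

    # Hypothetical sketch: each paired quality is a bipolar dimension scored
    # from -1 (e.g. hesitation) to +1 (e.g. assertiveness). Per-utterance
    # scores from an assumed classifier are averaged per agent; the numbers
    # below are made up for illustration.
    def dimension_score(utterance_scores):
        """Average per-utterance scores into one value in [-1, 1]."""
        return sum(utterance_scores) / len(utterance_scores)

    agent_scores = {
        "assertiveness/hesitation": [0.6, 0.2, -0.1, 0.5],
        "confidence/insecurity": [0.4, 0.3, 0.7, 0.1],
        "empathy/indifference": [-0.2, 0.1, 0.0, 0.3],
    }

    for dimension, scores in agent_scores.items():
        print(f"{dimension}: {dimension_score(scores):+.2f}")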

What emotions and call scenarios are relevant for your organization? Send us your feedback by posting a comment!
