In a recent interview with Business Insider, Paul Krugman mentioned driverless cars and speech recognition as examples of technologies that use “big data” to accomplish something that we previously thought required human intelligence.
The science of making “intelligent machines”, or Artificial Intelligence (AI), has been a favorite topic in science fiction movies, but there are applications we use every day that fit that definition. Siri, the voice-activated personal assistant launched with the iPhone 4S, is a good example. The technology that allows computers to understand human speech took decades to develop, but in a very short period of time, Apple was able to ‘mainstream’ an AI mobile app that changed the way we interact with machines.
What will be the next AI application to go mainstream? Rosalind Picard, director of the affective computing research group at MIT Media Lab, believes that teaching computers to understand emotion could be the next step in making “intelligent machines”.
In the context of the enterprise, emotion adds a dimension to human–computer interaction (HCI) that has many practical applications, as shown by companies like AFFECTIVA and Sociometric Solutions. AFFECTIVA, the company co-founded by Rosalind Picard, has already launched a product that uses emotion recognition for marketing research. Sociometric Solutions, another startup that came out of MIT Media Lab, is using social sensing technology for organizational design.
I don’t expect focus groups to go away anytime soon, or computers to take over decision making in organizations. However, emotion recognition will change the way we interact with computers. At EMOSpeech, we are developing applications that use emotion recognition to automate and improve the call monitoring function at call centers. This is a time-consuming process that, until now, could only be done with human intervention. We are currently beta testing our first product and will be able to share results very soon. Stay tuned!
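To make the idea concrete, here is a minimal, purely illustrative sketch of how emotion recognition might flag call segments for human review. This is not EMOSpeech's actual system; it assumes per-frame acoustic features (pitch and energy) have already been extracted by some audio front end, and the baselines and thresholds are made up for illustration.

```python
# Illustrative sketch only -- not EMOSpeech's actual pipeline.
# Assumes per-frame acoustic features (pitch in Hz, normalized energy)
# have already been extracted from the call audio by a front end.

from statistics import mean

def flag_segment(frames, pitch_baseline=160.0, energy_baseline=0.30):
    """Flag a call segment for review when average pitch and energy
    both rise well above the speaker's baseline -- a crude proxy for
    agitation. Thresholds are hypothetical, for illustration only."""
    avg_pitch = mean(f["pitch"] for f in frames)
    avg_energy = mean(f["energy"] for f in frames)
    agitated = (avg_pitch > 1.3 * pitch_baseline
                and avg_energy > 1.5 * energy_baseline)
    return "review" if agitated else "ok"

# Synthetic feature values: one calm segment, one agitated one
calm = [{"pitch": 150.0, "energy": 0.25}, {"pitch": 165.0, "energy": 0.30}]
tense = [{"pitch": 240.0, "energy": 0.55}, {"pitch": 260.0, "energy": 0.60}]

print(flag_segment(calm))   # -> ok
print(flag_segment(tense))  # -> review
```

A real system would replace the hand-set thresholds with a classifier trained on labeled call audio, but the shape is the same: turn audio into features, score each segment, and surface only the segments a human monitor should listen to.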