Grzegorz J. Nalepa is a professor at the AGH University in Krakow, Poland, who recently held a series of lectures at the Faculty of Computer Science. One of them was “Affective Computing, Context and Processing”. We interviewed him on the ethical questions posed by this new development in technology.
You are a computer scientist but, at the same time, you have a master’s degree in Philosophy. What were the reasons for such an unusual choice?
Let’s say that I decided to approach Philosophy for intellectual pleasure, not for research reasons. When I completed my master’s degree in Computer Science and Artificial Intelligence, I felt that, in some way, I needed to dedicate myself to something more profound, so I took up that study while pursuing a PhD in Computer Science.
During your stay at unibz, one of your scheduled lectures was on Affective Computing. What is it?
It’s a very broad topic and not an entirely new one. As a matter of fact, it was Rosalind Picard - the founder and director of the Affective Computing Research Group at the Massachusetts Institute of Technology - who first proposed it almost twenty years ago. I think the simplest definition could be that of a domain where, on the one hand, computers are able to simulate and express emotions and, on the other hand, they can recognize and interpret human feelings. These are the two main aspects on which research focuses nowadays. I work mostly on the second aspect.
Why should someone want computers to detect and interpret or simulate emotions? What’s the use of it?
From a business perspective, there is a huge, open market for such applications. Many companies are interested in the development of technologies that allow the recognition of feelings. Some of them are already doing it. We can think of the “like” buttons on Facebook, for example, which have expanded from the original thumbs up to other emotions.
What kind of applications do you refer to?
There are many different ones. For instance, in Computer Science we refer to “Sentiment Analysis”. It’s basically an analysis of a written text on Facebook, Amazon, a website or wherever. Fundamentally, you have a text, a computer and an algorithm: you run the algorithm on the text, which should tell you if it’s emotionally positive or negative. More advanced methods could describe a full range of different feelings. Recently there has been a lot of work on the automatic detection of hate speech, for example. Neuromarketing has a great interest in knowing the emotional state of a consumer in order to use this information for different marketing purposes. We could be getting different types of advertising based on our feelings - depending on whether we are sad or happy, we could get a suitable message. In general, marketing will benefit from the research in this area.
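To make the idea concrete: the simplest form of the algorithm Nalepa describes is a lexicon-based scorer that counts positive and negative words. The word lists below are invented for this sketch; real sentiment-analysis systems use machine-learned models and much larger lexicons, so this is only a toy illustration of the "text in, polarity out" principle.

```python
# Toy lexicon-based sentiment analysis: count positive vs. negative words.
# The word lists are illustrative assumptions, not from any real tool.

POSITIVE = {"good", "great", "happy", "love", "excellent"}
NEGATIVE = {"bad", "sad", "hate", "terrible", "awful"}

def sentiment(text: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for a short text."""
    # Lowercase, split into words, and strip surrounding punctuation.
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("What a terrible, sad day"))   # negative
```

A production system would replace the word counts with a trained classifier, but the interface is the same: text goes in, an emotional label comes out.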
What about the second dimension of affective computing?
There could be many applications in care systems, mainly oriented towards senior citizens. These do not necessarily have to be robots; it could mean an affective communication system between a machine and a user. In this case we want the experience to be as friendly as possible and the system to be either sympathetic or compassionate. If the system can discover how we feel, maybe it could use a better tone and cheer us up!
With such gross interference in our emotional world, are we not running the risk of constantly being manipulated by the market?
In general I agree that we’re too observed and monitored. There might be technologies that are invented for evil purposes but I’d say that most technologies are neutral. We can use affective computing in a positive way but, of course, we could use this information to mislead people.
What kind of ethical problems does this pose to a Computer Scientist?
We are, of course, aware that this whole domain could have a significant impact on our societies, but scientists are rarely responsible for creating the final products for the market. It’s a long chain that involves people from business, politics and, finally, every one of us. The problem is that society should understand how the technology works. But this is a general question that goes beyond mere Computer Science. It’s a problem that regards society as a whole. The same applies to the cameras we have in our cities, to the monitoring of credit cards, smartphones, and other aspects of our everyday life and technology. Recently I’ve been reading and hearing that people are more and more worried about the development of Artificial Intelligence (AI). But I don’t think we need to be frightened. Fear is always a powerful tool on a commercial and a political level. We need to understand the potential of AI and cope with it; the same applies to Affective Computing.
Is there a debate on this topic in the community of computer scientists?
Clearly there are important debates on the possible ethical consequences of AI, though not specifically on affective computing. But I don’t think such debates should be held only among us computer scientists. In some countries there are foundations where people from different domains and groups of society meet to discuss ethical questions concerning scientific and technological issues. Computer scientists are mainly oriented towards getting technical results. It’s an issue we should try to approach together. Take the example of a self-driving car. Who’s responsible for its choices when it’s driving? We need to find those answers together. I think there’s great responsibility on people from the humanities - for example sociologists or philosophers - because they have the conceptual means which people from the technological sectors often lack. The ones who study human nature should also contribute to the creation of the right environment for this discussion.
Do you think that our society is ready to cope with a challenge like the one represented by robots as substitutes for human beings?
It is likely these technologies would be developed anyway and it is certainly better for society to discuss these questions rather than be surprised when they finally become part of life. Basically, I think we’re developing general artificial intelligence because we don’t understand our own, as well as its relation to consciousness, so we want conscious and intelligent computers to solve this problem for us.
(Revision: Jemma Prior)
Just pull the plug
Can the future be predicted? When will artificial intelligence become smarter than us humans? Why do we need a human-compatible machine? Academia invited the two sociologists Andreas Metzner-Szigeth (unibz) and Roland Benedikter (Eurac Research) to a conversation.
Watch out for those nerds
There is a hypothesis that NOI could emerge as our home-grown Silicon Valley. Not because of large-scale production of silicon-based microchips, but because of the concentration of nerds who could circulate there: young people highly skilled with new technologies and totally absorbed by them. The idea is to establish a factory of “smart data” there - data, that is, that serve to make decisions that improve people’s lives. Academia discusses this with Diego Calvanese, full professor and established researcher in computational logic, and coordinator of the Smart Data Factory project.
Connecting computer science to philosophy
If you think of computer science, it’s difficult to make the jump to philosophy and economics. Giancarlo Guizzardi has thrown himself into a truly interdisciplinary enterprise.
Knowing your own memory helps to improve it
Prospective memory allows us to plan actions we will carry out in the future, while meta-memory allows us to be aware of our own mnemonic abilities. For example, taking on the responsibility to perform a task in the future, realising whether we will be able to remember it, and carrying it out when the time comes. Both are essential skills, today more than ever, in an age in which each of us is called upon to multitask. Can their development be facilitated in school-age children? Research by Prof. Demis Basso seeks to find out.