AI and employment_
It is not clear whether artificial intelligence will create or destroy employment. But the assumption that employment will not be the same in 20 years seems well founded. Tasks may be automated; jobs, not so much. That is, routine and predictable tasks will be susceptible to being performed by a robot, while activities that involve relationships, creativity or intuition appear less subject to machine substitution. On the other hand, it is certain that new jobs will be created, whether for the design and maintenance of the robots or for activities related to their supervision.
The question of whether such machines should be subject to specific taxation regimes is tied to the massive deployment of robots. That deployment also opens an opportunity to engage in activities of greater personal growth, since the machines would be sustaining the economic fabric of society.
This means a change of attitude in how we understand work. New talents and new generations will be needed, and we will have to understand them.
Inequality in the digital world_
Technology has been improving production capacity and quality of life, and until now its advances have been accessible to practically everyone (from the pulley to the internal combustion engine).
But today's technology is increasingly complex and accessible only to a few, which means its benefits do not reach everyone. The wealth generated by machines will be distributed among fewer people, concentrated in the few owners of those machines; and social improvements will be enjoyed only by the most developed societies.
The digital divide is not new, but it may widen with the expected progress of technology. It is not only a matter of access to the wealth that technology can generate, but also to the wellbeing and quality of life it can provide. Inequality is at odds with justice, and lack of justice brings instability.
Impact of technology on behavior_
In 2014 the bot Eugene Goostman passed the Turing test, managing to fully emulate a human during a conversation. Since then, artificial intelligence systems have developed to the point where it is already possible, for example, for chatbots to reserve a table at a restaurant or book an appointment at the hairdresser without the interlocutors knowing that they are talking to a machine.
It seems that we are talking to people, but we must be aware that we are talking to machines. If we are not aware of this fact, our behaviour can be altered.
Using voice assistants (for example, Alexa, Google Home or Cortana) can cause children to become demanding and despotic. When they ask their voice assistant for something, it answers quickly and accurately no matter how the request was phrased. They get used to asking through the direct use of imperatives ("Google, show me the news!"). This behavior carries over to the real world, and they ask people for things with the same bluntness and rudeness.
Machines are predictable and do what we want them to do. People have will and feelings. We must keep in mind the differences between the digital world and the real world.
People make mistakes, and therefore machines do too, because they are made by people. Machines do what we want through algorithms, working with the information we give them (the input data). If the algorithms are flawed or incorrect, or if the data is wrong, then the result is wrong: the machine "is wrong".
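This "garbage in, garbage out" effect can be sketched in a few lines. The scenario below is hypothetical (averaging temperature readings from sensors); the point is that a perfectly correct algorithm still produces a wrong answer when one input is wrong.

```python
def average(readings):
    """Correctly computes the arithmetic mean of the readings."""
    return sum(readings) / len(readings)

good_data = [21.0, 21.5, 20.5]    # true room temperatures
bad_data = [21.0, 21.5, 2050.0]   # one sensor mis-reports its reading

print(average(good_data))  # 21.0  -> a sensible result
print(average(bad_data))   # 697.5 -> the algorithm is fine; the data is not
```

The machine "is wrong" here even though its code is flawless: the fault lies entirely in the input.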
People learn, and we do not always learn well. Machines that learn can also learn in the wrong way. Machine learning is based on statistics, which means it has an error margin, a certain probability of failure. But we are not always aware of this fact, and we consider any result valid simply because the system says so. Yet the system also fails, and we fail too if we do not know what is failing.
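A minimal sketch of why a statistical system always carries an error margin. The data below is synthetic: two populations that overlap, classified by a simple threshold. Even the best possible threshold must misclassify some cases, because the classes themselves overlap.

```python
import random

random.seed(0)

# Two overlapping synthetic populations (hypothetical example data).
class_a = [random.gauss(0.0, 1.0) for _ in range(1000)]  # true label 0
class_b = [random.gauss(2.0, 1.0) for _ in range(1000)]  # true label 1

THRESHOLD = 1.0  # midpoint between the two means

def predict(x):
    """Classify a value by a simple threshold rule."""
    return 0 if x < THRESHOLD else 1

errors = (sum(predict(x) != 0 for x in class_a)
          + sum(predict(x) != 1 for x in class_b))
error_rate = errors / 2000
print(f"error rate: {error_rate:.1%}")  # roughly 16%, and never 0%
```

No amount of tuning removes this residual error; it is a property of the data, not a bug in the code, which is why treating every output as valid "because the system says so" is a mistake.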
Bias and lack of neutrality_
We may think that machines act in a fair and neutral way because they have no will or particular interests. But this is not true. Machines work with the data we provide them. If that data is tainted in any way, if it is not neutral, then the machine's result will not be neutral either.
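How tainted data produces a biased machine can be shown with a toy "model". The hiring dataset below is invented for illustration: the code contains no prejudice at all, yet the trained model faithfully replays the bias present in its historical input.

```python
from collections import Counter

# Hypothetical historical hiring decisions, skewed against group "B".
history = [
    ("A", "hired"), ("A", "hired"), ("A", "hired"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "rejected"), ("B", "hired"),
]

def train(data):
    """'Learns' the majority outcome per group. The rule is neutral;
    any bias in the result comes entirely from the training data."""
    outcomes = {}
    for group, decision in data:
        outcomes.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(history)
print(model)  # {'A': 'hired', 'B': 'rejected'} -- the data's bias, replayed
```

The fix is not in the algorithm but in the data: a non-neutral history yields a non-neutral machine.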
The case of Microsoft’s chatbot, whose Twitter account had to be closed because of inappropriate comments, is famous. There are other examples: machines that predict criminal behavior and systematically assign more risk to black people, or machines that classify as a woman any person (male or female) who appears in a kitchen.
It is necessary to understand how a machine can behave in a biased way, and to assign the corresponding responsibilities so that it does not happen.
Every digital device can be hacked, no matter how many security measures it has. Its code can be altered, and then its behavior can become dangerous. The importance of this fact lies in the possibility that we do not realize the machine has been hacked and are therefore unaware of the danger. In the real world we can see when something has been altered, but this is not so easily seen in the digital world (your phone may have a virus without you knowing it). The consequence can be twofold: we can suffer damage of some kind (such as information theft), or we can be given apparently good results that have been manipulated.
This risk grows with the proliferation of artificial intelligence systems. By labeling them “smart”, we instinctively assume they cannot fail. We have assigned to technology the false value of infallibility, but the only sure fact is that nothing in the digital world is totally secure.
Artificial intelligence is based on extremely complex algorithms, and its results sometimes escape the control of its designers. Machines can learn, and their learning can spin out of control, without the engineers themselves knowing why a machine made a certain decision. In that case, who is responsible for the decision? How, and by whom, is the machine’s result explained?
Perhaps we should demand two qualities from artificial intelligence: that its behavior can be explained and that its operation can be audited. Then we can arrive at an idea of responsibility in the sense of having the obligation to inform about and justify any decision made by an algorithm, and being obliged to mitigate any social impact or potential damage.
Machines are increasingly involved in our day-to-day lives, and they will be even more so. These machines have a certain autonomy and are able to make some decisions. A self-driving car may, at a given moment, have to decide whether to run over a pedestrian or save the occupant of the vehicle. The possibility of harm or benefit leaves the digital world and lands in the real world. We are no longer talking about earning points in a gamification system or eliminating trolls in a virtual reality game. This arrival in the real world means that machines must be placed under the principle of justice and subject, therefore, to some kind of law.