It seems obvious that a moral artificial intelligence would be better than the alternative. But can we align AI's values with our own, and would we even want to? This is the question that underlies this conversation between EconTalk host Russ Roberts and psychologist Paul Bloom.
Leaving aside (at least for now) the question of whether AI will become smarter, what benefits would a moral AI deliver? Would those benefits be outweighed by the potential costs? We'd like to hear what you have to say! Please share your reactions to the prompts below in the comments. As Russ says, we'd love to hear from you.
1- How would you describe the relationship between morality and intelligence? Does being smarter necessarily imply being more moral, whether in humans or AI? Does greater intelligence offer a greater chance of morality? Should AI learn to develop a morality similar to that of humans? How much of (human) intelligence comes from education? How much of morality?
2- Where does (human) cruelty come from? Bloom suggests that intelligence is largely innate, though continually shaped thereafter, while morality is largely bound up with culture. To what extent would AI need to be acculturated in order to acquire some semblance of morality? Bloom reminds us that "…most of the things we look at that totally shock us are done by people who don't think of themselves as bad guys." To what extent could acculturation create cruel AI?
4- Roberts asks: Since humans don't really get high marks on morality, why not use AI superintelligence to solve moral problems – a kind of data-driven morality? (A useful corollary question he asks is: Why don't we make cars that can't exceed the speed limit?) Bloom notes the obvious tension here between morality and autonomy. How could AI help ease this tension? How might it make the tension worse? Continuing the theme of morality versus autonomy, where does the authoritarian impulse come from? Why this (utopian) human need to impose moral rules and tools on others? Roberts says: "I'm not convinced that the nanny state is simply motivated by the fact that I want you not to smoke because I know what's best for you. I think part of it is: I don't want you to smoke because I want you to do what I want." Is this a typically human trait? Would it be transferable to AI?
5- Roberts says: "The country I lived in and loved, the United States, seems to be falling apart, as does much of the West. This doesn't look good. I see many dysfunctional aspects of life in the modern world. Am I too pessimistic?" How would you reply to Russ?
Bonus question: In response to Roberts' question above, Bloom responds: "I have no problem admitting that economic freedom in the broad sense has helped change the standard of living of humanity by the billions. This is a good thing. I have no problem with the idea that there is cultural evolution, and that's a good thing, because a lot of that evolution has been productive and has allowed people to lead better lives. I think the question is whether the so-called Enlightenment project itself is the source of all this."
To what extent do you agree with Bloom? This question also came up recently in this episode of The Great Antidote with David Boaz, who insists that not only is the Enlightenment responsible for such positive change, it is a project still in progress. Again, how much do you agree?