Imagine: at some point in a not-so-distant future, you’re driving down the highway in a self-driving car, boxed in on all sides by other vehicles. You find yourself in a life-threatening situation where your car won’t be able to stop in time to avoid a collision.
It has a choice—either collide with one of the other vehicles endangering another passenger’s life or put your life in harm’s way.
What do you think it would do?
If we were driving a car in manual mode, whichever way we chose, it would be considered a reaction to the situation as opposed to a deliberate decision—an instinctual, potentially panicked reaction with no forethought or malice.
However, if a programmer were to instruct the car to take the same call in a life-threatening situation, it could be interpreted as a premeditated homicide. A programmed, self-driving vehicle would, at some point, take a life to save another.
So, who do we tell it to save when morality dictates saving both lives?
The Moral Machine experiment is all about finding answers to such morally grim questions.
Created by researchers Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon and Iyad Rahwan, the online experimental platform explores the moral dilemmas faced by autonomous vehicles.
When Syrian informatics engineer Edmond Awad enrolled himself in an introductory course on AI, he unknowingly entered a world that would forever alter his perception of life.
“I was fascinated by the concepts of many AI techniques like neural networks and genetic algorithms. It pushed me to read more about it. Then, when I went to grad school, I chose to work for my master’s and PhD on topics in multi-agent systems and symbolic AI. I also had a special interest in morality, culture, and religions. So, in 2015—right before AI ethics became popular—as I was about to start a programme at the MIT Media Lab, my advisor Iyad Rahwan told me about this paper he had with Jean-François Bonnefon and Azim Shariff on the ethics of automated vehicles (which was eventually published in Science). I was excited to learn that there is a potential research topic that brings together my interests in AI and ethics”, Awad tells Analytics India Magazine.
Upon expressing his interest in the subject to Rahwan, the pair began deliberating on potential follow-up work to their paper. They discussed what other factors might influence people’s decisions in trolley-like situations.
Eventually, Iyad suggested working on a website that would combine all potential factors. The goal was twofold—to collect data about the popular perception of moral decisions taken by machines and to design a public engagement tool that promotes the discussion around the ethics of machines.
The main functionality of the website is the Judge interface, where you are presented with thirteen scenarios representing dilemmas faced by a self-driving car. These dilemmas are inspired by the Trolley problem.
Each dilemma presents two potential negative outcomes, each resulting in loss of life. The number, gender and age of the characters involved, along with other features of the surroundings, vary with each occurrence.
For each scenario, a choice for the preferred outcome has to be made. At the end of the experiment, a summary of decisions taken is presented along with a comparison to others and an optional survey.
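The Judge flow described above lends itself to a simple data model. As an illustrative sketch only (this is not the Moral Machine’s actual codebase; all names are hypothetical), each dilemma can be represented as a pair of outcomes, and responses can be tallied to see which character types respondents tend to spare:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    """One side of a dilemma: the characters who die if this action is taken."""
    characters: tuple  # e.g. ("adult", "child", "dog")

@dataclass(frozen=True)
class Dilemma:
    stay: Outcome    # outcome if the car stays on course
    swerve: Outcome  # outcome if the car swerves

def tally_spared(responses):
    """Count how often each character type is spared.

    `responses` is a list of (dilemma, choice) pairs, where choice is
    "stay" or "swerve" -- the outcome the respondent picked.
    """
    spared = Counter()
    for dilemma, choice in responses:
        # the characters on the *unchosen* side survive
        survivors = dilemma.swerve if choice == "stay" else dilemma.stay
        spared.update(survivors.characters)
    return spared

d = Dilemma(stay=Outcome(("adult",)), swerve=Outcome(("child", "dog")))
print(tally_spared([(d, "stay"), (d, "swerve")]))
```

Aggregating millions of such choices, as the actual study did with conjoint-analysis techniques, is what lets researchers estimate the relative moral weight respondents place on each attribute.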
There are other parts of the website which allow users to design their own dilemmas (Design interface) as well as browse dilemmas designed by others (Browse interface).
Following the deployment of the website, the team added a Classic interface that presents three variants of the classic ‘Trolley problem’.
The Moral Machine attracted worldwide attention and allowed the team to collect 40 million decisions, in ten languages, from millions of people, in 233 countries and territories.
Based on the moral preferences of their citizens, countries congregate into three clusters: Western, Eastern and Southern. Interestingly, participants showed strong preferences for AVs to spare humans over pets, to spare more lives over fewer lives and to spare younger humans over older humans.
While the general direction of the preferences was universal (e.g., most countries preferred sparing the lives of younger humans over older humans), the magnitude of these preferences varied considerably across countries (e.g., the preference to spare younger lives was less pronounced in Eastern countries).
Differences between countries may be explained by modern institutions and deep cultural traits (e.g., countries with a stronger rule of law have a higher preference for sparing the law-abiding pedestrians at the cost of those flouting road safety laws).
Asked about this variety of responses, all of which could be considered moral, Awad explains:
“For many of these tradeoffs, there is no one ideal resolution (or a framework) that all experts agree on. But in most cases, there are multiple ethically defensible solutions that are supported by different groups of experts. This does not mean the answer to your question is easy.
For a long time, we [have] accepted and lived with the idea of having multiple accepted ethical frameworks. But now, with the increasing autonomy of machines, preparing them to take central roles in society, we are forced to make a choice on how these machines should resolve moral tradeoffs.
The choice of which ethical framework should govern the machine’s decision should be chosen from one of those ethically-defensible, well-thought solutions. [But] which one? Perhaps the one that people like the most, or the one most liked by the elected representatives in charge of making such a decision.
Now once you make a decision on how machines should resolve moral tradeoffs, the question is how to actually implement it. And that’s a different challenge altogether.”
Check out more here: The Car That Knew Too Much by Jean-François Bonnefon
AI systems are often presumed to be biased along some dimension. Even when we limit ourselves to a single one (for instance, gender), there are numerous ways to define ‘bias’ and ‘fairness’ in any given instance.
In fact, it has been shown that there are situations in which three sensible, simple definitions of fairness cannot all be upheld simultaneously by any non-trivial classifier.
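This impossibility can be made concrete with a toy example in the spirit of that result: when two groups have different base rates, a non-trivial classifier cannot simultaneously equalise predictive parity, false-negative rates and false-positive rates across them. The confusion-matrix counts below are invented purely for illustration:

```python
def rates(tp, fp, fn, tn):
    """Compute (PPV, FPR, FNR) from confusion-matrix counts."""
    ppv = tp / (tp + fp)   # precision: P(truly positive | flagged positive)
    fpr = fp / (fp + tn)   # false-positive rate
    fnr = fn / (fn + tp)   # false-negative rate
    return ppv, fpr, fnr

# Group A: base rate 50% (50 of 100 truly positive)
ppv_a, fpr_a, fnr_a = rates(tp=40, fp=10, fn=10, tn=40)
# Group B: base rate 20% (20 of 100 truly positive)
ppv_b, fpr_b, fnr_b = rates(tp=16, fp=4, fn=4, tn=76)

print(ppv_a, ppv_b)  # 0.8 0.8   -> predictive parity holds
print(fnr_a, fnr_b)  # 0.2 0.2   -> false-negative rates are equal
print(fpr_a, fpr_b)  # 0.2 0.05  -> false-positive rates cannot also match
```

With differing base rates, forcing the false-positive rates to match would break one of the other two criteria, so choosing which definition of fairness to satisfy is itself a moral trade-off.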
Awad says, “This does not mean we should give up on building unbiased machines, but it helps us scope where to focus the work. In fact, some experts believe that fixing machine bias is easier than fixing human bias. But essentially, there is a choice to be made here about what kind of fairness is desirable. This is getting back to moral tradeoffs again.”
There are, of course, less contentious problems of bias—problems that result in clear harm to society in general or groups of minorities.
Generally speaking, adopting a responsible, reflective approach to developing AI systems can be helpful in mitigating such potential harm and avoiding unintended consequences. Such an approach would engage with a diverse group of stakeholders from the beginning.
“In cases of AI systems prepared to play a big role in society, we can learn from the development of safety-critical systems that use a package of safety procedures such as adopting different layers of safety, and performing iterations of testing and evaluation in controlled environments and using simulations before deployment”, Awad adds.
Edmond Awad says, “The Moral Machine project spurred many follow-up projects that focus on studying the moral behaviour and moral decision-making of humans and machines in different contexts and across different societies and to provide proof-of-concept computational models to implement ethical decision-making in AI-based algorithms.
These projects have inspired me to co-lead a perspective piece with Sydney Levine that proposes a research agenda and a framework titled ‘Computational Ethics.’
We co-wrote the paper with a team of world-leading scholars from different disciplines, including philosophy, computer science, cognitive sciences and social sciences. In it, we propose a computationally-grounded approach for the study of ethics, and we argue that our understanding of human and machine ethics will benefit from such a computational approach.”
The Moral Machine has also inspired the methodology of several follow-up projects, which use websites designed as serious online games to collect large-scale data.
One such project is ‘MyGoodness’, a website that generates charity dilemmas with the goal of identifying the factors that may lead people to give ineffectively. Awad led the creation of this website with his advisor Iyad Rahwan, along with Zoe Rahwan and Erez Yoeli. The project was created in cooperation with The Life You Can Save foundation.
Since its deployment in December 2017, ‘MyGoodness’ has been visited by 250,000 users who have contributed over three million responses. There are other projects in preparation using a similar approach.
More recently, Edmond Awad was a co-investigator on a major EPSRC-funded grant aimed at investigating and developing the first AI system for air traffic control.
“Our team, led by Tim Dodwell, is composed of researchers from the Universities of Exeter and Cambridge, The Alan Turing Institute, and NATS, the main provider of air traffic control services in the UK. The project is still at an early stage, but we have already identified challenges and lessons that we plan to share publicly at some point”, Awad reveals.
At the end of the discussion about the experiments and their implications, Edmond Awad shared his thoughts on the scope of the research itself, the value of curtailing misinformation, and the importance of communicating the implications of such technological and scientific advances to the public with clarity.
“I would like to think that our role as researchers is to create knowledge. But there is a lot of work that needs to be done to effectively deliver this knowledge to the public. The spread of misinformation and the lack of trust in science in the last few years—especially with the dire consequences during Covid—is an alarm for all academics and researchers that more work should be done in communicating the knowledge we create and in engaging the public in discussions around the societal and ethical considerations of scientific and technological advances.”