On the Ethics of AI, and the Potential for Existential Crises that are Accelerated by AI

AI-enabled drone warfare…a truly terrifying and existential human crisis.

On December 14th, the I-SHARPE Club hosted a very special guest: Dr Shujaat Mirza. Dr Mirza is an expert in AI with a concentration in its ethical implications. https://www.linkedin.com/in/shujaatmirzaa

Dr Mirza earned his BS in Computer Science at the NYU Abu Dhabi campus, an elite global institution with one of the lowest acceptance rates in the world. He stayed to receive his MPhil and PhD from NYU-AD, with a thesis entitled “Towards Responsible AI: Safeguarding Privacy, Integrity, and Fairness”. Dr Mirza is uniquely qualified to speak on AI and its implications for the future.

We began the conversation with the basic question: what is artificial intelligence, or AI? In simple terms, with AI you don’t teach or tell the computer what to do; rather, you give the computer some facts or premises and let it figure out how to solve the problem without a specific set of rules. This is different from straightforward command-logic coding, in which the algorithm explicitly describes how solutions are to be arrived at. One example of an AI task would be to show the machine millions of images of apples and oranges, and then let the machine discover how to distinguish between the two. This is different from telling the machine to look for certain features in apples (“red”) versus oranges (“orange”). Notably, the initial “forward pass” of this activity may not be accurate at the beginning of training, but iterative “backpropagation” improves the solution’s accuracy and reduces error over time.
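To make the forward-pass and backpropagation idea concrete, here is a minimal, purely illustrative sketch (not code from the talk). The millions of images are replaced by two made-up colour features per fruit, and the “network” is a single logistic unit trained by gradient descent; the data, feature names, and learning rate are all invented for illustration.

```python
import math
import random

random.seed(0)

def make_example():
    """Return (features, label): 1 = apple, 0 = orange (synthetic stand-ins for images)."""
    if random.random() < 0.5:
        # apples: high "redness", low "orangeness" (hypothetical colour features)
        return [random.gauss(0.8, 0.1), random.gauss(0.3, 0.1)], 1
    # oranges: low "redness", high "orangeness"
    return [random.gauss(0.3, 0.1), random.gauss(0.8, 0.1)], 0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

weights, bias, lr = [0.0, 0.0], 0.0, 0.5
data = [make_example() for _ in range(1000)]

for epoch in range(10):
    total_error = 0.0
    for x, y in data:
        # forward pass: the model's current guess for this fruit
        pred = sigmoid(weights[0] * x[0] + weights[1] * x[1] + bias)
        error = pred - y
        total_error += abs(error)
        # backward pass: nudge each weight against the error gradient
        weights[0] -= lr * error * x[0]
        weights[1] -= lr * error * x[1]
        bias -= lr * error
    print(f"epoch {epoch}: mean error {total_error / len(data):.3f}")
```

Even this toy loop shows the pattern described above: the earliest passes produce poor guesses, and repeated error-driven weight updates steadily drive the mean error down.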

This led to the second topic: how AI has improved over time compared to the human baseline on certain tasks. For example, performance on complex tasks such as competition mathematics has grown steeply, improving rapidly over the last two years.

Dr Mirza then explained the difference between artificial general intelligence (AGI) and narrow, localized AI. The key difference with AGI is the ability to demonstrate intelligence across domains. One benchmark proposed for this achievement is passing a two-hour adversarial Turing test, in which a human judge is unable to distinguish whether the interlocutor is a machine or another human.

Dr Mirza further explained that ongoing progress in AI has a great deal to do with chip availability. For example, the 1950s–80s are referred to as the “AI winter”: during this period, the algorithms for AI existed in the domain of computer science, but the compute did not. The ongoing quest to achieve AGI will likewise be shaped by the availability of compute power. For this reason, it is difficult to forecast when AGI will become a reality; it could be 2032, or it could be later.

The discussion then turned to the opportunities presented by AI, which are tremendous. Human productivity is likely to increase dramatically, and problems currently considered too complex will become tractable, particularly as large data sets (for example, in human health) can be parsed with AI.

We then discussed the threats of AI at length, beginning with the problem of AI widening racial inequities and the gap between rich and poor along the lines of who has access to the technology and who does not. This could lead to significant human upheaval as AI continues to concentrate wealth, productivity, research, and power among those who already have access to the tools needed to create them. The technology gap is only likely to widen unless efforts are made to reverse this trend.

Another issue with AI will be the deployment of “deepfake” technologies to sway public opinion through misinformation. This is likely to make people both increasingly suspicious of information in general and, paradoxically, more gullible toward information that confirms their existing biases.

The final, and perhaps most terrifying, topic was the use of AI in warfare. Sadly, we have already seen AI deployed in conflicts in the Middle East and Europe. Nation states are currently racing to develop the most lethal AI technologies, and there does not appear to be any regulatory control over this. A highly alarming scenario, potentially more devastating even than a nuclear weapon, would be an AI-enabled drone swarm designed to take out a city or a population. This has been referred to as AI’s “Oppenheimer moment”. While a “human in the loop” mechanism with a kill-switch decision has been proposed as a safeguard, the unfortunate reality is that drones equipped with AI that coordinate complex tasks together are likely to be the most devastating lethal technology ever seen.

An additional topic discussed in the Q&A session was the question of a self-aware AI that acts maliciously against human interests. This relates to “alignment”, the idea that an AI should make decisions that conform with human values. However, this raises the possibility of machines ultimately achieving “situational awareness”, and of “deceptive alignment”, a scenario in which the AI agent merely creates the illusion of being aligned with human interests. This obviously leads to scenarios that have been popular in books and in Hollywood and were previously the domain of science fiction. Within our lifetimes, we may see an “agentic world” in which AI agents co-work with humans.

Other interesting topics discussed included the question of whether robots may experience subjective reality, or merely mimic it. This may depend on how advanced the technology of retrieval cross-checking can become.

Finally, we closed on another somber idea: technology monopolization. Currently, very few people are actually deciding what the future will look like, and there are no checks or balances constraining these individuals as they act on behalf of corporations and states.

We thank Dr Mirza for an amazing meeting and conversation!
