The pandemic has accelerated global digitization and forced us to rethink and reimagine the potential role of technology. This week, our Technology Panel Directors, Xiuhao Chen and Oscar Bian, will be sharing their insights on Artificial Intelligence.
Panel Directors’ Interview
Q1: What kind of a discipline is AI?
Xiuhao: AI is a broad discipline, mainly concerning computer science. AI itself comes in many varieties. The ones we come into contact with every day are relatively simple, like Siri and Alexa. We are right to call these AI, but they are still quite primitive compared to what we see in sci-fi movies. Siri and Alexa mostly capture key words in your speech to help you complete small tasks. They don’t actually feel especially ‘intelligent’, but that’s a widespread phenomenon in today’s AI industry.
(cont.) AI overlaps with a lot of other academic disciplines as well. Given the rapid development of AI we see today, I think there’s plenty to talk about in almost any area. For example, in ethics we need to think about how to deal with and co-exist with AI from a moral and legal perspective. In sociology, there’s a lot of room for exploring the problems AI might bring to contemporary society.
Q2: What is the connection between AI and philosophy?
Xiuhao: There’s a branch in philosophy called the philosophy of mind (PoM) and cognitive science. Philosophers working in this area are often cognitive scientists as well. At the dawn of PoM as a field there were two main schools of thought. One school thought that human cognitive ability is essentially a neural network. Our actions can be broken down like this: we form point-to-point connections through our neural network in response to an input, and this process generates an output. The other school thought that cognitive ability is achieved through symbols and computation, like 1+1. AI began as an extension of the neural network school of thought. The most basic AI simulations were from academic discussions like these, and were gradually put into practical use later on. PoM is essentially concerned with human cognition, including our ability to learn. A related problem is what counts as consciousness. If a machine can respond to outside stimuli in exactly the same fashion as a human being, then is it right to say it has consciousness?
(cont.) I think the link between AI and contemporary philosophy can be divided into two main areas. One is concerned with theoretical and technical problems, like the ones studied in PoM. The other is concerned with ethical issues in application, which include both short-term and long-term problems. Topical discussions on algorithms’ challenges to privacy, or on the use of facial recognition in government surveillance, are examples of the short-term ethical problems.
Q3: What is the current state of development for AI? Are there any bottlenecks?
Xiuhao: I think the most fundamental bottleneck is the lack of computational ability, due to problems with data, technology and theory. AI needs very complex technology and a huge amount of data to achieve human-like cognition. A lot of people take this to be the reason why we are still so far away from ubiquitous AI. Say we want an AI to learn to distinguish cats and dogs in a photo. We might need to feed it millions of photos in order for it to make the distinction accurately. But the same task is obviously a lot easier for a human being. There are also some technical issues here. Some photos which humans would instantly recognize as a dog might confuse an AI. It’s like in the case of facial recognition – if you wear sunglasses the AI might not recognize you anymore. So AI still faces a lot of limitations: even a minuscule change in the input can render its prior training useless. It’s also worth mentioning self-learning. Although we often bring up the concept of ‘machine learning’, in the current state of play the learning ability of machines is really not that strong. It’s more about humans feeding the machine data, telling it what we want, and the machine going off to learn how to process the data. We must give AI a lot of one-to-one, essentially primitive data in order for it to ‘learn’.
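The point about learning from labelled examples can be illustrated with a toy sketch. This is not how production image classifiers work – it is a minimal 1-nearest-neighbour ‘classifier’ over hand-made feature vectors, with entirely hypothetical data, just to show that the machine ‘learns’ only from the labelled pairs a human supplies:

```python
import math

def nearest_label(sample, training_data):
    """Return the label of the training example whose feature
    vector is closest (Euclidean distance) to `sample`."""
    return min(training_data, key=lambda ex: math.dist(sample, ex[0]))[1]

# Each example is a (feature vector, label) pair. In a real system
# these would be millions of labelled photos, not three made-up points.
training_data = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
]

print(nearest_label((0.85, 0.15), training_data))  # near the cat cluster
print(nearest_label((0.20, 0.80), training_data))  # near the dog cluster
```

Everything the program ‘knows’ is in `training_data`: with no labelled examples it can do nothing, and a sample far from anything it has seen will still be forced onto one of the existing labels – a crude analogue of the brittleness described above.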
Q4: Who will have control over AI in the future, tech companies or the government? If AI takes over the majority of human jobs, what will the future of work be like?
Xiuhao: Currently the private sector has the most control over artificial intelligence. The relationship between AI and the labour force is a constantly evolving topic. There are two main considerations here: displacement (making the traditional labour force redundant) or replacement (rethinking the relationship between humans and technology, augmenting the traditional labour force). The British government has consistently stressed the importance of values like selflessness, justice, objectivity, accountability, openness, honesty and leadership. Based on such moral frameworks, different countries will form different assessments of AI.
(cont.) Overall, due to the special nature of AI, governments should all be concerned about objectivity, accountability and openness. Governmental institutions should focus on enacting more comprehensive and robust AI ethics laws, and on ensuring that AI-based discrimination is not reflected in government policy.
Q5: AI relies on big data. How can we deal with the potential data privacy issues and concerns of mass surveillance?
Xiuhao: There are two main solutions. One is through government legislation, such as the EU’s General Data Protection Regulation. The other is through public opinion. For example, data privacy is a particularly sensitive issue in western societies right now. From the perspective of big tech firms, scandals concerning data use can be extremely damaging to their business. So I think the general public can serve as an effective oversight mechanism. A solution will require the public to have their say, as well as the right attitude from the government.
Q6: How will the ‘transhumanism’ brought about by technological development affect this generation of young people? What are some of the challenges and opportunities for them?
Oscar: My biggest concern is about the future of the labour force and the impact of AI on what it means to be human. A lot of young people worry about finding a balance between self-fulfilment and societal demands when they plan their future careers. A lot of inner struggle comes from the conflict between the necessity of joining the system and the desire to escape it. Yet the transhumanist society brought about by technology and AI is likely to consolidate the necessity of the status quo and expand its power ad infinitum, thus exacerbating their struggle. A more practical phenomenon is the huge pressure on young people to go into STEM. To paraphrase Audre Lorde, future society might force young people to use the master’s tools to fortify the master’s house.
Xiuhao: Despite the fact that any sort of ‘transhuman’ society is very far away from us, I think the situation we find ourselves in is very special. It’s fair to say that AI developed as this generation of young people grew up. AI technology started to make use of big data and advanced algorithms in our time. Since we are at the beginning of AI exploration, we have to think ahead about a lot of problems, no matter how far away they are from us. It’s like how we have to think ahead about climate change – regardless of whether the impact falls on us or on our offspring, as long as our decisions today are significant for the future, we have some serious thinking to do.
Q7: What was your motivation for designing this panel? What special significance does it have today?
Xiuhao: The main reason is just my personal interest in AI. I’m eager to use this as an opportunity to learn, especially since I’m in a place like Oxford. The university has a Future of Humanity Institute which has done a lot of work on the future of AI and existential risk. Another reason is that we live in special times. AI developed rapidly in the 1950s and 60s but experienced a pause in the 1980s. People had high expectations for AI, but its development suddenly came to a halt. AI really began fulfilling its potential in the 21st century, especially after 2010, due to the explosive increase in the amount of data we have and improvements in our computational power. Artificial intelligence is indeed a hot topic right now, and I hope this panel encourages us to think ahead about the long-term ethical problems associated with AI. This duty and responsibility fall squarely on our generation’s shoulders.
Remember Skynet, the AI that ruled the world in Terminator? Experts predict that within the next two to three decades, machines will become as intelligent as humans. Advances in AI technology, while concretely improving our lives, are also posing complex moral questions. The practical challenges range from prejudices and discrimination reinforced by algorithms to threats to personal data privacy and the social costs of mass surveillance; in the meantime, philosophically speaking we may have to confront superintelligence and transhumanism in our lifetime. How might developments in AI affect this generation of young people, and what are its biggest challenges? Is ‘AI domination’ merely fearmongering?
In addition to discussing AI, OCF’s Technology Panel will also be branching into other issues to do with global disasters. Similar to the risks posed by AI, many fields are facing existential threats with implications across the world. More and more young people are now concerned with global challenges: should they become investigative journalists, help build new cities, or contribute to solving the energy crisis? However, the possibility of imminent disaster may also breed previously unknown opportunities; even hi-tech fields like AI are opening their doors to innovation. As young people begin to take up prominent positions in technological advancement, how could they incorporate effective altruism into scientific careers? Why is it necessary, and how do we partake in this generational shift?
Planning: Yifan Zhao, Jiayi Qiu
Content: Jiayi Qiu, Xiuhao Chen, Oscar Bian
Formatting: Yifan Zhao
Translation: Irene Airuo Zhang