Learning Module

Emerging Technologies

MEET YOUR LECTURER

Ryan Jenkins

Dr. Ryan Jenkins is an associate professor of philosophy and a senior fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo. He studies the ethics of emerging technologies, especially automation, cyber war, autonomous weapons, and driverless cars. His work has appeared in journals such as Ethical Theory and Moral Practice and the Journal of Military Ethics, as well as public fora including the Washington Post, Slate, and Forbes.

Lessons

1. Is Technology as “Neutral” as We Think It Is?
2. Should Online Platforms Prevent the Spread of False Information?
3. Should Online Platforms Censor Hate Speech?
4. Are There Hidden Dangers in Robots that Look Like Us?
5. Should We Use Killer Robots to Fight Our Wars?
6. Do Drones Make War Too Easy?
7. Will Autonomous Vehicles Live Up to Their Promise?
8. Should We Worry about Our Diminishing Sphere of Privacy?
9. Does Predictive Policing Make Us All Safer?
10. What if Robots Did All the Work?

Module Introduction

All of our lives are shaped by technology every day — and increasingly, these technologies are structuring the opportunities we have to live, work, play, and relate to one another. As technologies like artificial intelligence (AI) creep further into our lives, the need is greater than ever for careful reflection on their nature and implications. This series of videos investigates technology, from its development and creation through to its deployment and the range of effects it has on society.

Ryan Jenkins, associate professor of philosophy and a senior fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo, leads us through an exploration of the ethical dimensions of technology to help inject some nuance into our understanding of technology and its role in our lives.

It is commonplace, especially in the West, to flippantly suggest that our tools are merely neutral, with no values “built into them” or “embedded” in them. True, technologies can be used for good or ill, and the person wielding the tool is a more obvious subject of moral appraisal. But human priorities, values, and beliefs help answer the question of which technologies get made, why, and for whom. Is technology neutral? Is artificial intelligence objective — or simply a more efficient (and opaque) way of making value-laden and biased decisions?

We also consider a series of questions about how technology mediates our views of the world. Does interacting with female voice assistants, and seeing overly sexualized robots in movies and TV, reinforce the bias that women are subordinate? Is censorship the answer to hate speech and rampant misinformation online? And what’s the value of “free speech,” anyhow? Should we be comfortable offloading the task of killing to machines, as we are already gradually doing in warfare? Could that make war too easy, or too undignified? Or are those kinds of concerns just silly moralizing?

Lastly, we survey a series of technologies and their effects on society. Should the government be allowed to surveil us? And is it acceptable that the police might use artificial intelligence to forecast when and where crime takes place — or even who might be most likely to commit a crime in the future? What if autonomous vehicles turn out to be merely playthings for the rich, at the expense of the poor? Answering these questions requires going deeper into the nature and significance of values like privacy and justice. When AI and robots become sophisticated enough to replace most human labor, what will be left for us to do? Should we look forward to a utopia of leisure, or dread the ennui and poverty that could result?

What’s the best way of incorporating technology into our lives without sacrificing the deepest and most significant interactions and activities that make life worth living? We can only answer this question after rolling up our sleeves and delving into the philosophical foundations of our views of our tools, our selves, and their connection.

Learning Outcomes

  • Explain why it’s important for us to engage in serious ethical investigation as we decide which technologies to develop, how to deploy the technologies we do develop, and the extent to which the creation and use of these technologies should be regulated and modified in light of their societal effects
  • Critique the common assumption that, while technological innovations might be used with the aim of promoting a given goal or perspective, they are in themselves merely neutral tools
  • Marshal concrete examples to show how technology mediates our views of the world, of each other, and even of ourselves
  • Describe how the advantages of technologies like drones and killer robots in warfare come with certain ethical costs and push us to reassess the meaning and value of “honor” in combat
  • Draw informed conclusions about the ways the government and private media companies should be allowed to use technology in policing our actions and speech
  • Analyze the multi-layered ethical considerations that arise when we offload everyday tasks and jobs to automated technologies
  • Explain why proper ethical examination of emerging technologies requires thinking more carefully about the nature and significance of key familiar concepts, such as data, honor, privacy, justice, free speech, and even work
LESSON ONE

Is Technology as “Neutral” as We Think It Is?

By the end of this lesson, you will be able to:

  • Explain what’s meant by the idea that our technologies are simply “neutral” tools to help us live more effective, efficient lives 
  • Identify reasons to question the supposed “neutrality” of our technological tools 
  • Reflect on which values should guide us in designing a technology like Google’s search engine

Watch

Comprehending the argument

1. Which of the following statements best captures this video’s assessment of the commonly held assumption that our technologies are simply “neutral tools” that help us pursue our goals in more efficient and effective ways?

2. Which of the following examples serves as the clearest evidence for the video’s claim that Google’s search engine is not a neutral tool providing objective results?

Evaluating the argument

Consider a user who enters a Google search for “how to commit identity theft without getting caught.” Which of the following most nearly reflects your opinion on how the Google search engine should respond to such a search query?
LESSON TWO

Should Online Platforms Prevent the Spread of False Information?

By the end of this lesson, you will be able to:

  • Specify several key concerns that lead many to call for online media companies to restrict false and misleading speech on their platforms 
  • Describe arguments that social media companies have given for refusing to censor the free speech of their users as a means of preventing the spread of false information 
  • Assess the reasons for and against censorship of online speech, and draw conclusions about how online platforms ought to deal with the increasing proliferation of falsehoods online

Watch

Comprehending the argument

1. According to the video, the spread of falsehoods resulting from unrestricted online speech is concerning in all of the following ways EXCEPT:

2. Which of the following is one way that social media companies publicly defend their decision not to take more significant steps to prevent the spread of falsehoods and misinformation online?

Evaluating the argument

How do you think online media companies should deal with the problem of false and misleading information being spread on their platforms?
LESSON THREE

Should Online Platforms Censor Hate Speech?

By the end of this lesson, you will be able to:

  • Describe the “paradox of tolerance” that challenges the viability of an absolute commitment to free speech
  • Demonstrate the conceptual and ethical difficulty of defining “hate speech” by showing how Facebook’s leaked policy classified groups as worthy of protection from it
  • Articulate your position on whether online hate speech should be censored

Watch

Comprehending the argument

1. The “paradox of tolerance” is the idea that:

2. According to the video’s discussion of the leaked 2017 Facebook policy regarding which groups were classified for protection from hate speech, which of the following groups would have fallen into a protected class?

Evaluating the argument

Should our commitment to freedom of speech include allowing people to express hateful, derogatory views about others online?
LESSON FOUR

Are There Hidden Dangers in Robots that Look Like Us?

By the end of this lesson, you will be able to:

  • Define the concept and purpose of anthropomorphic “framing” in the design of robots and other technologies
  • Explain why anthropomorphic design framing raises concerns about the intentional and unintentional perpetuation of social injustices, and compare alternative approaches to resolving these concerns
  • Evaluate the arguments and express your judgment on how we should respond to the concerns raised about anthropomorphic design

Watch

Comprehending the argument

1. What does it mean to say that engineers use “anthropomorphic framing” when designing robots?

2. If a company plans to use anthropomorphic framing in designing its robots, which of the following tactics is most likely to ensure that company’s designs avoid perpetuating harmful stereotypes of historically marginalized groups?

Evaluating the argument

In your estimation, what is the best way to address the highlighted social justice concerns of critics regarding anthropomorphic design in our technologies?
LESSON FIVE

Should We Use Killer Robots to Fight Our Wars?

By the end of this lesson, you will be able to:

  • Discern the potential practical and ethical advantages of using autonomous weapons (i.e., killer robots) rather than human soldiers to fight our wars 
  • Explain how an analogy between killer robots and sociopathic soldiers helps to illuminate a core ethical criticism of deploying killer robots as weapons of war
  • Determine whether the advantages of killer robots outweigh the ethical concerns about using robots to do our killing for us

Watch

Comprehending the argument

1. Which of the following is NOT cited by supporters as a potential advantage of relying on autonomous weapons rather than on human soldiers to conduct our wars?

2. What is the primary sense in which sociopathic human soldiers can be considered analogous to killer robots, as we reflect on the ethics of using robots to fight our wars?

Evaluating the argument

As autonomous weapon technology becomes more and more advanced, do you think we should be willing to increasingly rely on killer robots to conduct our wars?
LESSON SIX

Do Drones Make War Too Easy?

By the end of this lesson, you will be able to:

  • Consider the advantages of drones in warfare, along with the reasons that critics oppose drone warfare as fundamentally worse than conventional methods of war
  • Analyze the “threshold argument” against the use of drone warfare
  • Express an ethical position on the increasing prevalence of drones in war

Watch

Comprehending the argument

1. According to the video, which of the following is the strongest reason critics give when they argue that drones are fundamentally worse than conventional weapons of war?

2. Which of the following is NOT a reason that drones are thought to lower the “threshold” for going to war?

Evaluating the argument

Which of the following statements most closely expresses your opinion about the increased use of drones as weapons of war?
LESSON SEVEN

Will Autonomous Vehicles Live Up to Their Promise?

By the end of this lesson, you will be able to:

  • Identify the potential benefits that lead many to see autonomous vehicles as a highly promising technological innovation
  • Articulate the concerns raised by critics who doubt that autonomous vehicles will live up to their great promise, especially for people who aren’t wealthy 
  • Express your own perspective on how society should weigh the pros and cons of AVs, and what level of public oversight there should be in the further development of this industry

Watch

Comprehending the argument

1. Which of the following is NOT a way that proponents expect autonomous vehicles to benefit society?

2. The arguments in this video suggest that wealthier people are likely to disproportionately benefit (relative to everyone else) from autonomous vehicles in all of the following ways EXCEPT:

Critical Thinking

Based on the potential benefits and concerns of AVs, which of the following most closely reflects your opinion of how society should proceed in adopting AV technology?
LESSON EIGHT

Should We Worry about Our Diminishing Sphere of Privacy?

By the end of this lesson, you will be able to:

  • Explain the value of individual privacy and why the proliferation of surveillance technologies might therefore be a cause for concern
  • Identify ways that the importance of privacy might be challenged and defended
  • Express your perspective on the way that technology seems to be contracting our “sphere of protected activity”

Watch

Comprehending the argument

1. In opposing the proliferation of surveillance technologies, proponents of privacy often argue that we need privacy because it gives us a “sphere of protected activity.” What is the most important thing about having this “sphere of protected activity,” according to these privacy advocates?

2. Which of the following is NOT a way that supporters of surveillance tools usually respond to the concern that these tools present a threat to our individual privacy?

Critical Thinking

In your view, how concerned should we be about the way that surveillance technologies are curtailing our privacy, and thus our “sphere of protected activity”?
LESSON NINE

Does Predictive Policing Make Us All Safer?

By the end of this lesson, you will be able to:

  • Describe the idea and purpose of “predictive policing”
  • Explain how the historical crime data used as inputs for “predictive policing” might be shaped by biases, and why critics therefore warn that this approach to policing is likely to perpetuate certain kinds of prejudice and injustice 
  • Formulate a view on the proper role, if any, of “predictive policing” in creating a safer, more just, and more flourishing society

Watch

Comprehending the argument

1. Which of the following best describes a primary goal of “predictive policing”?

2. For “predictive policing” technology to deliver fair and reliable predictions of the locations where greater police presence would most improve public safety and security, the historical crime data on which these predictions are based must themselves be objective records of information, unshaped by subjective bias. If (as many believe) these crime records have been shaped by human bias, which of the following historical biases is LEAST likely to have directly contributed to this problem?

Critical Thinking

Should we support the greater use and innovation of “predictive policing” technology by law enforcement?
LESSON TEN

What if Robots Did All the Work?

By the end of this lesson, you will be able to:

  • Distinguish the kinds of work that increasingly can be done by advanced technologies, and understand why many find this move toward greater automation very promising 
  • Explain the value of work in our lives, and why critics see more peril than promise in the automation revolution
  • Analyze the debate over the accelerating automation of work and assess the best path forward

Watch

Comprehending the argument

1. Which of the following is NOT a reason that critics are concerned about the rapidly increasing automation of jobs?

2. Which job is LEAST likely to be replaced by robots in the near future?

Evaluating the argument

Which of the following best describes your opinion on how society should address the rapidly increasing automation of work?