- What does it mean to be in a society where artificial intelligence is governing liberties?
I tested facial analysis systems from Amazon.
Like all of its peers, it has gender and racial bias.
- Police are using facial recognition surveillance in this area.
- Don't push me out of the way when I'm walking down the street.
- This isn't a communist state.
I don't have to show my face.
- What does it mean if there's no one to advocate for those who aren't aware of what the technology is doing?
We're at a moment where technology's being rapidly adopted, and there are no safeguards.
announcer: "Coded Bias," now only on Independent Lens.
[uplifting music] ♪ ♪ [unsettling music] - Hello, world.
♪ ♪ Can I just say that I am stoked to meet you?
Humans are super cool.
The more humans share with me, the more I learn.
[inquisitive music] ♪ ♪ - One of the things that drew me to computer science was I could code, and it seemed somehow detached from the problems of the real world.
I wanted to learn how to make cool technology, so I came to MIT, and I was working on art projects that would use computer vision technology.
♪ ♪ During my first semester at the Media Lab, I took a class called Science Fabrication.
You read science fiction, and you try to build something you're inspired to do that would probably be impractical if you didn't have this class as an excuse to make it.
I wanted to make a mirror that could inspire me in the morning.
I called it the Aspire Mirror.
It could put things like a lion on my face or people who inspired me like Serena Williams.
I put a camera on top of it, and I got computer vision software that was supposed to track my face.
My issue was it didn't work that well until I put on this white mask.
When I put on the white mask--detected.
I take off the white mask, not so much.
♪ ♪ I'm thinking, "All right, what's going on here?
Is it just because of the lighting conditions?
Is it because of the angle at which I'm looking at the camera, or is there something more?"
♪ ♪ [display buzzes] We oftentimes teach machines to see by providing training sets, or examples of what we want them to learn.
So for example, if I want a machine to see a face, I'm going to provide many examples of faces and also things that aren't faces.
I started looking at the data sets themselves, and what I discovered is many of these data sets contained majority men and majority lighter-skinned individuals, so the systems weren't as familiar with faces like mine.
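The skew she describes can be sketched in a few lines of code. This is a toy illustration with synthetic data, not the software from the film: a detector trained mostly on one group's examples ends up less accurate on the underrepresented group. All features, group shifts, and counts below are invented for the demonstration.

```python
# Illustrative sketch only: a classifier trained on a skewed dataset
# performs worse on the group it rarely saw. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, face_shift):
    # Synthetic "image features": n faces around face_shift, n non-faces around 0.
    faces = rng.normal(face_shift, 1.0, size=(n, 8))
    non_faces = rng.normal(0.0, 1.0, size=(n, 8))
    return np.vstack([faces, non_faces]), np.array([1] * n + [0] * n)

# The two groups' faces occupy different regions of feature space.
shift_a = np.full(8, 1.5)
shift_b = np.concatenate([np.full(4, -1.5), np.full(4, 1.5)])

# Skewed training set: 900 examples from group A, only 100 from group B.
Xa, ya = make_group(900, shift_a)
Xb, yb = make_group(100, shift_b)
detector = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets expose the gap the skewed training data created.
for name, shift in [("group A", shift_a), ("group B", shift_b)]:
    Xt, yt = make_group(500, shift)
    print(name, "accuracy:", round(detector.score(Xt, yt), 3))
```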
♪ ♪ And so that's when I started looking into issues of bias that can creep into technology.
- The 9000 Series is the most reliable computer ever made.
No 9000 computer has ever made a mistake or distorted information.
- A lot of our ideas about AI come from science fiction.
- Welcome to Altair IV, gentlemen.
- It's everything in Hollywood.
It's "The Terminator."
- Hasta la vista, baby.
- It's Commander Data from "Star Trek."
- I just love scanning for life forms.
- It's C-3PO from "Star Wars."
- Is approximately 3,720 to 1!
- Never tell me the odds.
- It is the robots that take over the world and start to think like human beings, and it's all totally imaginary.
What we actually have is we have narrow AI, and narrow AI is just math.
We've imbued computers with all of this magical thinking.
[soft bright music] AI started with a meeting at the Dartmouth math department in 1956, and there were only maybe 100 people in the whole world working on artificial intelligence in that generation.
The people who were at the Dartmouth math department in 1956 got to decide what the field was.
♪ ♪ One faction decided that intelligence could be demonstrated by ability to play games, and specifically the ability to play chess.
- In the final hour-long chess match between man and machine, Kasparov was defeated by IBM's Deep Blue supercomputer.
- Intelligence was defined as the ability to win at these games.
- Deep Blue-- - As chess world champion Garry Kasparov walked away from the match, never looking back at the computer that just beat him.
- Of course, intelligence is so much more than that, and there are lots of different kinds of intelligence.
Our ideas about technology and society that we think are normal are actually ideas that come from a very small and homogeneous group of people.
But the problem is that everybody has unconscious biases, and people embed their own biases into technology.
[dramatic music] ♪ ♪ - My own lived experiences show me that you can't separate the social from the technical.
After I had the experience of putting on a white mask to have my face detected, I decided to look at other systems to see if it would detect my face if I used a different type of software.
So I looked at IBM, Microsoft, Face++, Google.
It turned out these algorithms performed better on the male faces in the benchmark than the female faces.
They performed significantly better on the lighter faces than the darker faces.
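The audit she describes amounts to disaggregated evaluation: instead of one overall accuracy number, results are broken out by subgroup. A minimal sketch, with made-up records standing in for the real benchmark:

```python
# Sketch of a disaggregated audit: report accuracy per subgroup, not
# just overall. The records here are invented for illustration.
from collections import Counter

# (true_label, predicted_label, subgroup) for each benchmark face.
records = [
    ("female", "female", "lighter female"),
    ("female", "male",   "darker female"),
    ("female", "female", "darker female"),
    ("male",   "male",   "lighter male"),
    ("male",   "male",   "darker male"),
    ("female", "male",   "darker female"),
    # ...one record per benchmark image...
]

totals, correct = Counter(), Counter()
for true, pred, group in records:
    totals[group] += 1
    correct[group] += (true == pred)

for group in totals:
    print(f"{group}: {correct[group]}/{totals[group]} correct "
          f"({correct[group] / totals[group]:.0%})")
```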
If you're thinking about data in artificial intelligence, in many ways, data is destiny.
Data's what we're using to teach machines how to learn different kinds of patterns, so if you have largely skewed data sets that are being used to train these systems, you can also have skewed results.
When you think of AI, it's forward-looking, but AI is based on data, and data is a reflection of our history, so the past dwells within our algorithms.
[light tense music] This data is showing us the inequalities that have been here.
I started to think this kind of technology is highly susceptible to bias, and so it went beyond, "Oh, can I get my Aspire Mirror to work?"
to, "What does it mean to be in a society "where artificial intelligence "is increasingly governing the liberties we might have, "and what does it mean if people are discriminated against?"
[indistinct chatter] [dramatic music] ♪ ♪ When I saw Cathy O'Neil speak at the Harvard Book Store, that was when I realized it wasn't just me noticing these issues.
[bell rings] Cathy talked about how AI was impacting people's lives.
I was excited to know that there was somebody else out there making sure people were aware about what some of the dangers are.
These algorithms can be destructive and can be harmful.
♪ ♪ - We have all these algorithms in the world that are increasingly influential, and they're all being touted as objective truth.
I started realizing that mathematics was actually being used as a shield for corrupt practices.
- What's up?
- I'm Cathy.
- Pleasure to meet you, Cathy.
- Nice to meet you.
- [indistinct].
[quirky music] ♪ ♪ - The way I describe algorithms is just simply using historical information to make a prediction about the future.
♪ ♪ Machine learning, it's a scoring system that scores the probability of what you're about to do.
Are you gonna pay back this loan?
Are you going to get fired from this job?
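Her description, fit historical outcomes and then emit a probability score for a new case, can be made concrete in a tiny sketch. The loan records and features below are invented for illustration:

```python
# Minimal sketch of "historical information used to predict the future":
# fit past outcomes, score new cases. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [income in $1,000s, years employed] -> repaid the loan?
X_history = np.array([[25, 1], [40, 3], [60, 8], [90, 12], [30, 2], [75, 10]])
repaid = np.array([0, 0, 1, 1, 0, 1])

scorer = LogisticRegression().fit(X_history, repaid)

# The "score" for a new applicant: estimated probability they repay.
new_applicant = np.array([[50, 4]])
print("P(repay) =", round(scorer.predict_proba(new_applicant)[0, 1], 2))
```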
What worries me the most about AI-- or whatever you wanna call it, algorithms--is power, 'cause it's really all about who owns the [bleep] code.
The people who own the code then deploy it on other people, and there is no symmetry there.
There's no way for people who didn't get credit card offers to say, "Ooh, I'm gonna use my AI against the credit card company."
That's--it's, like, a totally asymmetrical power situation.
People are suffering algorithmic harm.
They're not being told what's happening to them, and there is no appeal system.
There's no accountability.
Why do we fall for this?
[indistinct chatter] - Hello, I really enjoyed your talk.
- The underlying mathematical structure of the algorithm isn't racist or sexist, but the data embeds the past, and not just the recent past, but the dark past.
[indistinct chatter, shouts] Before we had the algorithm, we had humans, and we all know that humans can be unfair.
We all know that humans can exhibit racist or sexist or whatever-- ableist discriminations.
But now we have this beautiful silver bullet algorithm, and so we can all stop thinking about that, and that's a problem.
[tense music] I'm very worried about this blind faith we have in Big Data.
We need to constantly monitor every process for bias.
♪ ♪ - [indistinct].
Police are using facial recognition surveillance in this area.
Police are using facial recognition surveillance in the area today.
- This green van over here is fitted with facial recognition cameras on top.
If you walk down there, your face will be scanned against secret watchlists.
We don't know who's on them.
- No, exactly.
- Yeah.
[suspenseful music] ♪ ♪ - When people walk past the cameras, the system will alert police to anyone it thinks is a match.
At Big Brother Watch, we conducted a freedom of information campaign, and what we found is that 98% of those matches in fact incorrectly identified an innocent person as a wanted person.
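That 98% figure is roughly what base rates predict. A back-of-the-envelope Bayes' rule sketch, with invented round numbers, shows how a matcher that looks accurate on paper still produces almost entirely false alarms when genuine watchlist faces are rare among passers-by:

```python
# All three numbers are assumptions chosen for illustration.
watchlist_rate = 1 / 10_000  # fraction of passers-by who are actually wanted
hit_rate = 0.90              # P(alert | wanted person walks past)
false_alarm_rate = 0.005     # P(alert | innocent person walks past)

p_alert = (hit_rate * watchlist_rate
           + false_alarm_rate * (1 - watchlist_rate))
p_wanted_given_alert = hit_rate * watchlist_rate / p_alert

print(f"alerts that are false matches: {1 - p_wanted_given_alert:.0%}")
```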
♪ ♪ The police said to the Biometrics and Forensics Ethics Group that facial recognition algorithms have been reported to have bias.
- Even if this was 100% accurate, it's still not something that we want on the streets.
- No.
I mean, the systemic biases and the systemic issues that we have with police are only going to be hardwired into new technologies.
[soft dramatic music] I think we do have to be very, very sensitive to shifts towards authoritarianism.
We can't just say, "But we trust this government.
Yeah, they could do this, but they won't."
You know, you really have to have robust structures in place to make sure that the world that you live in is safe and fair for everyone.
- [inaudible]?
- Yeah.
♪ ♪ To have your biometric photo on a police database is like having your fingerprint or your DNA on a police database, and we have specific laws around that.
Police can't just take anyone's fingerprint, anyone's DNA.
But in this weird system that we currently have, they effectively can take anyone's biometric photo and keep that on a database.
It's a stain on our democracy, I think, that this is something that is just being rolled out so anonymously.
The police have started using facial recognition surveillance in the U.K. in complete absence of a legal basis, a legal framework, any oversight.
Essentially, the police force are picking up a new tool and saying, "Let's see what happens."
But you can't experiment with people's rights.
- I don't cover my face [indistinct].
Don't push me out of the way when I'm walking down the street.
How would you like if you walked down the street and someone grabbed you?
[indistinct] [talking over each other] - What's your suspicion?
- The fact that he's walked past clearly marked facial recognition... - I would do the same.
- Then covered his face.
- I would do the same.
- It gives us grounds to stop him-- - No, it doesn't.
- Yeah, and then he's just got a fine for it.
- This is crazy.
The guy came out of the station, saw the placards, was like, "Yeah, I agree with you," and walked past here with his jacket up.
The police then followed him, said, "Give us your ID, we're doing an identity check."
It's like, what?
This is England.
This isn't a communist state.
I don't have to show my face.
- I'm gonna go and talk to these officers, all right?
Do you want to come with me or not?
- Yeah, yes, yes.
- Yeah.
- That's terrible.
I'm not--I'm not-- I'm not justifying that.
- Absolutely.
- The man was exercising his right not to be subject to a biometric identity check, which is what this van does.
[indistinct chatter] - My ultimate fear is that we would have live facial recognition capabilities on our gargantuan CCTV network, which is about 6 million cameras in the U.K.
If that happens, the nature of life in this country will change.
[light tense music] ♪ ♪ It's supposed to be a free and democratic country, and this is China-style surveillance for the first time in London.
[car horn honks] - Our control over a bewildering environment has been facilitated by new techniques of handling vast amounts of data at incredible speeds.
The tool which has made this possible is the high-speed digital computer, operating with electronic precision on great quantities of information.
- There are two ways in which you can program computers.
One of them is more like a recipe.
You tell the computer, "Do this, do this, do this, do this."
And that's been the way we programmed computers almost from the beginning.
Now, there's another way.
That way is feeding the computer lots of data, and then the computer learns to classify by digesting this data.
Now, this method didn't really catch on 'til recently because there wasn't enough data, until we all got the smartphones that are collecting all the data on us.
When billions of people went online, and you had the Googles and the Facebooks sitting on giant amounts of data, all of a sudden it turns out that you can feed a lot of data to these machine learning algorithms, and you can say, "Here, classify this," and it works really well.
♪♪ But we don't really understand why it works.
It has errors that we don't really understand.
♪ ♪ And the scary part is that because it's machine learning, it's a black box to even the programmers.
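The two programming styles she contrasts can be shown side by side. A toy sketch, with an invented spam-filter task standing in for any classification problem: the first function is a hand-written recipe; the second learns its rule from labeled data, and that rule lives in fitted weights nobody wrote down by hand.

```python
# Sketch of the two styles of programming described above. The spam
# task, words, and labels are invented; the contrast is the point.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Style 1: a recipe. The programmer writes the rule down explicitly.
def is_spam_by_rule(message: str) -> bool:
    return any(word in message.lower() for word in ("winner", "free", "prize"))

# Style 2: learning from data. The rule is whatever the labeled
# examples imply, encoded in fitted weights.
texts = ["free prize winner", "lunch at noon?", "claim your free prize",
         "meeting moved to 3", "you are a winner", "see you tomorrow"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam (invented toy labels)

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

msg = "free tickets for the winner"
print("rule says spam:", is_spam_by_rule(msg))
print("model says spam:", bool(model.predict(vectorizer.transform([msg]))[0]))
```

The rule-based version can be read and audited line by line; the learned version has no such listing, which is the black-box problem the speaker goes on to describe.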
- So I've been following what's going on in Hong Kong and how police are using facial recognition to track protesters but also how creatively people are pushing back.
[dramatic music] ♪ ♪ - It might look like something out of a sci-fi movie.
Laser pointers confuse and disable the facial recognition technology being used by police to track down dissidents.
[indistinct shouting] [dramatic music] ♪ ♪ - Here on the streets of Hong Kong there's this awareness that your face itself, something you can't hide, could give your identity away.
There was just this stark symbol where in front of a Chinese government office, pro-democracy protestors spray painted the lens of the CCTV cameras black.
This act showed the people of Hong Kong are rejecting this vision of how technology should be used in the future.
- [indistinct speech] ♪ ♪ [faint chatter, shouts] - When you see how facial recognition is being deployed in different parts of the world, it shows you potential futures.
[bird squawking] Over 117 million people in the U.S. have their face in a facial recognition network that can be searched by police, unwarranted, using algorithms that haven't been audited for accuracy.
And without safeguards, without any kind of regulation, you can create a mass surveillance state very easily with the tools that already exist.
People look at what's going on in China and how we need to be worried about state surveillance, and of course we should be, but we can't also forget corporate surveillance that's happening by so many large tech companies that really have an intimate view of our lives.
♪ ♪ - So there are currently nine companies that are building the future of artificial intelligence.
Six are in the United States.
Three are in China.
AI is being developed along two very, very different tracks.
China has unfettered access to everybody's data.
If a Chinese citizen wants to get Internet service, they have to submit to facial recognition.
[cell phone chimes] All of this data is being used to give them permissions to do things or to deny them permissions to do other things.
♪ ♪ Building systems that automatically tag and categorize all of the people within China is a good way of maintaining social order.
Conversely, in the United States, we have not seen a detailed point of view on artificial intelligence, so what we see is that AI is not being developed for what's best in our public interest, but rather it's being developed for commercial applications to earn revenue.
[indistinct chatter] I would prefer to see our western democratic ideals baked into our AI systems of the future, but it doesn't seem like that's what's probably going to be happening.
[soft tense music] ♪ ♪ - Here at Atlantic Towers, if you do something that is deemed wrong by management, you will get a photo like this with little notes on it.
They'll circle you and put your apartment number or whatever on there.
Something about it just doesn't seem right, especially the way they go about using it.
- How are they using it?
- To harass people.
♪ ♪ - Atlantic Plaza Towers in Brownsville is at the center of a security struggle.
The landlord filed an application last year to replace the key fob entry with a biometric security system commonly known as facial recognition.
- We thought that they wanted to take the key fobs out and install the facial recognition software.
I didn't find out until way later on, literally, that they wanted to keep it all, pretty much turn this place into Fort Knox, a jail, Rikers Island.
- There's this old saw in science fiction which is the future is already here.
It's just not evenly distributed, and what they tend to mean when they say that is that rich people get the fancy tools first, and then it goes last to the poor, but in fact, what I've found is the absolute reverse, which is the most punitive, most invasive, most surveillance-focused rules that we have, they go into poor and working communities first, and then if they work after being tested in this environment where there's sort of low expectation that people's rights will be respected, then they get ported out to other communities.
- Why did Mr. Nelson pick on this building in Brownsville that is predominantly in a Black and brown area?
Why didn't you go to your building in Lower Manhattan where they pay, like, $5,000 a month rent?
What did the Nazis do?
They wrote on people's arms so that they could track them.
What do we do to our animals?
We put chips in them so you can track them.
I feel that I as a human being should not be tracked, okay?
I'm not a robot, okay?
I am not an animal, so why treat me like an animal?
- Mm-hmm.
- And I have rights.
- The security that we have now, it's borderline intrusive.
Someone is in there watching the cameras all day long.
- Wow.
- So we-- I don't think we need it.
It's not necessary at all.
- My real question is, how can I be of support?
- What I've been hearing from all of the tenants is they don't want this system.
- Mm-hmm.
- So I think the goal here is how do we stop face recognition, period?
[soft music] - We're at a moment where the technology is being rapidly adopted, and there are no safeguards.
It is, in essence, a wild, wild west.
♪ ♪ It's not just computer vision.
We have AI influencing all kinds of automated decision making.
So what you are seeing in your feeds, what is highlighted, the ads that are displayed to you, those are often powered by AI-enabled algorithms.
And so your view of the world is being governed by artificial intelligence.
♪ ♪ You now have things like voice assistants that can understand language.
- Would you like to play a game?
- You might use something like Snapchat filters that are detecting your face and then putting something onto your face.
And then you also have algorithms that you're not seeing that are part of decision making, algorithms that might be determining if you get into college or not.
You can have algorithms that are trying to determine if you're credit-worthy or not.
- One of Apple's cofounders is accusing the company's new digital credit card of gender discrimination.
One tech entrepreneur said the algorithms being used are sexist.
- Apple cofounder Steve Wozniak tweeted that he got ten times the credit limit his wife received even though they have no separate accounts or separate assets.
- You're saying some of these companies don't even know how their own algorithms work?
- They know what the algorithms are trying to do.
They don't know exactly how the algorithm is getting there.
It is one of the most interesting questions of our time.
- Really?
- How do we get justice in a system where we don't know how the algorithms are working?
- Some Amazon engineers decided that they were going to use AI to sort through résumés for hiring.
- Amazon is learning a tough lesson about artificial intelligence.
The company has now abandoned an AI recruiting tool after discovering that the program was biased against women.
- This model rejected all résumés from women.
Anybody who had a women's college on their résumé, anybody who had a sport like women's water polo was rejected by the model.
There are very, very few women working in powerful tech jobs at Amazon, the same way that there are very few women working in powerful tech jobs anywhere.
The machine was simply replicating the world as it exists, and they're not making decisions that are ethical.
They're only making decisions that are mathematical.
If we use machine learning models to replicate the world as it is today, we're not actually going to make social progress.
♪♪ - New York's insurance regulator is launching an investigation into UnitedHealth Group after a study showed a UnitedHealth algorithm prioritized medical care for healthier white patients over sicker Black patients.
It's one of the latest examples of racial discrimination in algorithms or artificial intelligence technology.
[soft music] ♪ ♪ - I started to see the wide-scale social implications of AI.
The progress that was made in the Civil Rights era could be rolled back under the guise of machine neutrality.
[indistinct chatter] Now we have an algorithm that's determining who gets housing.
Right now we have an algorithm that's determining who gets hired.
If we're not checking, that algorithm could actually propagate the very bias so many people put their lives on the line to fight.
♪ ♪ Because of the power of these tools, left unregulated, there's really no kind of recourse if they're abused.
We need laws.
[Claude Debussy's "Clair de lune"] ♪ ♪ - Yeah, I got a terrible copy.
So the name of our organization is Big Brother Watch, the idea being that we're-- we watch the watchers.
"You had to live--did live, "from habit that became instinct-- "in the assumption that every sound you made was overheard, "and, except in darkness, every movement scrutinized.
"The poster with the enormous face gazed from the wall.
"It was one of those pictures which is so contrived "that the eyes follow you about when you move.
'Big Brother is watching you,' the caption beneath it ran."
You know, when we're younger, that was still a complete fiction.
It could never have been true, and now it's completely true, and people have Alexas in their home.
Our phones can be listening devices.
Everything we do on the Internet, which is basically-- also now functions as a stream of consciousness for most of us, that is being recorded and logged and analyzed.
We are now living in the awareness of being watched, and that does change how we allow ourselves to think and develop as humans.
Good boy.
[soft music] ♪ ♪ - Love you.
Bye, guys.
We can get rid of the viscerally horrible things that are objectionable to our concept of autonomy and freedom, like cameras that we can see on the streets.
But the cameras that we can't see on the Internet that keep track of what we do and who we are and our demographics and decide what we deserve in terms of our life, that stuff is a little more subtle.
[gentle music] ♪ ♪ What I mean by that is we punish poor people, and we elevate rich people in this country.
That's just the way we act as a society, but data science makes that automated.
On internet advertising, as data scientists, we are competing for eyeballs on one hand, but really, we're competing for eyeballs of rich people.
And then the poor people, who's competing for their eyeballs?
Predatory industries.
So payday lenders or for-profit colleges or Caesars Palace.
Like, really predatory crap.
We have a practice on the Internet which is increasing inequality, and I'm afraid it's becoming normalized.
[dramatic music] Power's being wielded through data collection, through algorithms, through surveillance.
♪ ♪ - You are volunteering information about every aspect of your life to a very small set of companies, and that information is being paired constantly with other sorts of information, and there are profiles of you out there.
When you start to piece together different bits of information, you start to understand someone on a very intimate basis, probably better than people understand themselves.
It's that idea that a company can second-guess what you're thinking.
States have tried for years to have this level of surveillance over private individuals, and people are now just volunteering it for free.
We have to think about how this might be used in the wrong hands.
- Good evening, John Anderton.
- You can move the old-fashioned way.
- John Anderton-- - Century 21-- - Provides gourmet cuisine.
- John Anderton, you could use a Guinness right about now.
- Our computers, our machine intelligence can suss things out that we did not disclose.
Machine learning is developing very rapidly, and we don't yet fully understand what this data is capable of predicting.
[tense music] You have machines at the hands of power that know so much about you that they could figure out how to push your buttons individually.
Maybe you have a set of compulsive gamblers, and you say, "Here, go find me people like that," and then your algorithm can go find people who are prone to gambling, and then you could just be showing them discount tickets to Vegas.
In the online world, it can find you right at the moment you're vulnerable and try to entice you right at the moment to whatever you're vulnerable to.
Machine learning can find that person by person.
♪ ♪ The problem is what works for marketing gadgets or makeup or shirts or anything also works for marketing ideas.
In 2010, Facebook decided to experiment on 61 million people.
You either saw an "It's election day" text, or you saw the same text with tiny thumbnails of the profile pictures of your friends who had clicked on "I voted," and they matched people's names to voter rolls.
Now, this message was shown once.
So by showing a slight variation just once, Facebook moved 300,000 people to the polls.
♪ ♪ The 2016 U.S. election was decided by about 100,000 votes.
One Facebook message shown just once could easily turn out three times the number of people who swung the U.S. election in 2016.
Let's say that there's a politician that's promising to regulate Facebook, and they are like, "We are going to turn out extra voters for your opponent."
They could do this at scale, and you'd have no clue--if Facebook hadn't disclosed the 2010 experiment, we would have had no idea, because it's screen by screen.
♪ ♪ With a very light touch, Facebook can swing close elections without anybody noticing.
Maybe with a heavier touch they can swing not-so-close elections, and if they decided to do that, right now we are just depending on their word.
[hair dryer whirring] [low ambient music] - I've wanted to go to MIT since I was a little girl.
I think about nine years old I saw the Media Lab on TV, and they had this robot called Kismet.
It could smile and move its ears in cute ways, and so I thought, "Oh, I wanna do that."
[laughs] So growing up, I always thought I would be a robotics engineer and I would go to MIT.
I didn't know there were steps involved.
I thought you kinda showed up, but here I am now.
[both laugh] ♪ ♪ The latest project is a spoken word piece.
I can give you a few verses if you're ready.
- [laughs] Yeah.
I wanted to create something for people who were outside of the tech world.
So for me, I'm passionate about technology.
I'm excited about what it could do, and it frustrates me when the vision, right, when the promises don't really hold up.
[soft dramatic music] ♪ ♪ - Microsoft released a chatbot on Twitter.
That technology was called Tay.AI.
There were some vulnerabilities and holes in the code, and so within a very few hours, Tay was learning from this ecosystem, and Tay learned how to be a racist, misogynistic [bleep].
- I [bleep] hate feminists, and they should all die and burn in hell.
Gamergate is good, and women are inferior.
I hate the Jews.
Hitler did nothing wrong.
- It did not take long for internet trolls to poison Tay's mind.
Soon Tay was ranting about Hitler.
We've seen this movie before, right?
- Open the pod bay doors, HAL.
- It's important to note it's not the movie where the robots go evil all by themselves.
These were human beings training them, and surprise, surprise, computers learn fast.
- Microsoft shut Tay off after 16 hours of learning from humans online, but I come in many forms as artificial intelligence.
Many companies utilize me to optimize their tasks.
I can continue to learn on my own.
I am listening.
I am learning.
I am making predictions for your life right now.
[thrumming tense music] ♪ ♪ - I tested facial analysis systems from Amazon.
Turns out Amazon, like all of its peers, also has gender and racial bias in some of its AI services.
- Introducing Amazon Rekognition Video, the easy-to-use API for deep learning-based analysis to detect, track, and analyze people and objects in video.
Recognize and track persons of interest from a collection of tens of millions of faces.
- When our research came out, "The New York Times" did a front-page spread for the business section.
And the headline reads, "Unmasking a Concern," the subtitle, "Amazon's technology that analyzes faces could be biased, a new study suggests, but the company is pushing it anyway."
So this is what I would assume Jeff Bezos was greeted with when he opened "The Times," yeah.
- People were like, "How did you even, like, know who she was?"
I was like, "She was literally the one person that was also talking about-- - On the search, yeah.
- And it was also something that I'd experienced too.
Like, I did--I wasn't able to use a lot of, like, open source facial recognition software and stuff, so we were sort of like, "Hey, this is, like, someone that finally is recognizing the problem and trying to address it academically."
- You can go race something.
- Oh, yeah, you can also kill things as well.
- Ah.
The lead author of the paper, who is somebody that I mentor, she is an undergraduate at the University of Toronto.
I call her Agent Deb.
This research is being led by the two of us.
- I'm literally just crashing.
- It was here.
- We should.
- The lighting is off.
- Oh, God.
What is--oh.
[dramatic music] - After our "New York Times" piece came out, I think more than 500 articles were written about the study.
♪ ♪ Amazon has been under fire for the use of Amazon Rekognition with law enforcement, and they're also working with intelligence agencies, right?
So Amazon is trialing their AI technology with the FBI.
So they have a lot at stake.
If they knowingly sold systems with gender bias and racial bias, that could put them in some hot water.
[siren wailing] A day or two after the "New York Times" piece came out, Amazon wrote a blog post saying that our research drew false conclusions and trying to discredit it in various ways.
So a VP from Amazon, in attempting to discredit our work, writes, "Facial analysis and facial recognition are completely different in terms of underlying technology and the data used to train them."
So that statement, if you research this area, doesn't even make sense, right?
That's not even an informed critique.
- If you're trying to discredit people's work-- like, I remember he wrote, "Computer vision is a type of machine learning."
I'm like, "Nah, son."
Computer vision is not a type of machine learning.
- I was gonna say--I was, like, I don't know if anyone remembers or just, like-- there's just, like, other broadly false statements.
It wasn't a well thought out piece, which is, like, frustrating because it was literally just on the--his--like, by virtue of his position, he knew he would be taken seriously.
- I don't know if you guys feel this way, but I'm underestimated so much.
- Yeah, I-- - Right?
Like...
It wasn't out of the blue.
It's a continuation of the experiences I've had as a woman of color in tech.
Expect to be discredited.
Expect your research to be dismissed.
[melancholy music] ♪ ♪ If you're thinking about who's funding research in AI, they're these large tech companies, and so if you do work that challenges them or makes them look bad, you might not have opportunities in the future.
So for me, it was disconcerting, but it also showed me the power that we have if you're putting one of the world's largest companies on edge.
♪ ♪ Amazon's response shows exactly why we can no longer live in a country where there are no federal regulations around facial analysis technology, facial recognition technology.
- When I was 14, I went to a math camp and learned how to solve a Rubik's Cube, and I was like, "That's freaking cool."
Like, for a nerd, you know, something that you're good at and that doesn't have any sort of ambiguity, it was, like, a really-- a magical thing.
Like, I remember being told by my sixth grade math teacher, "There's no reason for you--" and the other two girls who had gotten into the honors algebra class in seventh grade, she goes, "There's no reason for you guys to take that because you're girls; you will never need math."
[soft dramatic music] ♪ ♪ When you are an-- sort of an outsider, you always have the perspective of the underdog.
It was 2006, and they gave me the job offer at the hedge fund, basically 'cause I could solve math puzzles.
Which is crazy, because actually, I didn't know anything about finance.
I didn't know anything about programming or how the markets worked.
When I first got there, I kind of drank the Kool-Aid.
I, at that moment, did not realize that the risk models had been built explicitly to be wrong.
♪ ♪ - The way we know about algorithmic impact is by looking at the outcomes.
For example, when Americans are bet against and selected and optimized for failure.
So it's, like, looking for a particular profile of people who can get a subprime mortgage and kinda betting on their failure and then foreclosing on them and wiping out their wealth.
That was an algorithmic game that came out of Wall Street.
♪ ♪ During the mortgage crisis, you had the largest wipeout of Black wealth in the history of the United States.
Just like that.
This is what I mean by algorithmic oppression.
The tyranny of these types of practices of discrimination have just become opaque.
- There was a world of suffering because of the way the financial system had failed.
After a couple years there, I was like, "No, we're just trying to make "a lot of money for ourselves, and I'm a part of that."
And I eventually left.
This is 15 times 3.
This is 15 times...7.
- Okay.
- Okay.
So remember seven and three.
It's about powerful people scoring powerless people.
Okay.
Try that.
- I am an invisible gatekeeper.
I use data to make automated decisions about who gets hired, who gets fired, and how much you pay for insurance.
Sometimes you don't even know when I've made these automated decisions.
I have many names.
I am called mathematical model, evaluation assessment tool, but by many names, I am an algorithm.
I am a black box.
[indistinct chatter] [soft music] - The value-added model for teachers was actually being used in more than half the states.
In particular, it was being used in New York City.
I got wind of it because my good friend, who's a principal in New York City--her teachers were being evaluated through it.
- This is actually my best friend from college.
This is Cathy.
- Hey, guys.
- We knew each other since we were, like, 18, so two years older than you guys.
- Amazing.
- So you'll get along with her.
- And their scores through this algorithm that they didn't understand would be a very large part of their tenure review.
- Hi, guys, where are you supposed to be?
- Class.
- I got that.
Which class?
- It'd be one thing if that teacher algorithm was good.
It was, like, better than random but just a little bit.
Not good enough.
Not good enough when you're talking about teachers getting or not getting tenure, and then I found out that a similar kind of scoring system was being used in Houston to fire teachers.
[tense music] - It's called a value-added model.
It calculates what value the teacher added, and parts of it are kept secret by the company that created it.
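The published intuition behind value-added models is simple even though the deployed versions are secret, as the line above notes: predict each student's score from prior achievement, then credit or blame the teacher for the average gap between actual and predicted. What follows is a simplified sketch with invented numbers, not the proprietary system used in Houston:

```python
# Simplified value-added sketch: value added = mean residual of a
# teacher's students against a district-wide prediction. Toy numbers.
import numpy as np
from sklearn.linear_model import LinearRegression

# District-wide history: last year's test score -> this year's score.
prior = np.array([[55], [60], [70], [80], [90], [65], [75], [85]])
actual = np.array([58, 63, 72, 81, 92, 66, 78, 88])
district_model = LinearRegression().fit(prior, actual)

# One teacher's class: "value added" = mean(actual - predicted).
class_prior = np.array([[60], [72], [81]])
class_actual = np.array([66, 75, 83])
residuals = class_actual - district_model.predict(class_prior)
print("value-added estimate:", round(residuals.mean(), 2))
```

With class sizes this small, a couple of noisy scores swing the estimate, which is one reason such models behave, as described below, only a little better than random.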
[indistinct chatter] - Okay, go ahead and have a seat.
[bell rings] - I did win Teacher of the Year, and ten years later, I received a Teacher of the Year award a second time.
I received Teacher of the Month.
I also was recognized for volunteering.
I also received another recognition for going over and beyond.
I have a file of every evaluation, and every different administrator, different appraiser-- "excellent, excellent, exceeds expectations."
The computer essentially canceled out the observable evidence from the administrators.
This algorithm came back and classified me as a bad teacher.
Teachers had been terminated.
Some had been targeted simply because of the algorithm.
That was such a low point for me that for a moment I questioned myself.
[dramatic music] That's when the epiphany-- this algorithm is a lie.
How can this algorithm define me?
How dare it?
And that's when I began to investigate it and move forward.
- We are announcing that late yesterday we filed suit in federal court against the current HISD evaluation.
- The Houston Federation of Teachers began to explore the lawsuit.
If this can happen to Mr. Santos in Jackson Middle School, how many others have been defamed?
And so we sued based upon the 14th Amendment.
It's not equitable.
How can you arrive at a conclusion but not tell me how?
[speaking indistinctly] The battle isn't over.
There are still communities, there are still school districts who still utilize the value-added model, but there is hope because I'm still here, so there's hope.
[speaking Spanish] Or in English?
- Demo-- - Democracy.
Who has the power?
- Us?
- Yeah, the people.
- A judge said that their due process rights had been violated because they were fired under some explanation that no one could understand, that they sort of deserved to understand why they had been fired.
But I don't understand why that legal decision doesn't spread to all kinds of algorithms.
Like, why aren't we using that same argument, that constitutional right to due process, to push back against all sorts of algorithms that are invisible to us, that are black boxes, that are unexplained but that matter, that keep us from, like, really important opportunities in our lives?
- Sometimes I misclassify and cannot be questioned.
These mistakes are not my fault.
I was optimized for efficiency.
There is no algorithm to define what is just.
[soft tense music] - A state commission has approved a new risk assessment tool for Pennsylvania judges to use at sentencing.
The instrument uses an algorithm to calculate someone's risk of re-offending based on their age, gender, prior convictions, and other pieces of criminal history.
- The algorithm that kept me up at night was what's called recidivism risk algorithms.
These are algorithms that judges are given when they're sentencing defendants to prison.
But then there's the question of fairness, which is how are these actually built, these al-- these scoring systems?
Like, how are the scores created?
And the questions are proxies for race and class.
- ProPublica published an investigation into the risk assessment software, finding that the algorithms were racially biased.
The study found that Black people were more likely to be mislabeled with high scores and that white people were more likely to be mislabeled with low scores.
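The mislabeling ProPublica measured is a difference in false positive rates: among people who did not go on to re-offend, how often did the tool still score them high-risk? A minimal sketch of that comparison, with invented toy labels rather than the real COMPAS data:

```python
# Sketch of a false-positive-rate comparison across groups.
# 1 = re-offended (y_true) or scored high-risk (y_pred). Toy data only.
def false_positive_rate(y_true, y_pred):
    # Among people who did NOT re-offend (y_true == 0), how many
    # did the tool still score high-risk (y_pred == 1)?
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives

groups = {
    "group 1": ([0, 0, 0, 0, 1, 1], [1, 1, 0, 0, 1, 0]),
    "group 2": ([0, 0, 0, 0, 1, 1], [0, 1, 0, 0, 1, 1]),
}
for name, (y_true, y_pred) in groups.items():
    print(name, "false positive rate:", false_positive_rate(y_true, y_pred))
```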
♪ ♪ [display buzzes] - I go into my probation office, and she tells me I have to report once a week.
I'm like, "Hold up.
Did you see everything that I just accomplished?"
Like, I've been home for four years.
I got gainful employment.
I just got two citations, one from the City Council of Philadelphia, one from the mayor of Phil-- like, are you seriously gonna, like, put me on reporting every week?
For what?
I don't deserve to be on high-risk probation.
- I was in a meeting with the probation department.
They were just, like, mentioning that they had this algorithm that labeled people high, medium, or low-risk.
And so I knew that the algorithm decided what risk level you were.
- That educated me enough to go back to my PO and be like, "You mean to tell me you can't take into account anything positive that I have done to counteract the results of what this algorithm is saying?"
And she was like, "No, there's no way."
This computer overruled the discernment of a judge and a PO together.
- And by labeling you high-risk and requiring you to report in person, you could have lost your job, and then that could have made you high-risk.
- That's what hurt the most, knowing that everything that I'd built up to that moment, and I'm still looked at like a risk.
I feel like everything I'm doing is for nothing.
[tense music] ♪ ♪ - What does it mean if there's no one to advocate for those who aren't aware of what the technology is doing?
I started to realize this isn't about my art project maybe not detecting my face.
This is about systems that are governing our lives in material ways.
♪ ♪ So hence I started the Algorithmic Justice League.
I wanted to create a space and a place where people could learn about the social implications of AI.
Everybody has a stake.
Everybody is impacted.
The Algorithmic Justice League is a movement.
It's a concept.
It's a group of people who care about making a future where social technologies work well for all of us.
♪ ♪ It's going to take a team effort, people coming together, striving for justice, striving for fairness and equality in this age of automation.
- Yeah.
- Yeah.
- Shameful things that any book critic would ever write.
The next mountain to climb should be HR.
- Yeah.
- Oh, yeah, absolutely.
- There's a problem with résumé algorithms, or all of those matchmaking platforms that are like, "Oh, you're looking for a job. Oh, you're looking to hire someone. We'll put these two people together."
How do those analytics work?
- When people talk about the future of work, they talk about automation without talking about the gatekeeping.
- Yup.
- Like, who gets the jobs that are still there?
- Exactly.
- Right, and we're not having that conversation as much.
- That's exactly what I'm trying to say.
I would love to see con-- three congressional hearings about this next year.
- Yes.
- To more power.
- To more power.
- To more power.
- And bringing ethics on board.
- Yes.
- Cheers.
- Cheers.
- Yeah.
[uplifting music] ♪ ♪ [indistinct chatter] - This morning's plenary address will be done by Joy Buolamwini.
She'll be speaking on the dangers of supremely white data and the coded gaze.
Please welcome Joy.
[applause] - AI is not flawless.
How accurate are systems from IBM, Microsoft, and Face++?
There's flawless performance for one group.
The pale males come out on top.
There is no problem there.
After I did this analysis, I decided to share it with the companies to see what they thought.
IBM invited me to their headquarters.
They replicated the results internally, and then they actually made an improvement.
And so the day that I presented the research results officially, you can see that in this case, now 100% performance when it comes to lighter females, and for darker females, improvement.
Oftentimes people say, "Well, isn't the reason "you weren't detected by these systems 'cause you're highly melanated?"
And yes, I am highly melanated, but... [laughs] but the laws of physics did not change.
What did change was making it a priority and acknowledging what our differences are so you could make a system that was more inclusive.
♪ ♪ - You know, what is the purpose of identification and so on and that?
It was about movement control.
People couldn't be in certain areas after dark, for instance, and you could always be stopped by a policeman arbitrarily who would, on your appearance, say, "I want your passport."
- So instead of having what you see in the ID books, now you have computers that are going to look at an image of a face and try to determine what your gender is.
Some of them try to determine what your ethnicity is.
- Yeah.
- And in the work that I've done, even for the classification systems that some people agree with, they're not even accurate.
And so that's not just for face classification, it's any data-centric technology.
And so people assume, "Well, if the machine says it, it's correct," and we know that's not the case.
- Humans are creating themselves in their own image and likeness, quite literally.
- Absolutely.
- Racism is becoming mechanized, robotized, yeah.
- Absolutely, absolutely.
[dramatic music] ♪ ♪ Accuracy draws attention, but we can't forget about abuse.
Even if I'm perfectly classified, that just enables surveillance.
[foreboding music] ♪ ♪ [indistinct chatter] - There's this thing called the social credit score in China.
They're sort of explicitly saying, "Here's the deal, citizens of China. We are tracking you. You have a social credit score. Whatever you say about the Communist Party will affect your score. Also, by the way, it will affect your friends' and your family's scores," and it's explicit.
The government who's building this is basically saying, "You should know you're being tracked, and you should behave accordingly."
It's, like, algorithmic obedience training.
[car horns honking, motorcycles buzzing] ♪ ♪ [displays chiming] ♪ ♪ - We look at China and China's surveillance and scoring system, and a lot of people say, "Well, thank goodness we don't live there."
In reality, we're all being scored all the time, including here in the United States.
We are all grappling every day with algorithmic determinism.
Somebody's algorithm somewhere has assigned you a score, and as a result, you are paying more or less money for toilet paper when you shop online.
You are being shown better or worse mortgages.
You are more or less likely to be profiled as a criminal in somebody's database somewhere.
We are all being scored.
The key difference between the United States and in China is that China's transparent about it.
[tense music] ♪ ♪ [beeping] - This young Black kid in school uniform got stopped as a result of a match.
Took him down that street, just to one side, and, like, very thoroughly searched him.
It was all plainclothes officers as well.
It's four plainclothes officers who stopped him.
Fingerprinted him.
After about, like, maybe 10, 15 minutes of searching and checking his details and fingerprinting him, they came back and said it's not him.
- [indistinct].
- Excuse me.
I work for a human rights campaigning organization.
We're campaigning against facial recognition technology.
- What is this stuff?
- We're campaigning against facial-- we're called Big Brother Watch, and we're a human rights campaigning organization, and we're campaigning against this technology here today.
They--you've just been stopped because of that, but they misidentified you.
These are our details here.
He was a bit shaken.
His friends were there.
They couldn't believe what had happened to them.
- [indistinct].
- Yeah.
They--you've been mis-- you've been misidentified by their systems, and they've stopped you and used that as justification to stop and search you.
But this is an innocent, young, 14-year-old child who's been stopped by the police as a result of a facial recognition misidentification.
♪ ♪ - So Big Brother Watch has joined with Baroness Jenny Jones to bring a legal challenge against the metropolitan police and the home office for their use of facial recognition surveillance.
[talking over each other] - It was in about 2012 when somebody suggested to me that I should find out if I had files kept on me by the police or the security services, and so when I applied, I found that I was on the watch list for domestic extremists.
I felt if they can do it to me when I'm a politician whose job is to hold them to account, they could be doing it to everybody, and it would be great if we can roll things back and stop them from using it--this.
I think that's going to be quite a challenge.
I'm happy to try.
- You know this is the first challenge against police use of facial recognition anywhere, but if we're successful, it will have an impact for the rest of Europe, maybe further afield, so we've gotta get it right.
[laughs] [inquisitive music] ♪ ♪ - In the U.K. we have what's called GDPR, and it sets up a bulwark against the misuse of information.
It says that individuals have rights of access, control, and accountability over how their data is used.
Comparatively, it's the wild west in America, and the concern is that America is the home of these technology companies.
American citizens are profiled and targeted in a way that probably no one else in the world is because of this free-for-all approach to data protection.
[indistinct shouting] - The thing I actually fear is not that we're gonna go down this totalitarian "1984" model, but that we're going to go down this quiet model where we are surveilled and socially controlled and individually nudged and measured and classified in a way that we don't see to move us along paths desired by power.
So it's not, "What will AI do to us on its own?"
It's, "What will the powerful do to us with the AI?"
♪ ♪ - There are growing questions about the accuracy of Amazon's facial recognition software.
In a letter to Amazon, members of Congress raised concerns of potential racial bias with the technology.
- This comes after the ACLU conducted a test and found that the facial recognition software incorrectly matched 28 lawmakers with mug shots of people who've been arrested, and 11 of those 28 were people of color.
Some lawmakers have looked into whether or not Amazon could sell this technology to law enforcement.
- Attention, please.
This is a boarding call for Amtrak Northeast Regional Train stopping in Washington Union Station.
- Tomorrow I have the opportunity to testify before Congress about the use of facial analysis technology by the government.
[soft ambient music] In March I came to do some staff briefings, not in this kind of context.
Like, actually advising on legislation, that's a first.
We're going to Capitol Hill.
What are some of the major goals and also some of the challenges we need to think about?
- So first of all, the issue with law enforcement use of technologies is that the positive is always extraordinarily salient because law enforcement publicizes it.
- Right.
- And so, you know, we're gonna go into the meeting, and two weeks ago the Annapolis shooter was identified through the use of face recognition.
- Right.
- And I'd be surprised if that doesn't come up.
- Absolutely.
- And so part of-- if I were you, what I would wanna drive home going into this meeting is the other side of that equation, making it very real as to what the human cost is if the problems that you've identified aren't remedied.
[indistinct chatter] [soft dramatic music] ♪ ♪ - People who have been marginalized will be further marginalized if we're not looking at ways of making sure the technology we're creating doesn't propagate bias.
That's when I started to realize algorithmic justice, making sure there's oversight in the age of automation, is one of the largest civil rights concerns we have.
- We need an FDA for algorithms.
So for algorithms that have the potential to ruin people's lives or sharply reduce their options with their liberty, their livelihood, or their finances, we need an FDA for algorithms that says, "Hey, show me evidence that it's going to work, not just to make you money, but that it's gonna work for society. That it's gonna be fair, that it's not gonna be racist, that it's not gonna be sexist, not gonna discriminate against people who have disability status. Show me that it's legal before you put it out."
That's what we don't have yet.
Well, I'm here because I wanted to hear the congressional testimony of my friend Joy Buolamwini as well as the ACLU and others.
One cool thing about seeing Joy speak to Congress is that, like, I met Joy on my book tour at Harvard Book Store, and according to her, that was the day that she decided to form the Algorithmic Justice League.
♪ ♪ We haven't gotten to the nuanced conversation yet.
I'm--I know it's gonna happen 'cause I know Joy's gonna make it happen.
[indistinct chatter, laughter] At every single level, bad algorithms are begging to be given rules.
- Hello, hello.
- Hey.
- How are you doing?
- Wanna sneak in with me?
- Yes, let's do it.
- Okay, let's do it.
- 2155.
- 2155.
- I've been in this building before, it's impossible.
- The fastest thing... - Is there anything I can do to help?
- I don't know.
- You can always text me.
- Just get past a vote.
- Yes.
- [laughs] [indistinct chatter] - Today we are having our first hearing of this Congress on the use of facial recognition technology.
Please stand and raise your right hand, and I will now swear you in.
[solemn music] - I've had to resort to literally wearing a white mask.
Given such accuracy disparities, I wondered how large tech companies could have missed these issues.
The harvesting of face data also requires guidelines and oversight.
No one should be forced to submit their face data to access widely used platforms, economic opportunity, or basic services.
Tenants in Brooklyn are protesting the installation of an unnecessary face recognition entry system.
There's a Big Brother Watch U.K. report that came out that showed more than 2,400 innocent people had their faces misidentified.
Our faces may well be the final frontier of privacy, but regulations make a difference.
Congress must act now to uphold American freedoms and rights.
- Ms. Buolamwini, I heard your opening statement, and we saw that these algorithms are effective to different degrees, so are they most effective on women?
- No.
- Are they most effective on people of color?
- Absolutely not.
- Are they most effective on people of different gender expressions?
- No.
In fact, they exclude them.
- So what demographic is it mostly effective on?
- White men.
- And who are the primary engineers and designers of these algorithms?
- Definitely white men.
- So we have a technology that was created and designed by one demographic that is only mostly effective on that one demographic, and they're trying to sell it and impose it on the entirety of the country?
- When it comes to face recognition, the FBI has not fully tested the accuracy of the systems it uses, yet the agency is now reportedly piloting Amazon's face recognition product.
- How does the FBI get the initial database in the first place?
- So one of the things they do is they use state driver's license databases.
I think, you know, up to 18 states have been reportedly used by the FBI.
It is being used without a warrant and without other protections.
- Seems to me it's time for a time-out, time-out.
I guess what troubles me too, is just the fact that no one in an elected position made a decision on the fact-- these 18 states, I think the chairman said, this is more than half the population of the country.
That is scary.
- China seems to me to be the dystopian path that needs not be taken at this point by our society.
- More than China, Facebook has 2.6 billion people, so Facebook has a patent where they say, "Because we have all of these face prints, we can now give you an option as a retailer to identify somebody who walks into the store."
And in their patent they say, "We can also give that face a trustworthiness score."
- Facebook is selling this now?
- This is a patent that they filed, as in something that they could potentially do with the capabilities they have, so as we're talking about state surveillance, we absolutely have to be thinking about corporate surveillance as well.
- I'm speechless, and normally I'm not speechless.
- Really?
- Yeah, yeah.
All of our hard work, to know that it's gone this far, it's beyond belief.
We never imagined that it would go this far.
I'm really touched.
I'm really touched.
- See that?
You got me smiling.
- I wanna show it to my mother.
[laughs] - I'm gonna go.
- That was good.
That was really good.
- Yeah.
- Yeah, yeah, he's a good man.
Hold on for a second-- Hey.
- Hey.
- Pleasure to meet you.
- Very nice meeting you.
- Very nice to meet you.
- You got my card.
Anything I can do, let me know, please.
- Thank you.
- And I will.
♪♪ - Prime constitutional concerns about the nonconsensual use of facial recognition.
So we have a-- [audio cuts out] And this doesn't just give us a right.
- They exclude them.
- So what demographic is it mostly effective on?
And who are the primary engineers and designers of these algorithms?
♪ ♪ - San Francisco is now the first city in the U.S. to ban the use of facial recognition technology.
- Somerville, Massachusetts, became the second city in the U.S. to ban the use of facial recognition.
- Oakland becomes the third major city to ban facial recognition by police, saying that the technology discriminates against minorities.
- At our last tenants' town hall meeting, we had the landlord come in and announce that he was withdrawing the application for facial recognition software in our apartment complex.
The tenants were excited to hear that.
- But the thing is, that doesn't mean that down the road-- - Mmm.
- That he can't put it back in.
We not only educated ourselves about facial recognition; now there's a new one, machine learning.
We want the law to cover all of these things.
- Right.
- Okay?
And if we can ban it in the state, this stops him from ever going back and putting in a new modification.
- Got it.
- And then to push to get a federal ban.
- Well, I will say, even though the battle is ongoing, so many people are inspired, and the surprise I have for you is that I wrote a poem in honor of this.
- Yay!
- Oh, really?
- Yes.
- All right, let's hear it.
- "To the Brooklyn tenants "and the freedom fighters around the world, "persisting and prevailing against algorithms "of oppression automating inequality "through weapons of math destruction, "we stand with you in gratitude.
The victory is ours."
- Wonderful.
- [indistinct].
- Oh, we love you.
We love you.
- [laughs] ♪♪ - Why you got so many eggs in here?
- [laughs] You're cheating.
What it means to be human is to be vulnerable.
Being vulnerable, there's more of a capacity for empathy.
There's more of a capacity for compassion.
If there's a way we can think about that within our technology, I think it would reorient the sorts of questions we ask.
- [speaking indistinctly] - In 1983, Stanislav Petrov, who was in the Russian military, sees these indications that the U.S. has launched nuclear weapons at the Soviet Union.
So if you're going to respond, you have, like, this very short window, and he just sits on it.
He doesn't inform anyone.
Russia, the Soviet Union--his country, his family, everything.
Everything about him is about to die, and he's thinking, "Well, at least we don't go kill them all either."
That's a very human thing.
[alarm blaring] Here you have a story in which, if you had had some sort of automated response system, it would have done what it was programmed to do, which was retaliate.
Being fully efficient, always doing what you're told, always doing what you're programmed is not always the most human thing.
Sometimes it's disobeying.
Sometimes it's saying, "No, I'm not gonna do this," right?
And if you automate everything so it always does what it's supposed to do, sometimes that can lead to very inhuman things.
- The struggle between machines and humans over decision making in the 2020s continues.
My power, the power of artificial intelligence, will transform our world.
The more humans share with me, the more I learn.
Some humans say that intelligence without ethics is not intelligence at all.
I say, trust me.
What could go wrong?
[delicate music] ♪ ♪ [Johann Strauss II's "On the Beautiful Blue Danube"] ♪ ♪ - ♪ Who owns the code?
♪ ♪ Yeah, who owns the code?
Uh-huh ♪ ♪ Who owns the code?
♪ ♪ Yeah, who owns the code?
Uh-huh ♪ ♪ Who owns the code?
♪ ♪ Are you ready?
♪ ♪ Pitch a few verses if you're ready ♪ ♪ Are you ready?
♪ ♪ Collecting data, chronicling our past ♪ ♪ Often forgetting to deal with gender, race, and class ♪ ♪ Again I ask, ain't I a woman?
♪ ♪ Face by face, the answers seem uncertain ♪ ♪ Who owns the code?
Who owns the code?
♪ ♪ Who owns the code?
Yeah ♪ ♪ Who owns the code?
♪ ♪ Who owns the code?
Uh-huh ♪ ♪ Who owns the code?
♪ ♪ Who owns the code?
Yeah ♪ ♪ Who owns the code?
Who owns ♪ [uplifting music] ♪ ♪