Firing Line
Fei-Fei Li
5/23/2025 | 26m 46s
AI pioneer Dr. Fei-Fei Li discusses ethical development of artificial intelligence and the challenge of establishing effective regulations. She addresses government funding of research, diversity in science, and ensuring child safety as AI advances.
- [Margaret] The godmother of AI, this week on "Firing Line."
- Artificial intelligence must benefit humanity.
(attendees cheering) (attendees applauding)
- [Margaret] That's Dr. Fei-Fei Li of Stanford University accepting a lifetime achievement award at this year's Webbys for her work on artificial intelligence.
Her lab works on computer vision, teaching computers to see and create 3D worlds.
She foresees both enormous benefits and enormous risks in this developing technology.
- Even when humans discover fire, it could be deadly.
It's true.
So every technology is a double-edged sword.
- [Margaret] Li co-founded Stanford's Institute for Human-Centered AI to keep the focus on improving human lives.
In Senate testimony in 2023, she warned that Congress needs to establish guardrails around the use of AI.
- While AI, like most technologies, promises to solve many problems for the common good, it can also be misused to cause harm.
It falls upon the US government to spearhead the ethical procurement and deployment of these systems.
- But Vice President JD Vance is pushing a different message.
- We believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off.
The AI future is not gonna be won by hand-wringing about safety.
It will be won by building.
- [Margaret] What does computer scientist Fei-Fei Li say now?
- [Announcer 1] "Firing Line with Margaret Hoover" is made possible in part by Robert Granieri, Vanessa and Henry Cornell, the Fairweather Foundation, Peter and Mary Kalikow, Cliff and Laurel Asness, the Meadowlark Foundation, the Beth and Ravenel Curry Foundation, and by the following.
Corporate funding is provided by Stephens Inc.
- Dr. Fei-Fei Li, welcome to "Firing Line."
- Thank you, Margaret.
I'm excited.
- We are now one decade into the artificial intelligence revolution, and I wanna know what you would say right now.
How intelligent is artificial intelligence?
- What a great question.
It's very intelligent.
But can it think like humans?
I don't think so yet.
It's rapidly advancing.
And some part of AI, artificial intelligence, is very advanced, like some of the language intelligence, but some part, it is nowhere compared to humans, like emotional intelligence or uniquely human creativity and all that.
- You have written that despite its name, there is nothing, quote, "artificial about this technology.
It is made by humans, intended to behave like humans, and affects humans."
In what sense is it not artificial?
- The human impact, the human interaction, its influence on our world, even in our human lives, for me those are not artificial.
- The new pope, Pope Leo XIV, is not optimistic necessarily about the prospects of AI to contribute to humanity.
He revealed that his papal name was actually inspired by former Pope Leo XIII, who led the Catholic church through the Industrial Revolution.
And he recently said, "In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to the developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor."
Does the new Pope have a point?
- Absolutely.
I totally agree with him in the sense that no matter what technology can do, human dignity is absolutely central to our civilization.
And this is the point I'm trying to make.
This is the reason I co-founded Stanford Human-Centered AI Institute, putting human in the center because I think technology is always a double-edged sword.
No technology will by itself do good or bad.
It can be used in both ways.
- I wanna point out though that you do have a pretty idealistic outlook of AI.
And it seems as though you are convinced that the technology can be human-centered in order to advance the human condition.
How can AI enhance the human condition?
- AI is a tool.
And I do believe humanity invents tool by and large with intention to make life better, make work better.
Most tools are invented with that intention, and so is AI.
About 12 years ago, as AI was taking off, I was thinking about what's my responsibility as an AI scientist, as the generation that brought this technology to humanity?
And it really became very important for me that I do have a responsibility beyond just coding codes and creating computer science technology, but really to do good with this technology.
And I think, for example, AI can help drug discovery.
AI can help making our patients safer.
AI can map out biodiversity.
AI can help us discover new materials.
AI can help social scientists sift through and learn from enormous amount of data to understand how economics work.
AI can help our government to be more efficient.
There's a lot AI could do to make life and work better.
- You helped found, as you mentioned, you helped found and are a co-director of the Stanford Institute for Human-Centered AI.
And you wrote in the New York Times in 2018 that, quote, "If we want to play a positive role in tomorrow's world, we must be guided by human concerns."
You have been concerned about bias in artificial intelligence.
- Mm-hmm.
- And you even mentioned in your book incidents of AI mislabeling Black Americans or Black people as gorillas.
Studies have shown self-driving cars are less likely to detect darker skin pedestrians.
Some AI images have generated imagery that is explicitly racist or sexist.
There are not very many women involved in AI.
There are not many people of color involved in AI.
Does that impact the algorithms?
Does that impact the output?
Does that impact society at large if it's not reflective on the front end of society at large?
- Yeah, you're totally right.
You know, look, AI is a technical system.
And when we designed this technical system, every step of the way, people are involved.
You know, some work is by curating data set, or labeling data, or designing algorithms.
All this, every step, people are involved.
So when we invite more people with different background, their insights, their knowledge, their emotional understanding of the downstream application will impact the input of the system.
- It seems to me that we're in a moment right now where there is an intense pressure to eliminate diversity, equity, and inclusion programs, to not think so much about the inherent diversity of the groups that we're participating in.
On some level, are you swimming upstream as you think about these inputs?
- Um, I think we all want a better world.
I really believe in this kind of common sense values that we want more people to benefit, we want more people to be involved.
Of course, implementation, how do we translate these beliefs into implementation?
And I still believe we need to involve as many people as possible.
- I still believe that students from all backgrounds, whether they're from rural communities, inner cities, artists-
- Immigrants?
- Immigrants, yes, girls, arts lovers, you know, future journalists, future lawyers, future doctors, they all should be learning AI and have a say in this technology.
- You talked about the importance of community collaboration.
You've talked about the various stakeholders as part of a human-centered AI approach.
Yet, the majority of investment in AI is coming from the private sector.
- Mm-hmm.
- And I've heard you say that, quote, "AI is too important to be owned by private industry alone."
How do you address that?
- Yeah, Margaret, I'm actually concerned about this.
On one hand, I absolutely take a lot of pride, especially in America, that our private industry is so vibrant in developing wonderful AI technologies.
And they are translating that into products that do help people.
On the other hand, this vibrancy that we see today from the private sector is a result of a very healthy ecosystem in the past decades, where the federal governments, public sector, academia, and private sector worked together to grow this technology together.
So, for me, the ecosystem is almost like a really healthy relay race, where the public sector and the academia takes the first baton and does the basic science research.
As we run more and more advanced, we pass that to industry.
And eventually, everybody in the society benefits.
- And yet the model now is the opposite.
- What's happening is that university has been so drained of resources.
You know, the chips are not in universities.
The data are very rarely available in universities.
And a lot of talents are going only into industry.
And we're not getting enough resourcing back into the academia.
This is where I get worried because training, a lot of good training, is done in universities.
Even if you look at today's big tech company, most of their talents come from programs that, you know, academic program that provided computer science education, PhD programs, master programs, and we still need that.
- It's worth mentioning that of course universities have been drained of resources.
Perhaps, you know, the other actor here is government investment.
- Yes.
- How important is the government's role in investing in AI?
- The government's role in investing in basic science is fundamental to our country and to our society, because in academia, the kind of curiosity-driven research produces public good.
And public good is in the form of knowledge expansion, scientific discovery, as well as talents.
When students come to the universities and study under the best researchers, getting to labs, go to lectures, that they can glean the latest knowledge, this is fundamentally a critical thing for our society.
- Do you think that's at risk in this environment?
- I think it's been, I've been saying this for, gosh, almost 10 years.
I'm seeing the draining of the resource, you know, starting quite a few years ago.
And I continue to be worried.
I continue to be advocating for a balanced ecosystem.
Again, I'm very excited what private sector is doing, but I'm equally excited that my colleagues at Stanford are discovering cure for cancer, are uncovering how the brain works, are listening to whales and understanding how they talk to each other and migrate across the ocean.
These are important knowledge and scientific discovery that we continue to need.
- You said earlier this year, quote, "It's essential that we govern on the basis of science and not science fiction."
- Yes.
- Can you give me an example of the wrong way to go about governance in AI?
- An example of the wrong way is starting with hyperbole, hyperbole of this technology would end humanity or this technology is only utopia, there's nothing you can do wrong with AI.
And these two things hardly exist for any technology.
And even when humans discover fire, it could be deadly.
It's true.
But it also has changed the way we live and eat in the early days to become stronger.
So every technology is a double-edged sword.
I think focusing on the hyperbole and driving policies through that lens is not very constructive to our society.
- So you've written that AI governance should, quote, "ensure its benevolent usage to guard against harmful outcomes."
- Mm-hmm.
- In practice, how do you advise policy makers to do that?
- Yeah, this is a topic we talked a lot about at Stanford Human-Centered AI Institute.
I really think a pragmatic approach that focuses on applications and ensuring a guardrail for safe deliverance of this technology is a good starting point.
For example, in medicine, right?
We have FDA, a regulatory framework.
Is it perfect?
No.
But it does a lot of the guardrailing to keep our consumers safe.
And as AI becomes more and more impactful in the area of food or drugs, we need to update FDA to answer to the new changes of these applications.
On the other hand, just because, this is an even better example, transportation.
You know, clearly, now we're getting closer and closer to self-driving cars, and we need the regulatory framework to be updated so that we can understand the guardrail, the accountability.
But just because there is potential harm doesn't mean we should stop creating cars.
Think about a hundred years ago.
There were fatal accidents, more fatal accidents than now using cars.
But instead of shutting down GM or Ford, we created seat belts and speed limits.
So good regulatory framework helps to keep the utility of the technology safe but also continues to encourage innovation.
- It's a hard balance to strike.
- It is, it is a hard balance.
- So how do we do it?
I mean, practically, how can we implement sort of that perfect balance?
- I think we begin with education and dialogue.
I get very worried when the hyperbolic voices gets amplified the most and the public only hears about the extremes.
And a lot of education and dialogue needs to be done between the tech world and the policy world.
This is why my institute go to Washington DC and talk to policy makers and lawmakers across the aisle.
This is too important to be a political topic.
And then keep the technologists and experts at the table as the policy is being made.
- The original version of this program, "Firing Line," which aired through the 1990s as the internet was emerging, dealt with the new technology of the time.
Listen to this paean to the internet on the original program in 1996 by John Barlow, who was a poet and an essayist, and he called himself a cyber libertarian.
Take a look.
- I come to you from cyberspace.
And that sounds, to you, like a ridiculous thing to say.
I mean, I must be some kind of cyberspace cadet, but I'm telling you that there is a social space that includes the entire geographical area of the Planet Earth and a fairly large and rapidly growing percentage of the Earth's population.
And there is a culture in there, and there is a way of understanding ideas in the exchange of ideas and the free market of ideas.
And those folks are not vulnerable to the excesses of the United States Congress.
We are free and sovereign from whatever the United States Congress may wish to impose on the rest of the human race.
- You know, as we look back at that era, it seems like one of the mistakes policy makers made was that they didn't anticipate or have a mechanism for dealing with the real risks that would develop, from the internet, from social media, from threats to privacy, and even threats to democracy.
- Yep.
- What lessons can we take from that era as we enter this age of artificial intelligence when it comes to a regulatory framework and governance?
- Yeah, it's kind of stunning to revisit that.
You know, it is great to be hopeful, to wanna use technology for good, to come from that right place, but we need to know that any technology can harm people.
And we cannot be naive about that.
You called me an idealist earlier.
I think I'm a pragmatist, you know?
I also see that we absolutely need to take into account the potential harm of this technology.
- The vice president of the United States, JD Vance, warned at an international AI summit earlier this year that excessive regulation could stifle the AI industry.
- [JD] The AI future is not gonna be won by hand-wringing about safety.
It will be won by building.
- As the government goes about crafting AI policies, how should we think about the balance between innovation and safety?
- I would love to see a governance model where the upstream scientific discovery, the research is encouraged because that's the innovation engine of our society.
But by the time this technology is closer and closer in the hands of consumers, and users, and small businesses, we do need to put guardrails around it to ensure it doesn't cause too much harm.
- Earlier this year, the Chinese startup DeepSeek unveiled a chatbot that has outperformed models that were developed here in the United States at a much lower cost.
And the breakthrough of course triggered concerns from policymakers in Washington that China could outpace the United States in AI development.
Does it matter where these advances are made?
- It matters what values we care about.
This is why I continue to come back to human-centered AI.
I love this line.
There's no independent machine values.
Machine values are human values.
If we're a society that believes in, we talk about dignity, agency, and liberty, then we know we need to create technology that doesn't harm these values.
- Sam Altman told senators that the future, quote, "can be almost unimaginably bright, but only if we take concrete steps to ensure that an American-led version of AI, built on democratic values like freedom and transparency, prevails over an authoritarian one."
Do you agree with that?
- I absolutely believe that democratic values are very important.
- The Chinese military has already reportedly started to integrate DeepSeek into non-combat tasks.
- With the world's most advanced militaries deploying this all-powerful tool, which of course won't always be human-centered in its priorities, what concerns you the most?
- Great question.
I honestly have a lot of concerns in AI.
If you focus on national security, of course I'm worried about AI harm to people, right?
Nobody wants harm.
Nobody wants wars.
Nobody wants families to be taken apart.
And, you know, I was a physics student when I was in Princeton.
- You were inspired by Einstein.
- Exactly.
So we have seen that technology can become harmful for people in warfare.
And obviously I don't wanna see that.
In the meantime, I'm also concerned about AI leaving people behind, the socioeconomic wellbeing.
So there's a lot I'm worried about.
- There's a lot to be worried about.
Listen, you're also, one more thing to worry about, you're also a mother.
- Yes.
- You have small children.
- Yeah.
- I do too.
Google recently announced that it would make Gemini chatbot available to children.
- Mm-hmm.
- Now it includes many safeguards and Google has still warned parents that it may encounter, you know, information or content that children don't wanna see.
But is AI ready to be placed in the hands of children?
- In general, I think anyone who is a learner, and our kids learn since the beginning of their life, should use AI as a tool.
I do believe that-
- And then how do you prevent the, you know, the loads of reporting that students are over-relying on chatbots to complete their papers, to cheat on their homework.
And the criticism is that the chatbots and the AI actually stifle the process of independent thinking and critical thinking developing.
- If that happens, it's the failure of education, not the failure of the students.
I believe that if we teach responsible tool-using, students will be superpowered by AI, by calculator, by computer.
You know, as mothers, we teach our kids to use fire.
Think about the day you teach them how to turn on the stove, right?
It's kind of frightening, but we still have to teach them.
They have to learn both the utility and the harm of fire.
The same thing with AI.
So I really think it's not constructive to just focus on students are cheating.
Students are cheating if we don't teach them well, if we're not creating a learning environment that they know how to use constructive tools.
I think we should absolutely incorporate AI into kids' learning, into classrooms.
This is a useful tool for us.
- Final question.
In an essay on artificial intelligence from 2018, Henry Kissinger wrote, "The most difficult yet important question about the world in which we are headed is this.
What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?"
I think the question is, for all that we stand to gain from AI, are we in danger of losing something fundamental about our humanity?
- Great question.
This comes to the word agency.
If we give up, we would.
If we give up to not just AI, if we give up to authoritarianism, if we give up to internet in a harmful way, we would lose our agency.
And AI is the same.
I don't think we should give up our agency.
- Fei-Fei Li, thank you for joining me on "Firing Line."
- Thank you, Margaret.
- [Announcer 1] "Firing Line with Margaret Hoover" is made possible in part by Robert Granieri, Vanessa and Henry Cornell, the Fairweather Foundation, Peter and Mary Kalikow, Cliff and Laurel Asness, the Meadowlark Foundation, the Beth and Ravenel Curry Foundation, and by the following.
Corporate funding is provided by Stephens Inc.
(lighthearted music)
- [Announcer 2] You're watching PBS.