[House Hearing, 116 Congress]
[From the U.S. Government Publishing Office]


               FOSTERING A HEALTHIER INTERNET TO PROTECT 
                              CONSUMERS

=======================================================================

                             JOINT HEARING

                               BEFORE THE

             SUBCOMMITTEE ON COMMUNICATIONS AND TECHNOLOGY

                                AND THE

            SUBCOMMITTEE ON CONSUMER PROTECTION AND COMMERCE

                                 OF THE

                    COMMITTEE ON ENERGY AND COMMERCE
                        HOUSE OF REPRESENTATIVES

                     ONE HUNDRED SIXTEENTH CONGRESS

                             FIRST SESSION

                               __________

                            OCTOBER 16, 2019

                               __________

                           Serial No. 116-69
                           
[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]                           


      Printed for the use of the Committee on Energy and Commerce

                   govinfo.gov/committee/house-energy
                        energycommerce.house.gov
                        
                               __________

                    U.S. GOVERNMENT PUBLISHING OFFICE                    
43-533 PDF                 WASHINGTON : 2021                     
          
-----------------------------------------------------------------------------------                         
                        
                    COMMITTEE ON ENERGY AND COMMERCE

                     FRANK PALLONE, Jr., New Jersey
                                 Chairman
BOBBY L. RUSH, Illinois              GREG WALDEN, Oregon
ANNA G. ESHOO, California              Ranking Member
ELIOT L. ENGEL, New York             FRED UPTON, Michigan
DIANA DeGETTE, Colorado              JOHN SHIMKUS, Illinois
MIKE DOYLE, Pennsylvania             MICHAEL C. BURGESS, Texas
JAN SCHAKOWSKY, Illinois             STEVE SCALISE, Louisiana
G. K. BUTTERFIELD, North Carolina    ROBERT E. LATTA, Ohio
DORIS O. MATSUI, California          CATHY McMORRIS RODGERS, Washington
KATHY CASTOR, Florida                BRETT GUTHRIE, Kentucky
JOHN P. SARBANES, Maryland           PETE OLSON, Texas
JERRY McNERNEY, California           DAVID B. McKINLEY, West Virginia
PETER WELCH, Vermont                 ADAM KINZINGER, Illinois
BEN RAY LUJAN, New Mexico            H. MORGAN GRIFFITH, Virginia
PAUL TONKO, New York                 GUS M. BILIRAKIS, Florida
YVETTE D. CLARKE, New York, Vice     BILL JOHNSON, Ohio
    Chair                            BILLY LONG, Missouri
DAVID LOEBSACK, Iowa                 LARRY BUCSHON, Indiana
KURT SCHRADER, Oregon                BILL FLORES, Texas
JOSEPH P. KENNEDY III,               SUSAN W. BROOKS, Indiana
    Massachusetts                    MARKWAYNE MULLIN, Oklahoma
TONY CARDENAS, California            RICHARD HUDSON, North Carolina
RAUL RUIZ, California                TIM WALBERG, Michigan
SCOTT H. PETERS, California          EARL L. ``BUDDY'' CARTER, Georgia
DEBBIE DINGELL, Michigan             JEFF DUNCAN, South Carolina
MARC A. VEASEY, Texas                GREG GIANFORTE, Montana
ANN M. KUSTER, New Hampshire
ROBIN L. KELLY, Illinois
NANETTE DIAZ BARRAGAN, California
A. DONALD McEACHIN, Virginia
LISA BLUNT ROCHESTER, Delaware
DARREN SOTO, Florida
TOM O'HALLERAN, Arizona
                                 ------                                

                           Professional Staff

                   JEFFREY C. CARROLL, Staff Director
                TIFFANY GUARASCIO, Deputy Staff Director
                MIKE BLOOMQUIST, Minority Staff Director
             Subcommittee on Communications and Technology

                        MIKE DOYLE, Pennsylvania
                                 Chairman
JERRY McNERNEY, California           ROBERT E. LATTA, Ohio
YVETTE D. CLARKE, New York             Ranking Member
DAVID LOEBSACK, Iowa                 JOHN SHIMKUS, Illinois
MARC A. VEASEY, Texas                STEVE SCALISE, Louisiana
A. DONALD McEACHIN, Virginia         PETE OLSON, Texas
DARREN SOTO, Florida                 ADAM KINZINGER, Illinois
TOM O'HALLERAN, Arizona              GUS M. BILIRAKIS, Florida
ANNA G. ESHOO, California            BILL JOHNSON, Ohio
DIANA DeGETTE, Colorado              BILLY LONG, Missouri
G. K. BUTTERFIELD, North Carolina    BILL FLORES, Texas
DORIS O. MATSUI, California, Vice    SUSAN W. BROOKS, Indiana
    Chair                            TIM WALBERG, Michigan
PETER WELCH, Vermont                 GREG GIANFORTE, Montana
BEN RAY LUJAN, New Mexico            GREG WALDEN, Oregon (ex officio)
KURT SCHRADER, Oregon
TONY CARDENAS, California
DEBBIE DINGELL, Michigan
FRANK PALLONE, Jr., New Jersey (ex 
    officio)
                                 ------                                

            Subcommittee on Consumer Protection and Commerce

                        JAN SCHAKOWSKY, Illinois
                                Chairwoman
KATHY CASTOR, Florida                CATHY McMORRIS RODGERS, Washington
MARC A. VEASEY, Texas                  Ranking Member
ROBIN L. KELLY, Illinois             FRED UPTON, Michigan
TOM O'HALLERAN, Arizona              MICHAEL C. BURGESS, Texas
BEN RAY LUJAN, New Mexico            ROBERT E. LATTA, Ohio
TONY CARDENAS, California, Vice      BRETT GUTHRIE, Kentucky
    Chair                            LARRY BUCSHON, Indiana
LISA BLUNT ROCHESTER, Delaware       RICHARD HUDSON, North Carolina
DARREN SOTO, Florida                 EARL L. ``BUDDY'' CARTER, Georgia
BOBBY L. RUSH, Illinois              GREG GIANFORTE, Montana
DORIS O. MATSUI, California          GREG WALDEN, Oregon (ex officio)
JERRY McNERNEY, California
DEBBIE DINGELL, Michigan
FRANK PALLONE, Jr., New Jersey (ex 
    officio)
                             C O N T E N T S

                              ----------                              
                                                                   Page
Hon. Mike Doyle, a Representative in Congress from the 
  Commonwealth of Pennsylvania, opening statement................     2
    Prepared statement...........................................     3
Hon. Robert E. Latta, a Representative in Congress from the State 
  of Ohio, opening statement.....................................     4
    Prepared statement...........................................     5
Hon. Jan Schakowsky, a Representative in Congress from the State 
  of Illinois, opening statement.................................     6
    Prepared statement...........................................     8
Hon. Cathy McMorris Rodgers, a Representative in Congress from 
  the State of Washington, opening statement.....................     8
    Prepared statement...........................................    10
Hon. Frank Pallone, Jr., a Representative in Congress from the 
  State of New Jersey, opening statement.........................    11
    Prepared statement...........................................    12
Hon. Greg Walden, a Representative in Congress from the State of 
  Oregon, opening statement......................................    13
    Prepared statement...........................................    15
Hon. Anna G. Eshoo, a Representative in Congress from the State 
  of California, prepared statement..............................   125

                               Witnesses

Steve Huffman, Cofounder and Chief Executive Officer, Reddit, 
  Inc............................................................    17
    Prepared statement...........................................    20
    Answers to submitted questions...............................   232
Danielle Keats Citron, Professor of Law, Boston University School 
  of Law.........................................................    24
    Prepared statement...........................................    26
    Answers to submitted questions...............................   242
Corynne McSherry, Ph.D., Legal Director, Electronic Frontier 
  Foundation.....................................................    37
    Prepared statement...........................................    39
    Answers to submitted questions...............................   253
Gretchen Peters, Executive Director, Alliance to Counter Crime 
  Online.........................................................    57
    Prepared statement...........................................    59
    Answers to submitted questions...............................   262
Katherine Oyama, Global Head of Intellectual Property Policy, 
  Google...........................................................    65
    Prepared statement...........................................    67
    Answers to submitted questions...............................   269
Hany Farid, Ph.D., Professor, University of California, Berkeley.    77
    Prepared statement...........................................    79
    Answers to submitted questions...............................   284

                           Submitted Material

Letter of October 15, 2019, from Carl Szabo, Vice President and 
  General Counsel, NetChoice, to subcommittee members, submitted by 
  Mr. McNerney...................................................   126
Statement of the Electronic Frontier Foundation, ``Could Platform 
  Safe Harbors Save the NAFTA Talks?,'' January 23, 2018, 
  submitted by Mr. Bilirakis.....................................   147
Letter of October 14, 2019, from Ruth Vitale, Chief Executive 
  Officer, CreativeFuture, to Mr. Pallone and Mr. Walden, 
  submitted by Ms. Schakowsky\1\
Letter of October 16, 2019, from Chip Rogers, President and Chief 
  Executive Officer, American Hotel & Lodging Association, to Mr. 
  Pallone and Mr. Walden, submitted by Ms. Schakowsky............   150
Letter, undated, from Michael Petricone, Senior Vice President, 
  Consumer Technology Association, to Mr. Pallone, et al., 
  submitted by Ms. Schakowsky....................................   152
Letter of September 9, 2019, from Steve Shur, President, The 
  Travel Technology Association, to Mr. Pallone, et al., 
  submitted by Ms. Schakowsky....................................   155
Statement of Airbnb, ``Airbnb & the Communications Decency Act 
  Section 230,'' submitted by Ms. Schakowsky.....................   158
Letter of October 15, 2019, from James P. Steyer, Founder and 
  Chief Executive Officer, Common Sense Media, to Mr. Pallone and 
  Mr. Walden, submitted by Ms. Schakowsky........................   160
Letter of October 15, 2019, from the Computer & Communications 
  Industry Association, et al., to Mr. Pallone, et al., submitted 
  by Ms. Schakowsky..............................................   164
Letter of October 16, 2019, from Hon. Ed Case, a Representative 
  in Congress from the State of Hawaii, to Mr. Doyle, et al., 
  submitted by Ms. Schakowsky....................................   166
Letter of October 10, 2019, from American Family Voices, et al., 
  to Members of Congress, submitted by Ms. Schakowsky............   169
Statement of the Internet Infrastructure Coalition, October 16, 
  2019, submitted by Ms. Schakowsky..............................   171
Letter of May 3, 2019, from Hon. Steve Daines, a U.S. Senator 
  from the State of Montana, and Mr. Gianforte, to Sundar Pichai, 
  Chief Executive Officer, Google, submitted by Ms. Schakowsky...   174
Letter of October 15, 2019, from Berin Szoka, President, 
  TechFreedom, to Mr. Pallone and Mr. Walden, submitted by Ms. 
  Schakowsky.....................................................   175
Letter of October 16, 2019, from Michael Beckerman, President and 
  Chief Executive Officer, the Internet Association, to Mr. 
  Pallone and Mr. Walden, submitted by Ms. Schakowsky............   190
Letter of October 15, 2019, from the Wikimedia Foundation to Mr. 
  Pallone, et al., submitted by Ms. Schakowsky...................   192
Statement of the Motion Picture Association, Inc., by Neil Fried, 
  Senior Vice President and Senior Counsel, October 16, 2019, 
  submitted by Ms. Schakowsky....................................   196
Article of September 7, 2017, ``Searching for Help: She turned to 
  Google for help getting sober. Then she had to escape a 
  nightmare,'' by Cat Ferguson, The Verge, submitted by Ms. 
  Schakowsky.....................................................   199
Statement of R Street Institute by Jeffrey Westling, Fellow, 
  Technology and Innovation, et al., October 16, 2019, submitted 
  by Ms. Schakowsky..............................................   217


----------

\1\ The letter has been retained in committee files and also is 
available at https://docs.house.gov/meetings/IF/IF16/20191016/110075/
HHRG-116-IF16-20191016-SD005.pdf.

 
          FOSTERING A HEALTHIER INTERNET TO PROTECT CONSUMERS

                              ----------                              


                      WEDNESDAY, OCTOBER 16, 2019

                  House of Representatives,
      Subcommittee on Communications and Technology
                             joint with the
  Subcommittee on Consumer Protection and Commerce,
                          Committee on Energy and Commerce,
                                                    Washington, DC.
    The subcommittees met, pursuant to notice, at 10:02 a.m., 
in the John D. Dingell Room 2123, Rayburn House Office 
Building, Hon. Mike Doyle (chairman of the Subcommittee on 
Communications and Technology) and Hon. Jan Schakowsky 
(chairwoman of the Subcommittee on Consumer Protection and 
Commerce) presiding.
    Members present: Representatives Doyle, Schakowsky, Eshoo, 
DeGette, Matsui, Castor, McNerney, Welch, Clarke, Loebsack, 
Schrader, Cardenas, Dingell, Veasey, Kelly, Blunt Rochester, 
Soto, O'Halleran, Pallone (ex officio), Latta (Subcommittee on 
Communications and Technology ranking member), Rodgers 
(Subcommittee on Consumer Protection and Commerce ranking 
member), Shimkus, Burgess, Guthrie, Kinzinger, Bilirakis, 
Johnson, Bucshon, Brooks, Hudson, Walberg, Carter, Gianforte, 
and Walden (ex officio).
    Staff present: AJ Brown, Counsel; Jeffrey C. Carroll, Staff 
Director; Sharon Davis, Chief Clerk; Parul Desai, FCC Detailee; 
Evan Gilbert, Deputy Press Secretary; Lisa Goldman, Senior 
Counsel; Tiffany Guarascio, Deputy Staff Director; Alex Hoehn-
Saric, Chief Counsel, Communications and Consumer Protection; 
Zach Kahan, Outreach and Member Service Coordinator; Jerry 
Leverich, Senior Counsel; Dan Miller, Senior Policy Analyst; 
Phil Murphy, Policy Coordinator; Joe Orlando, Executive 
Assistant; Alivia Roberts, Press Assistant; Tim Robinson, Chief 
Counsel; Chloe Rodriguez, Policy Analyst; Andrew Souvall, 
Director of Communications, Outreach, and Member Services; 
Sydney Terry, Policy Coordinator; Rebecca Tomilchik, Staff 
Assistant; Mike Bloomquist, Minority Staff Director; Michael 
Engel, Minority Detailee, Communications and Technology; Bijan 
Koohmaraie, Minority Deputy Chief Counsel, Consumer Protection 
and Commerce; Tim Kurth, Minority Deputy Chief Counsel, 
Communications and Technology; Brannon Rains, Minority 
Legislative Clerk; Evan Viau, Minority Professional Staff 
Member, Communications and Technology; and Nate Wilkins, 
Minority Fellow, Communications and Technology.
    Mr. Doyle. The committee will now come to order. The Chair 
now recognizes himself for 5 minutes for an opening statement.

   OPENING STATEMENT OF HON. MIKE DOYLE, A REPRESENTATIVE IN 
         CONGRESS FROM THE COMMONWEALTH OF PENNSYLVANIA

    Online content moderation has largely enabled the internet 
experience that we know today. Whether it is looking up 
restaurant reviews on Yelp, catching up on ``SNL'' on YouTube, 
or checking in on a friend or a loved one on social media, 
these are all experiences that we have come to know and rely 
on. And the platforms we go to to do these things have been 
enabled by user-generated content as well as the ability of 
these companies to moderate that content and create 
communities.
    Section 230 of the Communications Decency Act has enabled 
that ecosystem to evolve. By giving online companies the 
ability to moderate content without equating them to the 
publisher or speaker of that content, we have enabled the 
creation of massive online communities of millions and billions 
of people to come together and interact.
    Today, this committee will be examining that world that 
Section 230 has enabled, both the good and the bad.
    I would like to thank the witnesses for appearing before us 
today. Each of you represents important perspectives related to 
content moderation and the online ecosystem.
    Many of you bring up complex concerns in your testimony, 
and I agree that this is a complex issue. I know that some of 
you have argued that Congress should amend 230 to address 
things such as online criminal activity, disinformation, and 
hate speech, and I agree these are serious issues.
    Like too many other communities, my hometown of Pittsburgh 
has seen what unchecked hate can lead to. Almost a year ago, 
our community suffered the most deadly attack on Jewish 
Americans in our Nation's history. The shooter did so after 
posting a series of anti-Semitic remarks on a fringe site 
before finally posting that he was ``going in.''
    A similar attack occurred in New Zealand, and the gunman 
streamed his despicable acts on social media sites. And while 
some of these sites moved to quell the spread of that content, 
many didn't move fast enough, and the algorithms meant to help 
sports highlights and celebrity selfies go viral helped amplify 
a heinous act.
    In 2016, we saw similar issues when foreign adversaries 
used the power of these platforms against us to disseminate 
disinformation and foment doubt in order to sow division and 
instill distrust in our leaders and institutions.
    Clearly, we all need to do better, and I would strongly 
encourage the witnesses before us that represent these online 
platforms and other major platforms to step up.
    The other witnesses on the panel bring up serious concerns 
with the kind of content available on your platforms and the 
impact that content is having on society. And as they point 
out, some of those impacts are very disturbing. You must do 
more to address these concerns.
    That being said, Section 230 doesn't just protect the 
largest platforms or the most fringe websites. It enables 
comment sections on individual blogs, people to leave honest 
and open reviews, and free and open discussion about 
controversial topics.
    The kind of ecosystem that has been enabled by more open 
online discussions has enriched our lives and our democracy. 
The ability of individuals to have voices heard, particularly 
marginalized communities, cannot be understated. The ability of 
people to post content that speaks truth to power has created 
political movements in this country and others that have 
changed the world we live in. We all need to recognize the 
incredible power this technology has for good as well as the 
risks we face when it is misused.
    I want to thank you all again for being here, and I look 
forward today to our discussion.
    [The prepared statement of Mr. Doyle follows:]

                 Prepared Statement of Hon. Mike Doyle

    Online content moderation has largely enabled the internet 
experience we know today. Whether it's looking up restaurant 
reviews on Yelp, catching up on S-N-L on YouTube, or checking 
in on a friend or loved one on social media, these are all 
experiences we have come to rely on.
    And the platforms we go to do these things have been 
enabled by user-generated content, as well as the ability of 
these companies to moderate that content and create 
communities.
    Section 230 of the Communications Decency Act has enabled 
that ecosystem to evolve.
    By giving online companies the ability to moderate content 
without equating them to the publisher or speaker of that 
content, we've enabled the creation of massive online 
communities of millions and billions of people who can come 
together and interact.
    Today, this committee will be examining the world that 
Section 230 has enabled--both the good and the bad.
    I'd like to thank the witnesses for appearing before us 
today. Each of you represents important perspectives related to 
content moderation in the online ecosystem. Many of you bring 
up complex concerns in your testimony, and I agree that this is a 
complicated issue.
    I know some of you have argued that Congress should amend 
230 to address things such as online criminal activity, 
disinformation, and hate speech; and I agree that these are 
serious issues.
    Like too many other communities, my hometown of Pittsburgh 
has seen what unchecked hate can lead to.
    Almost a year ago, our community suffered the most deadly 
attack on Jewish Americans in our nation's history; the shooter 
did so after posting a series of anti-Semitic remarks on a 
fringe site before finally posting that he was ``going in.''
    A similar attack occurred in New Zealand, and the gunman 
streamed his despicable acts on social media sites. And while 
some of these sites moved to quell the spread of this content, 
many didn't move fast enough. And the algorithms meant to help 
sports highlights and celebrity selfies go viral helped amplify 
a heinous act.
    In 2016, we saw similar issues, when foreign adversaries 
used the power of these platforms against us to disseminate 
disinformation and foment doubt in order to sow division and 
instill distrust in our leaders and institutions.
    Clearly, we all need to do better, and I would strongly 
encourage the witnesses before us who represent online 
platforms and other major platforms to step up.
    The other witnesses on the panel bring up serious concerns 
with the kinds of content available on your platforms and the 
impact that content is having on our society. And as they point 
out, some of those impacts are very disturbing. You must do 
more to address these concerns.
    That being said, Section 230 doesn't just protect the 
largest platforms or the most fringe websites.
    It enables comment sections on individual blogs, honest and 
open reviews of goods and services, and free and open 
discussion about controversial topics.
    It has enabled the kind of ecosystem that, by producing 
more open online discussions, has enriched our lives and our 
democracy.
    The ability of individuals to have their voices heard, 
particularly marginalized communities, cannot be understated.
    The ability of people to post content that speaks truth to 
power has created political movements in this country and 
others that have changed the world we live in.
    We all need to recognize the incredible power this 
technology has had for good, as well as the risks we face when 
it's misused.
    Thank you all again for being here and I look forward to 
our discussion.
    I yield 1 minute to my good friend Ms. Matsui.

    Mr. Doyle. And I would now like to yield the balance of my 
time to my good friend, Ms. Matsui.
    Ms. Matsui. Thank you, Mr. Chairman.
    I want to thank the witnesses for being here today.
    In April 2018, Mark Zuckerberg came before Congress and 
said, ``It was my mistake, and I am sorry'' when pushed about 
Facebook's role in allowing Russia to influence the 2016 
Presidential election.
    Fast forward 555 days. I fear that Mr. Zuckerberg may not 
have learned from his mistake. Recent developments confirm what 
we have all feared. Facebook will continue to allow ads that 
push falsehoods and lies, once again making its online 
ecosystem fertile ground for election interference in 2020.
    The decision to remove blatantly false information should 
not be a difficult one. The choice between deepfakes, hate 
speech, online bullies, and a fact-driven debate should be 
easy. If Facebook doesn't want to play referee about the truth 
in political speech, then they should get out of the game.
    I hope this hearing produces a robust discussion, because 
we need it now more than ever.
    Mr. Chairman, I yield back. Thank you.
    Mr. Doyle. Thank you. The gentlelady yields back.
    The Chair now recognizes Mr. Latta, the ranking member for 
the subcommittee, for 5 minutes for his opening statement.

OPENING STATEMENT OF HON. ROBERT E. LATTA, A REPRESENTATIVE IN 
                CONGRESS FROM THE STATE OF OHIO

    Mr. Latta. Well, thank you, Mr. Chairman, for holding 
today's hearing.
    And thank you very much to our witnesses for appearing 
before us. And, again, welcome to today's hearing on content 
moderation and a review of Section 230 of the Communications 
Decency Act.
    This hearing is a continuation of a serious discussion we 
began last session as to how Congress should examine the law 
and ensure accountability and transparency for the hundreds of 
millions of Americans using the internet today.
    We have an excellent panel of witnesses that represent a 
balanced group of stakeholders who perform work closely tied to 
Section 230. They range from large to small companies as well 
as academics and researchers.
    Let me be clear: I am not advocating that Congress repeal 
the law, nor am I advocating that Congress consider niche 
carveouts that could lead to a slippery slope of the death by a 
thousand cuts that some have argued would upend the internet 
industry if the law was entirely repealed.
    But before we discuss whether or not Congress should make 
modest, nuanced modifications to the law, we should first 
understand how we got to this point. It is important to look at 
Section 230 in context and when it was written. At the time, 
the decency portion of the Telecom Act of 1996 included other 
prohibitions on objectionable or lewd content that polluted the 
early internet. Provisions that were written to target obscene 
content were ultimately struck down by the Supreme Court, but 
the Section 230 provisions remained.
    Notably, CDA 230 was intended to encourage internet 
platforms--then ``interactive computer services'' like CompuServe 
and America Online--to proactively take down offensive content. 
As Chris Cox stated on the House floor, ``We want to encourage 
people like Prodigy, like CompuServe, like America Online, like 
the new Microsoft Network, to do everything possible for us, 
the consumer, to help us control, at the portals of our 
computer, at the front door of our house, what comes in and 
what our children see.''
    It is unfortunate, however, that the courts took such a 
broad interpretation of Section 230, simply granting a broad 
liability protection without platforms having to demonstrate 
that they are doing, quote, ``everything possible.'' Instead of 
encouraging use of the sword that Congress envisioned, numerous 
platforms have hidden behind the shield and used procedural 
tools to avoid litigation without having to take 
responsibility.
    Not only are Good Samaritans sometimes being selective in 
taking down harmful or illegal activity, but Section 230 has 
been interpreted so broadly that bad Samaritans can skate by 
without accountability.
    That is not to say all platforms never use the tools 
afforded by Congress. Many do great things. Many of the bigger 
platforms remove billions, and that is with a ``b,'' of accounts 
annually. But oftentimes these instances are the exception, not 
the rule.
    Today we will dig deeper into those examples and learn how 
platforms decide to remove content, whether it is with the 
tools provided by Section 230 or with their own self-
constructed terms of service. Under either authority, we should 
be encouraging enforcement to continue.
    Mr. Chairman, I thank you for holding this important 
hearing so that we can have an open discussion on Congress' 
intent of CDA 230 and if we should reevaluate the law. We must 
ensure that platforms are held reasonably accountable for 
activity on their platform without drastically affecting the 
innovative startups.
    And with that, Mr. Chairman, I yield back the balance of my 
time.
    [The prepared statement of Mr. Latta follows:]

               Prepared Statement of Hon. Robert E. Latta

    Welcome to today's hearing on content moderation and a 
review of Section 230 of the Communications Decency Act. This 
hearing is a continuation of a serious discussion we began last 
session as to how Congress should examine the law and ensure 
accountability and transparency for the hundreds of millions of 
Americans using the internet today.
    We have an excellent panel of witnesses that represent a 
balanced group of stakeholders who perform work closely tied to 
Section 230--this well-respected group ranges from big 
companies to small companies, as well as academics and 
researchers.
    Let me be clear, I am not advocating that Congress repeal 
the law. Nor am I advocating for Congress to consider niche 
``carveouts'' that could lead to a slippery slope of the 
``death-by-a-thousand-cuts'' that some have argued would upend 
the internet industry as if the law were repealed entirely. But 
before we discuss whether or not Congress should make modest, 
nuanced modifications to the law, we first should understand 
how we've gotten to this point.
    It's important to take Section 230 in context of when it 
was written. At the time, the ``decency'' portion of the 
Telecom Act of 1996 included other prohibitions on 
objectionable or lewd content that polluted the early internet. 
Provisions that were written to target obscene content were 
ultimately struck down at the Supreme Court, but the Section 
230 provisions remained.
    Notably, CDA 230 was intended to encourage internet 
platforms--then, ``interactive computer services'' like 
CompuServe and America Online--to proactively take down 
offensive content. As Chris Cox stated on the floor of the 
House, ``We want to encourage people like Prodigy, like 
CompuServe, like America Online, like the new Microsoft 
Network, to do everything possible for us, the customer, to 
help us control, at the portals of our computer, at the front 
door of our house, what comes in and what our children see.''
    It is unfortunate, however, that the courts took such a 
broad interpretation of Section 230, simply granting broad 
liability protection without platforms having to demonstrate 
that they are doing, quote, ``everything possible.'' Instead of 
encouraging use of the sword that Congress envisioned, numerous 
platforms have hidden behind the shield and used procedural 
tools to avoid litigation without having to take any 
responsibility. Not only are ``Good Samaritans'' sometimes 
being selective in taking down harmful or illegal activity, but 
Section 230 has been interpreted so broadly that ``bad 
Samaritans'' can skate by without accountability, too.
    That's not to say all platforms never use the tools 
afforded them by Congress; many do great things. Some of the 
bigger platforms remove billions--with a B--of accounts annually. 
But oftentimes, these instances are the exception, not the 
rule. Today we will dig deeper into those examples to learn how 
platforms decide to remove content--whether it's with the tools 
provided by Section 230 or with their own self-constructed 
terms of service. Under either authority, we should be 
encouraging enforcement to continue.
    Mr. Chairman, I thank you for holding this important 
hearing so that we can have an open discussion on Congress' 
intent of CDA 230 and if we should reevaluate the law. We must 
ensure platforms are held reasonably accountable for activity 
on their platform, without drastically affecting innovative 
startups.
    Thank you, I yield back.

    Mr. Doyle. The gentleman yields back.
    I should have mentioned this is a joint hearing between our 
subcommittee and the Subcommittee on Consumer Protection and 
Commerce. And I would like to recognize the chair of that 
subcommittee for 5 minutes, Ms. Schakowsky.

 OPENING STATEMENT OF HON. JAN SCHAKOWSKY, A REPRESENTATIVE IN 
              CONGRESS FROM THE STATE OF ILLINOIS

    Ms. Schakowsky. Thank you, Mr. Chairman.
    And good morning, and thank all the panelists for being 
here today.
    The internet certainly has improved our lives in many, many 
ways and enabled Americans to more actively participate in 
society, education, and commerce.
    Section 230 of the Communications Decency Act has been at 
the heart of the United States' internet policy for over 20 
years. Many say that this law allowed free speech to flourish, 
allowing the internet to grow into what it is today.
    In the early days of the internet, it was intended to 
encourage online platforms to moderate user-generated content, 
to remove offensive, dangerous, or illegal content.
    The internet has come a long way since the law was first 
enacted. The amount and sophistication of user postings has 
increased exponentially.
    Unfortunately, the number of Americans who report 
experiencing extreme online harassment, which includes sexual 
harassment, stalking, bullying, and threats of violence, has 
gone up over the last 2 years; 37 percent of users say that 
they have experienced such harassment this year. Likewise, 
extremism, hate speech, election interference, and other 
problematic content is proliferating.
    The spread of such content is problematic, that is for 
sure, and actually causes some real harm that multibillion-
dollar companies like Facebook, Google, and Twitter can't or 
won't fix.
    And if this weren't enough cause for concern, more for-
profit businesses are attempting to use Section 230 as a 
liability shield for activities that have nothing to do with 
third-party content or content moderation policies.
    In a recent Washington Post article, Uber executives seemed 
to be opening the door to claiming vast immunity from labor, 
criminal, and local traffic liability based on Section 230. 
This would represent a major unraveling of 200 years of social 
contracts, community governance, and congressional intent.
    Also at issue is the Federal Trade Commission's Section 5 
authority on unfair or deceptive practices. The FTC pursues 
Section 5 cases on website-generated content, but the terms of 
service violations for third-party content may also be 
precluded by the 230 immunity.
    I wanted to talk a bit about injecting 230 into trade 
agreements. It seems to me that we have already seen that now 
in the Japan trade agreement, and there is a real push to 
include that now in the Mexico-Canada-U.S. trade agreement. 
There is no place for that. I think that the laws in these 
other countries don't really accommodate what the United States 
has done about 230.
    The other thing is, we are having a discussion right now, 
an important conversation about 230, and in the midst of that 
conversation, because of all the new developments, I think it 
is just inappropriate right now at this moment to insert this 
liability protection into trade agreements.
    As a member of the working group that is helping to 
negotiate that agreement, I am pushing hard to make sure that 
it just isn't there. I don't think we need to have any 
adjustment to 230. It just should not be in trade agreements.
    So all of the issues that we are talking about today 
indicate that there may be a larger problem that 230 no longer 
is achieving the goal of encouraging platforms to protect their 
users. And today I hope that we can discuss holistic solutions, 
not talking about eliminating 230 but taking a new look at it 
in light of the many changes that we are seeing in the world of 
big tech right now.
    I look forward to hearing from our witnesses about how the 
internet can be made even better for consumers.
    And I yield back. Thank you.
    [The prepared statement of Ms. Schakowsky follows:]

               Prepared Statement of Hon. Jan Schakowsky

    Good morning, and thank you all for attending today's 
hearing. The internet has improved our lives in many ways and 
enabled Americans to more actively participate in society, 
education, and commerce.
    Section 230 of the Communications Decency Act has been at 
the heart of the United States' internet policy for over 20 
years. Many say that this law allowed free speech to flourish, 
allowing the internet to grow into what it is today. In the 
early days of the internet, it was intended to encourage online 
platforms to moderate user-generated content--to remove 
offensive, dangerous, or illegal content.
    The internet has come a long way since the law was enacted. 
The amount and sophistication of user posts have increased 
exponentially. Unfortunately the number of Americans who report 
experiencing extreme online harassment, which includes sexual 
harassment, stalking, bullying, and threats of violence, has 
gone up over the last two years. Likewise, extremism, hate 
speech, election interference, and other problematic content is 
proliferating.
    The spread of such content is a problem that multibillion-
dollar companies--like Facebook, Google, and Twitter--can't or 
won't fix.
    As if this weren't enough cause for concern, more for-
profit businesses are attempting to use section 230 as a 
liability shield for activities that have nothing to do with 
3rd party content or content moderation policies.
    In a recent Washington Post article, Uber executives seem 
to be opening the door to claiming vast immunity from labor, 
criminal, and local traffic liability based on section 230. 
This would represent a major unraveling of 200 years of social 
contracts, community governance, and Congressional intent.
    Also at issue is the Federal Trade Commission's Section 5 
authority on unfair or deceptive practices. The FTC has pursued 
Section 5 cases on website-generated content, but terms of 
service violations for third-party content may also be 
precluded by the 230 immunity.
    Lastly, this committee must consider the effects of 
including 230 language in trade agreements. Today we are having 
a thoughtful discussion about 230 to ensure we find the right 
balance between protecting free speech, protecting Americans 
from violence and harassment online, and ensuring that 
multibillion-dollar companies can be held accountable to 
consumers. It strikes me as premature to export our own 
political debate on 230 to our trading partners, while at the 
same time limiting Congress' ability to have said debate.
    Each of the issues I mentioned is an indication that there 
may be a larger problem, that 230 may no longer be achieving 
the goal of encouraging platforms to protect their users. 
Today, I hope we can discuss holistic solutions.
    The internet has provided many benefits to our society, and 
I look forward to hearing from our witnesses how it can be made 
even better for consumers.

    Mr. Doyle. The gentlelady yields back.
    The Chair now recognizes the ranking member of the 
committee, Mrs. McMorris Rodgers.

      OPENING STATEMENT OF HON. CATHY McMORRIS RODGERS, A 
    REPRESENTATIVE IN CONGRESS FROM THE STATE OF WASHINGTON

    Mrs. Rodgers. Good morning. Welcome to today's joint 
hearing on online content management.
    As the Republican leader on the Consumer Protection and 
Commerce Subcommittee, it is my priority to protect consumers 
while preserving the ability for small businesses and startups 
to innovate. In that spirit, today we are discussing online 
platforms and Section 230 of the Communications Decency Act.
    In the early days of the internet, two companies were sued 
for content posted on their website by users. One company 
sought to moderate content on their platform; the other did 
not. In deciding these cases, the Court found the company that 
did not make any content decisions was immune from liability, 
but the company that moderated content was not.
    It was after these decisions that Congress created Section 
230. Section 230 is intended to protect, quote, ``interactive 
computer services'' from being sued over what users post while 
also allowing them to moderate content that may be harmful, 
illicit, or illegal.
    This liability protection has played a critical and 
important role in the way we regulate the internet. It has 
allowed small businesses and innovators to thrive online 
without the fear of frivolous lawsuits from bad actors looking 
to make a quick buck.
    Section 230 is also largely misunderstood. Congress never 
intended to provide immunity only to websites who are, quote, 
``neutral.'' Congress never wanted platforms to simply be 
neutral conduits but, in fact, wanted platforms to moderate 
content. The liability protection also extended to allow 
platforms to make good-faith efforts to moderate material that 
is obscene, lewd, excessively violent, or harassing.
    There is supposed to be a balance to the use of Section 
230. Small internet companies enjoy a safe harbor to innovate 
and flourish online while also incentivizing companies to keep 
the internet clear of offensive and violent content by 
empowering these platforms to act and to clean up their own 
site.
    The internet also revolutionized the freedom of speech by 
providing a platform for every American to have their voice 
heard and to access an almost infinite amount of information at 
their fingertips. Medium and other online blogs have provided a 
platform for anyone to write an op-ed. Wikipedia provides free, 
in-depth information on almost any topic you can imagine 
through mostly user-generated and moderated content. Companies 
that started in dorm rooms and garages are now global 
powerhouses.
    We take great pride in being the global leader in tech and 
innovation. But while some of our biggest companies certainly 
have grown, have they matured? Today it is often difficult to 
go online without seeing harmful, disgusting, or sometimes 
illegal content.
    To be clear, I fully support free speech and believe 
society strongly benefits from open dialogue and free 
expression online. I know that there have been some calls for 
Big Government to mandate or dictate free speech or ensure 
fairness online, and it is coming from both sides of the aisle.
    Though I share similar concerns that others have expressed 
that are driving some of these policy proposals, I do not 
believe these proposals are consistent with the First 
Amendment. Republicans successfully fought to repeal the FCC's 
Fairness Doctrine for broadcast regulation during the 1980s, 
and I strongly caution against advocating for a similar 
doctrine online.
    It should not be the FCC, FTC, or any other Government 
agency's job to moderate free speech online. Instead, we should 
continue to provide oversight of big tech and their use of 
Section 230 and encourage constructive discussions on the 
responsible use of content moderation.
    This is a very important question that we are going to 
explore today with everyone on the panel. How do we ensure that 
companies with enough resources are responsibly earning their 
liability protection? We want companies to benefit not only 
from the shield but also use the sword Congress afforded them 
to rid their sites of harmful content.
    I understand it is a delicate issue and certainly very 
nuanced. I want to be very clear: I am not for gutting Section 
230. It is essential for consumers and entities in the internet 
ecosystem. Misguided and hasty attempts to amend or even repeal 
Section 230 for bias or other reasons could have unintended 
consequences for free speech and the ability for small 
businesses to provide new and innovative services.
    But at the same time, it is clear we have reached a point 
where it is incumbent upon us as policymakers to have a serious 
and thoughtful discussion about achieving the balance on 
Section 230.
    I thank you for the time, and I yield back.
    [The prepared statement of Mrs. Rodgers follows:]

           Prepared Statement of Hon. Cathy McMorris Rodgers

    Good morning and welcome to today's joint hearing on online 
content moderation.
    As the Republican Leader on the Consumer Protection and 
Commerce Subcommittee, it's my priority to protect consumers 
while preserving the ability for small business and startups to 
innovate.
    In that spirit, today we are discussing online platforms 
and Section 230 of the Communications Decency Act.
    In the early days of the internet, two companies were sued 
for content posted on their website by users.
    One company sought to moderate content on their platform; 
the other did not.
    In deciding these cases, the Court found the company that 
did not make any content decisions was immune from liability, 
but the company that moderated content was not.
    It was after these decisions that Congress enacted Section 
230.
    Section 230 is intended to protect ``interactive computer 
services'' from being sued over what users post, while allowing 
them to moderate content that may be harmful, illicit, or 
illegal.
    This liability protection has played a critically important 
role in the way we regulate the internet.
    It's allowed small businesses and innovators to thrive 
online without fear of frivolous lawsuits from bad actors 
looking to make a quick buck.
    Section 230 is also largely misunderstood. Congress never 
intended to provide immunity only to websites who are 
``neutral.''
    Congress never wanted platforms to simply be neutral 
conduits but--in fact--wanted platforms to moderate content.
    The liability protection also extended to allow platforms 
to make good faith efforts to moderate material that is 
obscene, lewd, excessively violent, or harassing.
    There is supposed to be a balance to the use of Section 
230. Small internet companies enjoy a safe harbor to innovate 
and flourish online while also incentivizing companies to keep 
the internet clear of offensive and violent content by 
empowering these platforms to act and clean up their own site.
    The internet has revolutionized the freedom of speech by 
providing a platform for every American to have their voice 
heard and to access an almost infinite amount of information at 
their fingertips.
    Medium and other online blogs have provided a platform for 
anyone to write an op-ed.
    Wikipedia provides free, in-depth information on almost any 
topic you can imagine, through mostly user-generated and 
moderated content.
    Companies that started in dorm rooms and garages are now 
global powerhouses.
    We take great pride in being the global leader in tech and 
innovation, but while some of our biggest companies certainly 
have grown, have they matured?
    Today, it's often difficult to go online without seeing 
harmful, disgusting, and sometimes illegal content.
    To be clear, I fully support free speech and believe 
society strongly benefits from open dialogue and free 
expression online.
    I know there have been some calls for a Big Government 
mandate to dictate free speech or ensure fairness online--even 
coming from some of my colleagues on my side of the aisle.
    Though I share similar concerns that others have expressed 
that are driving some of these policy proposals, I do not 
believe these proposals are consistent with the First 
Amendment.
    Republicans successfully fought to repeal the FCC's 
Fairness Doctrine for broadcast regulation during the 1980s and 
I strongly caution against advocating for a similar doctrine 
online.
    It should not be the FCC, FTC, or any Government agency's 
job to moderate free speech online.
    Instead, we should continue to provide oversight of Big 
Tech and their use of Section 230 and encourage constructive 
discussions on the responsible use of content moderation.
    This is an important question that we'll explore with our 
expert panel today: How do we ensure the companies with enough 
resources are responsibly earning their liability protection?
    We want companies to benefit not only from the ``shield'' 
to liability, but also to use the ``sword'' Congress afforded 
them to rid their sites of harmful content.
    I understand this is a delicate issue and certainly very 
nuanced.
    I want to be very clear, I am not for gutting Section 230. 
It is essential for consumers and entities in the internet 
ecosystem.
    Misguided and hasty attempts to amend or even repeal 
Section 230 for bias or other reasons could have disastrous, 
unintended consequences for free speech and the ability for 
small companies to provide new and innovative services.
    At the same time, it is clear we have reached a point where 
it is incumbent upon policymakers to have a serious and 
thoughtful discussion about achieving the balance Section 230 
is focused on:
    Ensuring small businesses can innovate and grow, while also 
incentivizing companies to take more responsibility over their 
platforms.
    Thank you. I yield back.

    Mr. Doyle. The gentlelady yields back.
    The Chair now recognizes Mr. Pallone, chairman of the full 
committee, for 5 minutes for his opening statement.

OPENING STATEMENT OF HON. FRANK PALLONE, Jr., A REPRESENTATIVE 
            IN CONGRESS FROM THE STATE OF NEW JERSEY

    Mr. Pallone. Thank you, Chairman Doyle.
    The internet is one of the single greatest human 
innovations. It promotes free expression, connections, and 
community. It also fosters economic opportunity, with trillions 
of dollars exchanged online every year.
    One of the principal laws that paved the way for the 
internet to flourish is Section 230 of the Communications 
Decency Act, which, of course, passed as part of the 
Telecommunications Act of 1996. And we enacted this section to 
give platforms the ability to moderate their sites to protect 
consumers without excessive risk of litigation, and to be 
clear, Section 230 has been an incredible success.
    But in the 20 years since Section 230 became law, the 
internet has become more complex and sophisticated. In 1996, 
the global internet reached only 36 million users, or less than 
1 percent of the world's population. Only one in four Americans 
reported going online every day.
    Compare that to now when nearly all of us are online almost 
every hour that we are not sleeping. Earlier this year, the 
internet passed 4.39 billion users worldwide. And here in the 
U.S., there are about 230 million smartphones that provide 
Americans instant access to online platforms. The internet has 
become a central part of our social, political, and economic 
fabric in a way that we couldn't have dreamed of when we passed 
the Telecommunications Act.
    And with that complexity and growth, we also have seen the 
darker side of the internet grow. Online radicalization has 
spread, leading to mass shootings in our schools, churches, and 
movie theaters. International terrorists are using the internet 
to groom recruits. Platforms have been used for the illegal 
sale of drugs, including those that sparked the opioid 
epidemic. Foreign governments and fraudsters have pursued 
political disinformation campaigns using new technology like 
deepfakes designed to sow civil unrest and disrupt democratic 
elections. And there are constant attacks against women, people 
of color, and other minority groups.
    And perhaps most despicable of all is the growth in the 
horrendous sexual exploitation of children online. In 1998, 
there were 3,000 reports of material depicting the abuse of 
children online. Last year, 45 million photo and video reports 
were made. And while platforms are now better at detecting and 
removing this material, recent reporting shows that law 
enforcement officers are overwhelmed by the crisis.
    And these are all issues that we can't ignore, and tech 
companies need to step up with new tools to help address these 
serious problems. Each of these issues demonstrates how online 
content moderation has not stayed true to the values underlying 
Section 230 and has not kept pace with the increasing 
importance of the global internet.
    And there is no easy solution to keep this content off the 
internet. As policymakers, I am sure we all have our ideas 
about how we might tackle the symptoms of poor content 
moderation online while also protecting free speech, but we 
must seek to fully understand the breadth and depth of the 
internet today, how it has changed, and how it can be made 
better. We have to be thoughtful, careful, and bipartisan in 
our approach.
    So it is with that in mind that I was disappointed that 
Ambassador Lighthizer, the U.S. Trade Representative, refused 
to testify today. The U.S. has included language similar to 
Section 230 in the United States-Mexico-Canada Agreement and 
the U.S.-Japan Trade Agreement.
    Ranking Member Walden and I wrote to the Ambassador in 
August raising concerns about why the USTR has included this 
language in trade deals as we debate them across the Nation, 
and I was hoping to hear his perspective on why he believes 
that that was appropriate, because including provisions in 
trade agreements that are controversial to both Democrats and 
Republicans is not the way to get support from Congress, 
obviously. So hopefully the Ambassador will be more responsive 
to bipartisan requests in the future.
    And with that, Mr. Chairman, I will yield back.
    [The prepared statement of Mr. Pallone follows:]

             Prepared Statement of Hon. Frank Pallone, Jr.

    The internet is one of the single greatest human 
innovations. It promotes free expression, connections, and 
community. It also fosters economic opportunity with trillions 
of dollars exchanged online every year.
    One of the principal laws that paved the way for the 
internet to flourish is Section 230 of the Communications 
Decency Act, which passed as part of the Telecommunications Act 
of 1996. We enacted this section to give platforms the ability 
to moderate their sites to protect consumers, without excessive 
risk of litigation. And to be clear, Section 230 has been an 
incredible success.
    But, in the 20 years since Section 230 became law, the 
internet has become more complex and sophisticated. In 1996, 
the global internet reached only 36 million users, or less than 
1 percent of the world's population. Only one in four Americans 
reported going online every day. Compare that to now, when 
nearly all of us are online almost every hour we are not 
sleeping. Earlier this year, the internet passed 4.39 billion 
users worldwide, and here in the U.S. there are about 230 
million smartphones that provide Americans instant access to 
online platforms. The internet has become a central part of our 
social, political, and economic fabric in a way that we 
couldn't have dreamed of when we passed the Telecommunications 
Act.
    And with that complexity and growth, we have also seen the 
darker side of the internet grow.
    Online radicalization has spread, leading to mass shootings 
in our schools, churches, and movie theaters.
    International terrorists are using the internet to groom 
recruits.
    Platforms have been used for the illegal sale of drugs, 
including those that sparked the opioid epidemic.
    Foreign governments and fraudsters have pursued political 
disinformation campaigns--using new technology like deepfakes--
designed to sow civil unrest and disrupt democratic elections.
    There are the constant attacks against women, people of 
color, and other minority groups.
    And perhaps most despicable of all is the growth in the 
horrendous sexual exploitation of children online. In 1998, 
there were 3,000 reports of material depicting the abuse of 
children online. Last year, 45 million photo and video reports 
were made. While platforms are now better at detecting and 
removing this material, recent reporting shows that law 
enforcement officers are overwhelmed by this crisis.
    These are all issues that cannot be ignored, and tech 
companies need to step up with new tools to help address these 
serious problems. Each of these issues demonstrates how online 
content moderation has not stayed true to the values underlying 
Section 230 and has not kept pace with the increasing 
importance of the global internet.
    There is no easy solution to keep this content off the 
internet. As policymakers, I'm sure we all have our ideas about 
how we might tackle the symptoms of poor content moderation 
online while also protecting free speech.
    We must seek to fully understand the breadth and depth of 
the internet today, how it has changed and how it can be made 
better. We must be thoughtful, careful, and bipartisan in our 
approach.
    It is with that in mind that I am disappointed Ambassador 
Lighthizer, the United States Trade Representative (USTR), 
refused to testify today. The United States has included 
language similar to Section 230 in the United States-Mexico-
Canada Agreement and the U.S.-Japan Trade Agreement. Ranking 
Member Walden and I wrote to the Ambassador in August raising 
concerns about why the USTR has included this language in trade 
deals as we debate them across the Nation, and I was hoping to 
hear his perspective on why he believes that is appropriate. 
Including provisions in trade agreements that are controversial 
to both Republicans and Democrats is not the way to get support 
from Congress. Hopefully, Ambassador Lighthizer will be more 
responsive to bipartisan requests in the future.

    Mr. Doyle. The gentleman yields back.
    The Chair would like to remind Members that, pursuant to 
committee rules, all Members' written opening statements shall 
be made part of the record.
    Oh.
    Mr. Walden. Could mine be made part of it?
    Mr. Doyle. I apologize. The Chair now yields to my good 
friend, the ranking member, for 5 minutes.

  OPENING STATEMENT OF HON. GREG WALDEN, A REPRESENTATIVE IN 
               CONGRESS FROM THE STATE OF OREGON

    Mr. Walden. How times have changed.
    Thank you, Mr. Chairman.
    And I want to welcome our witnesses today. Thank you for 
being here. It is really important work.
    And I will tell you at the outset, we have got another 
subcommittee meeting upstairs, so I will be bouncing in 
between. But I have all your testimony and really look forward 
to your comments. It is, without question, a balanced roster of 
experts in this field, so we are really blessed to have you 
here.
    Last Congress, we held significant hearings that jump-
started the discussion on the state of online protection as 
well as the legal basis underpinning the modern internet 
ecosystem, as you have heard today, and of course the future of 
content moderation as algorithms now determine much of what we 
see online. That is an issue our constituents want to know more 
about.
    Today we will undertake a deeper review of Section 230 of 
the Communications Decency Act portion of the 1996 
Telecommunications Act.
    In August of this year, as you just heard, Chairman Pallone 
and I raised the issue of the appearance of export language 
mirroring Section 230 in trade agreements. We did that in a 
letter to the U.S. Trade Representative, Robert Lighthizer. We 
expressed concerns of this internet policy being taken out of 
the context of its intent and that in the future, the Office of 
the United States Trade Representative should consult our 
committee in advance of negotiating on these very issues.
    Unfortunately, we have learned that derivative language of 
Section 230 appeared in an agreement with Japan and continues 
to be advanced in other discussions. We are very frustrated 
about that, and I hope the administration is paying attention 
and listening because they haven't up to this point on this 
matter.
    The USTR does not appear to be reflecting the scrutiny the 
administration itself says they are applying to how CDA 230 is 
being utilized in American society. That makes it even more 
alarming for the USTR to be exporting such policies without the 
involvement of this committee.
    To be clear, this section of the 1996 Telecom Act served as 
the foundation for the Information Age. So we are here by no 
means to condemn but rather to understand what it truly is and 
see that the entirety of this section is faithfully followed 
rather than cherry-picking just a portion.
    I want to go back to the trade piece. You know, I thought 
the letter to the Ambassador was going to send the right 
message. We are not trying to blow up USTR or USMCA. I voted 
for every trade agreement going forward. I am a big free 
trader. But we are getting blown off on this, and I am tired of 
it. So let it be clear.
    Then we found out it is in the Japan agreement. So, you 
know, clearly they are not listening to our committee or us. So 
we are serious about this matter. We have not heard from USTR, 
and this is a real problem. So take note.
    If we only refer to Section 230 as ``the 26 words that 
created the internet,'' as has been popularized by some, we are 
already missing the mark since, by our word count--which you 
can use software to figure out--that phrase excludes the Good 
Samaritan obligations in Section (c)(2). So we should start 
talking more about that section as the 83 words that can 
preserve the internet.
    All the sections and provisions of CDA 230 should be 
clearly taken together and not apart. Many of our concerns can 
be readily addressed if companies just enforce their terms of 
service.
    To put that in better context, I believe a quick history 
lesson is in order. Today's internet looks a lot different than 
the days that CompuServe and Prodigy and the message boards 
dominated the internet in the 1990s. While the internet is more 
dynamic and content rich than ever before, there were problems 
in its infancy managing the vast amounts of speech occurring 
online.
    As our friend Chris Cox, former Member, the author of the 
legislation, alum of this committee, pointed out on the House 
floor during debate over his amendment, ``No matter how big the 
army of bureaucrats, it is not going to protect my kids because 
I do not think the Federal Government will get there in time.'' 
That is his quote.
    So Congress recognized then, as we should now, that we need 
companies to step up to the plate and curb harmful and illegal 
content from their platforms. The internet is not something to 
be regulated and managed by government.
    Upon enactment, CDA 230 clearly bestowed on providers and 
users the ability to go after the illegal and harmful content 
without fear of being held liable in court.
    Now, while the law was intended to empower, we have seen 
social media platforms slow to clean up sites while being quick 
to use immunity from legal responsibility for such content. In 
some cases, internet platforms have clearly shirked the 
responsibility for the content on their platform.
    The broad liability shield now in place through common law 
has obscured the central bargain that was struck, and that is 
the internet platforms with user-generated content are 
protected from liability in exchange for the ability to make 
good faith efforts to moderate harmful and illegal content.
    So let me repeat for those that want to be included in the 
``interactive computer services'' definition: Enforce your own 
terms of service.
    I look forward to an informative discussion today on 
differentiating constitutionally protected speech from illegal 
content, how we should think of CDA 230 protections for small 
entities versus large ones, and how various elements of the 
internet ecosystem shape what consumers see or don't see.
    With that, Mr. Chairman, thank you for having this hearing, 
and I look forward to getting all the feedback from the 
witnesses, but, indeed, I have to go up to the other hearing. 
So thank you very much.
    [The prepared statement of Mr. Walden follows:]

                 Prepared Statement of Hon. Greg Walden

    Thank you, Mr. Chairman. I want to welcome our witnesses to 
this hearing--it is without question a balanced roster of 
experts in the field. Last Congress, we held significant 
hearings that jump-started the discussion on the state of 
online protections, as well as the legal basis underpinning the 
modern internet ecosystem, and of course the future of content 
moderation as algorithms now determine much of what we see online. 
Today, we will undertake a deeper review of Section 230 of the 
Communications Decency Act portion of the 1996 
Telecommunications Act.
    In August of this year, Chairman Pallone and I raised the 
issue of the appearance of export of language mirroring Section 
230 in trade agreements in a letter to United States Trade 
Representative Robert Lighthizer. We expressed concerns of this 
internet policy being taken out of the context of its intent, 
and that in the future the Office of the United States Trade 
Representative should consult our committee in advance of 
negotiating on these issues. Unfortunately, we have learned 
that derivative language of Section 230 appeared in an 
agreement with Japan and continues to be advanced in other 
discussions. The USTR does not appear to be reflecting the 
scrutiny the administration itself is applying to how CDA 230 
is being utilized in American society, making it even more 
alarming for the USTR to be exporting such policies without the 
involvement of this committee.
    To be clear, this section of the '96 Telecom Act served as 
a foundation for the Information Age, so we are here by no 
means to condemn, but rather to understand what it truly is, 
and see that the entirety of the section is faithfully followed 
rather than cherry-picking just a portion. If we only refer to 
Section 230 as ``the 26 words that created the internet,'' as 
has been popularized by some, we are already missing the mark 
since, by my word count, that excludes the Good Samaritan 
obligations in section (c)(2). We should start talking more 
about that section as the 83 words that can preserve the 
internet. All of the provisions of CDA 230 should be clearly 
taken together and not apart, and many of our concerns can be 
readily addressed if companies just enforce their terms of 
service. To put that in better context, I believe a quick 
history lesson is in order.
    Today's internet looks a lot different than when 
CompuServe, Prodigy, and the message boards dominated the 
internet in the '90s. While the internet is more dynamic and 
content-rich today than ever before, there were problems in its 
infancy managing the vast amount of speech occurring online. As 
our friend Chris Cox, the author of the legislation and an alum 
of this committee, pointed out on the House floor during debate 
over his amendment, ``No matter how big the army of 
bureaucrats, it is not going to protect my kids because I do 
not think the Federal Government will get there in time.'' So, 
Congress recognized then, as we should now, that we need 
companies to step up to the plate and curb harmful and illegal 
content from their platforms--the internet is not something to 
be regulated and managed by a government.
    Upon enactment, CDA 230 clearly bestowed on providers and 
users the ability to go after the illegal and harmful content 
without fear of being held liable in court. While the law was 
intended to empower, we have seen social media platforms slow 
to clean up sites while being quick to use immunity from legal 
responsibility for such content. In some cases, internet 
platforms have clearly shirked responsibility for the content 
on their platform.
    The broad liability shield now in place through common law 
has obscured the central bargain that was struck: internet 
platforms with user-generated content are protected from 
liability in exchange for the ability to make good faith 
efforts to moderate harmful and illegal content.
    So, let me repeat for those that want to be included in the 
``interactive computer services'' definition: enforce your own 
terms of service.
    I look forward to an informative discussion today on 
differentiating constitutionally protected speech from illegal 
content; how we should think of CDA 230 protections for small 
entities versus large ones; and how various elements of the 
internet ecosystem shape what consumers see or don't see.
    Again, I hope today's discussion will help us back on the 
road to a balance for the betterment of our society. Thank you 
again to our witnesses for sharing their time and expertise.

    Mr. Doyle. So the administration doesn't listen to you guys 
either, huh?
    Mr. Walden. My statement spoke for itself pretty clearly, I 
think. We will find out if they are listening or not.
    Mr. Doyle. The gentleman yields back.
    I will reiterate that, pursuant to the committee rules, all 
Members' written opening statements will be made part of the 
record.
    We now want to introduce our witnesses for today's hearing.
    Mr. Steve Huffman, cofounder and CEO of Reddit.
    Welcome.
    Ms. Danielle Keats Citron, professor of law at Boston 
University School of Law.
    Welcome.
    Dr. Corynne McSherry, legal director of the Electronic 
Frontier Foundation.
    Welcome.
    Ms. Gretchen Peters, executive director of the Alliance to 
Counter Crime Online.
    Welcome.
    Ms. Katherine Oyama, global head of intellectual property 
policy for Google.
    Welcome.
    And Dr. Hany Farid, professor at the University of 
California, Berkeley.
    Welcome to all of you. We want to thank you for joining us 
today. We look forward to your testimony.
    At this time, the Chair will recognize each witness for 5 
minutes to provide their opening statement.
    Before we begin, I would like to explain our lighting 
system. In front of you is a series of lights. The light will 
initially be green at the start of your opening statement. The 
light will turn yellow when you have 1 minute remaining. Please 
wrap up your testimony at that point. When the light turns red, 
we just cut your microphone off. No, we don't, but try to 
finish before then.
    So, Mr. Huffman, we are going to start with you, and you 
are recognized for 5 minutes.

  STATEMENTS OF STEVE HUFFMAN, COFOUNDER AND CHIEF EXECUTIVE 
OFFICER, REDDIT, INC.; DANIELLE KEATS CITRON, PROFESSOR OF LAW, 
BOSTON UNIVERSITY SCHOOL OF LAW; CORYNNE MCSHERRY, Ph.D., LEGAL 
  DIRECTOR, ELECTRONIC FRONTIER FOUNDATION; GRETCHEN PETERS, 
EXECUTIVE DIRECTOR, ALLIANCE TO COUNTER CRIME ONLINE; KATHERINE 
  OYAMA, GLOBAL HEAD OF INTELLECTUAL PROPERTY POLICY, GOOGLE; AND 
    HANY FARID, Ph.D., PROFESSOR, UNIVERSITY OF CALIFORNIA, 
                            BERKELEY

                   STATEMENT OF STEVE HUFFMAN

    Mr. Huffman. Thank you. Good morning, chairpersons, ranking 
members, members of the committee. Thank you for inviting me. 
My name is Steve Huffman. I am the cofounder and CEO of Reddit, 
and I am grateful for this opportunity to share why 230 is 
critical to our company and the open internet.
    Reddit moderates content in a fundamentally different way 
than other platforms. We empower communities, and this approach 
relies on 230. Changes to 230 pose an existential threat not 
just to us but to thousands of startups across the country, and 
it would destroy what little competition remains in our 
industry.
    My college roommate and I started Reddit in 2005 as a 
simple user-powered forum to find news and interesting content. 
Since then, it has grown into a vast community-driven site 
where millions of people find not just news and a few laughs 
but new perspectives and a real sense of belonging. Reddit is 
communities, communities that are both created and moderated by 
our users.
    Our model has taken years to develop, with many hard 
lessons learned along the way. As some of you know, I left the 
company in 2009, and for a time Reddit lurched from crisis to 
crisis over questions of moderation that we are discussing 
today.
    In 2015, I came back because I realized the vast majority 
of our communities were providing an invaluable experience for 
our users and Reddit needed a better approach to moderation.
    The way Reddit handles content moderation today is unique 
in the industry. We use a governance model akin to our own 
democracy, where everyone follows a set of rules, has the 
ability to vote and self-organize, and ultimately shares some 
responsibility for how the platform works.
    First, we have our content policy, the fundamental rules 
that everyone on Reddit must follow. Think of these as our 
Federal laws. We employ a group, including engineers and data 
scientists, collectively known as the ``Anti-Evil'' Team, to 
enforce these policies.
    Below that, each community creates their own rules, State 
laws, if you will. These rules, written by our volunteer 
moderators themselves, are tailored to the unique needs of 
their communities and tend to be far more specific and complex 
than ours.
    The self-moderation our users do every day is the most 
scalable solution to the challenges of moderating content 
online.
    Individual users play a crucial role as well. They can vote 
up or down on any piece of content, posts or comments, and 
report it to our Anti-Evil Team. Through this system of voting 
and reporting, users can accept or reject any piece of content, 
thus turning every user into a moderator.
    The system isn't perfect. It is possible to find things on 
Reddit that break the rules. But its effectiveness has improved 
with our efforts. Independent academic analysis has shown our 
approach to be largely effective in curbing bad behavior.
    And when we investigated Russian attempts at manipulating 
our platform in 2016, we found that, of all accounts that 
tried, less than 1 percent made it past the routine defenses of 
our team, community moderation, and simple down votes from 
everyday users.
    We also constantly evolve our content policies, and since 
my return we have made a series of updates addressing violent 
content, deepfaked pornography, controlled goods, and 
harassment.
    These are just a few of the ways we have worked to moderate 
in good faith, which brings us to the question of what Reddit 
would look like without 230.
    For starters, we would be forced to defend against anyone 
with enough money to bankroll a lawsuit, no matter how 
frivolous. It is worth noting that the cases most commonly 
dismissed under 230 involve defamation. As an open 
platform where people are allowed to voice critical opinions, 
we would be a prime target for these, effectively enabling 
censorship through litigation.
    Even targeted limits to 230 will create a regulatory burden 
on the entire industry, benefiting the largest companies by 
placing a significant cost on smaller competitors.
    While we have 500 employees and a large user base, normally 
more than enough to be considered a large company, in tech 
today we are an underdog compared to our nearest competitors, 
who are public companies 10 to 100 times our size. Still, we 
recognize that there is truly harmful material on the internet, 
and we are committed to fighting it.
    It is important to understand that rather than helping, 
even narrow changes to 230 can undermine the power of community 
and hurt the vulnerable. Take the opioid epidemic, which has 
been raised in discussions on 230. We have many communities on 
Reddit where users struggling with addiction can find support 
to help them on their way to sobriety.
    Were there a carveout in this area, hosting them may simply 
become too risky, forcing us to close them down. This would be 
a disservice to people who are struggling, yet this is exactly 
the type of decision that restrictions on 230 would force on 
us.
    Section 230 is a uniquely American law with a balanced 
approach that has allowed the internet and platforms like ours 
to flourish while also incentivizing good faith attempts to 
mitigate the unavoidable downsides of free expression. While 
these downsides are serious and demand the attention of both us 
in industry and you in Congress, they do not outweigh the 
overwhelming good that 230 has enabled.
    Thank you. I look forward to your questions.
    [The prepared statement of Mr. Huffman follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. Thank you, Mr. Huffman.
    Ms. Citron, you are recognized for 5 minutes.

               STATEMENT OF DANIELLE KEATS CITRON

    Ms. Citron. Thank you for having me and for having such a 
thoughtful bench with me on the panel.
    When Congress adopted Section 230 twenty years ago, the 
goal was to incentivize tech companies to moderate content. And 
although Congress, of course, wanted the internet--what they 
could imagine of it at that time--to be open and free, they also 
knew that openness would risk offensive material, and I am 
going to use their words. And so what they did was devise an 
incentive, a legal shield for Good Samaritans who are trying to 
clean up the internet, accounting both for the failure to 
remove--underfiltering--and for overfiltering of content.
    Now, the purpose of the statute was fairly clear, but its 
words, and therefore its interpretation, weren't, and so what we have seen is 
courts massively overextending Section 230 to sites that are 
irresponsible in the extreme and that produce extraordinary 
harm. Now, we have seen the liability shield be applied to 
sites whose entire business model is abuse. So revenge porn 
operators and sites that all they do is curate users' deepfake 
sex videos, they get to enjoy immunity, and have, from 
liability.
    And interestingly, not only is it bad Samaritans who have 
enjoyed the legal shield from responsibility, but it is also 
sites that really have nothing to do with speech, that traffic 
in dangerous goods, like Armslist.com. And the costs are 
significant. This overbroad interpretation allows bad Samaritan 
sites--reckless, irresponsible sites--to impose real costs on 
people's lives.
    I am going to take the case of online harassment because I 
have been studying it for the past 10 years. The costs are 
significant, and especially to women and minorities. Online 
harassment that is often hosted on these sites is costly to 
people's central life opportunities.
    So when a Google search of your name contains rape threats, 
your nude photo without your consent, your home address because 
you have been doxxed, and lies and defamation about you, it is 
hard to get a job and it is hard to keep a job. And also for 
victims, they are driven offline in the face of online 
assaults. They are terrorized. They often change their names, 
and they move.
    And so in many respects, the calculus, the free speech 
calculus, it is not necessarily a win for free speech, as we 
are seeing really diverse viewpoints and diverse individuals 
being chased offline.
    So now the market, I think, ultimately is not going to 
solve this problem. So many of these businesses make money off 
of online advertising and salacious, negative, and novel 
content that attracts eyeballs. So I don't think we can rely on 
the market itself to solve this problem.
    So, of course, legal reform. The question is, how should we 
do it?
    I think we have to keep Section 230. It has tremendous 
upsides. But we should return it to its original purpose, which 
was to condition the shield on being a Good Samaritan, on 
engaging in what Ben Wittes and I have called reasonable 
content moderation practices.
    Now, there are other ways to do it. In my testimony, I sort 
of draw up some solutions. But we have got to do something 
because doing nothing has costs. It says to victims of online 
abuse that their speech and their equality are less important 
than the business profits of some of these most harmful 
platforms.
    Thank you.
    [The prepared statement of Ms. Citron follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. Thank you very much.
    The Chair now recognizes Dr. McSherry for 5 minutes.

              STATEMENT OF CORYNNE MCSHERRY, Ph.D.

    Dr. McSherry. Thank you.
    As legal director for the Electronic Frontier Foundation, I 
want to thank the chairs, ranking members, and members of the 
committee for the opportunity to share our thoughts with you 
today on this very, very important topic.
    For nearly 30 years, EFF has represented the interests of 
technology users, both in court cases and in broader policy 
debates, to help ensure that law and technology support our 
civil liberties.
    Like everyone in this room, we are well aware that online 
speech is not always pretty. Sometimes it is extremely ugly and 
it causes serious harm. We all want an internet where we are 
free to meet, create, organize, share, debate, and learn. We 
want to have control over our online experience and to feel 
empowered by the tools we use. We want our elections free from 
manipulation and for women and marginalized communities to be 
able to speak openly about their experiences.
    Chipping away at the legal foundations of the internet in 
order to pressure platforms to better police the internet is 
not the way to accomplish those goals.
    Section 230 made it possible for all kinds of voices to get 
their message out to the whole world without having to acquire 
a broadcast license, own a newspaper, or learn how to code. The 
law has thereby helped remove much of the gatekeeping that once 
stifled social change and perpetuated power imbalances, and 
that is because it doesn't just protect tech giants. It 
protects regular people.
    If you forwarded an email, a news article, a picture, or a 
piece of political criticism, you have done so with the 
protection of Section 230. If you have maintained an online 
forum for a neighborhood group, you have done so with the 
protection of Section 230. If you used Wikipedia to figure out 
where George Washington was born, you benefited from Section 
230. And if you are viewing online videos documenting events 
realtime in northern Syria, you are benefiting from Section 
230.
    Intermediaries, whether social media platforms, news sites, 
or email forwarders, aren't protected by Section 230 just for 
their benefit. They are protected so they can be available to 
all of us.
    There is another very practical reason to resist the 
impulse to amend the law to pressure platforms to more actively 
monitor and moderate user content. Simply put, they are bad at 
it. As EFF and many others have shown, they regularly take down 
all kinds of valuable content, partly because it is often 
difficult to draw clear lines between lawful and unlawful 
speech, particularly at scale, and those mistakes often silence 
the voices of already marginalized people.
    Moreover, increased liability risk will inevitably lead to 
overcensorship. It is a lot easier and cheaper to take 
something down than to pay lawyers to fight over it, 
particularly if you are a smaller business or a nonprofit.
    And automation is not the magical solution. Context matters 
very often when you are talking about speech, and robots are 
pretty bad at nuance.
    For example, in December 2018, blogging platform Tumblr 
announced a new ban on adult content. In an attempt to explain 
the policy, Tumblr identified several types of content that 
would still be acceptable under the new rules. Shortly 
thereafter, Tumblr's own filtering technology flagged those 
same images as unacceptable.
    Here is the last reason: New legal burdens are likely to 
stifle competition. Facebook and Google can afford to throw 
millions at moderation, automation, and litigation. Their 
smaller competitors or would-be competitors don't have that 
kind of budget. So, in essence, we would have opened the door 
to a few companies and then slammed that door shut for everyone 
else.
    The free and open internet has never been fully free or 
open, and the internet can amplify the worst of us as well as 
the best. But at root, the internet still represents and 
embodies an extraordinary idea: that anyone with a computing 
device can connect with the world to tell their story, 
organize, educate, and learn. Section 230 helps make that idea 
a reality, and it is worth protecting.
    Thank you, and I look forward to your questions.
    [The prepared statement of Dr. McSherry follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. Thank you, Dr. McSherry.
    Ms. Peters, you are recognized for 5 minutes.

                  STATEMENT OF GRETCHEN PETERS

    Ms. Peters. Thank you.
    Distinguished members of the subcommittee, it is an honor 
to be here today to discuss one of the premier security threats 
of our time, one that Congress is well positioned to solve.
    I am the executive director of the Alliance to Counter 
Crime Online. Our team is made up of academics, security 
experts, NGOs, and citizen investigators who have come together 
to eradicate serious organized crime and terror activity on the 
internet.
    I want to thank you for your interest in our research and 
for asking me to join the panel of witnesses here to testify. 
Like you, I hoped to hear the testimony of the U.S. Trade 
Representative, because keeping CDA 230 language out of 
America's trade agreements is critical to our national 
security.
    Distinguished committee members, I have a long history of 
tracking organized crime and terrorism. I was a war reporter, 
and I wrote a book about the Taliban and the drug trade. That 
got me recruited by U.S. military leaders to support our 
intelligence community. I mapped transnational crime networks 
and terror networks for Special Operations Command, the DEA, 
and CENTCOM. In 2014, I received State Department funding to 
map wildlife supply chains, and that is when my team discovered 
that the largest retail markets for endangered species are 
actually located on social media platforms like Facebook and 
WeChat.
    Founding the Alliance to Counter Crime Online, which looks 
at crime more broadly than just wildlife, has taught me the 
incredible range and scale of illicit activity happening 
online. It is far worse than I ever imagined. We can and must 
get this under control.
    Under the original intent of CDA 230, there was supposed to 
be a shared responsibility between tech platforms, law 
enforcement, and organizations like ACCO. But tech firms are 
failing to uphold their end of the bargain. Because of broad 
interpretations by the courts, they enjoy undeserved safe 
harbor for hosting illicit activity.
    Distinguished committee members, the tech industry may try 
and convince you today that most illegal activity is confined 
to the dark web, but that is not the case. Surface web 
platforms provide much the same anonymity and payment systems, 
and a much greater reach.
    We are tracking illicit groups ranging from Mexican drug 
cartels to Chinese triads that have weaponized social media 
platforms--I am talking about U.S., publicly listed social 
media platforms--to move a wide range of illegal goods.
    Now we are in the midst of a public health crisis, the 
opioid epidemic, which is claiming the lives of more than 
60,000 Americans a year. But Facebook, the world's largest 
social media company, only began tracking drug activity, drug 
postings on its platform, last year, and within 6 months the 
firm identified 1.5 million posts selling drugs. That is what 
they admitted to removing. To put that in perspective, that is 
100 times more postings than the notorious dark website the 
Silk Road ever carried.
    Study after study by ACCO members and others has shown 
widespread use of Google, Twitter, Facebook, Reddit, YouTube to 
market and sell fentanyl, oxycodone, and other highly 
addictive, often deadly substances to U.S. consumers in direct 
violation of U.S. law, Federal law. Every major internet 
platform has a drug problem. Why? Because there is no law that 
holds tech firms responsible, even when a child dies buying 
drugs on an internet platform.
    Tech firms play an active role in facilitating and 
spreading harm. Their algorithms, originally well-intentioned 
and designed to connect friends, also help criminals and terror 
groups connect to a global audience. ISIS and other terror 
groups use social media, especially Twitter, to recruit, 
fundraise, and spread their propaganda.
    The ACCO alliance, among others, includes an incredible 
team of Syrian archaeologists recording the online trafficking 
of thousands of artifacts plundered from ancient sites and sold 
in many cases by ISIS supporters. This is a war crime.
    We are also tracking groups on Instagram, Google, and 
Facebook where endangered species are sold, items ranging from 
rhino horn and elephant ivory to live chimpanzees and cheetahs. 
In some cases, the size of these online markets is literally 
threatening species with extinction.
    I could continue to sit here and horrify you all morning. 
Illegal dog fighting, live videos of children being sexually 
abused, weapons, explosives, human remains, counterfeit goods--
it is all just a few clicks away.
    Distinguished committee members, the tech industry 
routinely claims that modifying CDA 230 is a threat to freedom 
of speech. But CDA 230 is a law about liability, not freedom of 
speech. Please try and imagine another industry in this country 
that has ever enjoyed such an incredible subsidy from Congress, 
total immunity, no matter what harm their product brings to 
consumers.
    Tech firms could have implemented internal controls to 
prevent illicit activity from occurring, but it was cheaper and 
easier to scale while looking the other way. They were given 
this incredible freedom, and they have no one to blame but 
themselves for squandering it.
    We want to see reforms to the law to strip immunities for 
hosting terror and serious crime content, to require that 
firms report crime and terror activity to law enforcement, 
and to provide appropriations so that law enforcement can 
contend with this data.
    Distinguished committee members, if it is illegal in real 
life, it ought to be illegal to host it online. It is 
imperative we reform CDA 230 to make the internet a safer place 
for all.
    Thank you very much.
    [The prepared statement of Ms. Peters follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. The gentlelady yields back.
    Ms. Oyama, you are recognized for 5 minutes.

                  STATEMENT OF KATHERINE OYAMA

    Ms. Oyama. Chairman Doyle, Chairwoman Schakowsky, Ranking 
Members Latta and McMorris Rodgers, distinguished members of 
the committee, thank you for the opportunity to appear before 
you today. I appreciate your leadership on these important 
issues and welcome the opportunity to discuss Google's work in 
these areas.
    My name is Katie Oyama, and I am the global head of IP 
policy at Google. In that capacity, I also advise the company 
on public policy frameworks for the management and moderation 
of online content of all kinds.
    At Google, our mission is to organize the world's 
information and make it universally accessible and useful. Our services and 
many others are positive forces for creativity, learning, and 
access to information.
    This creativity and innovation continues to yield enormous 
economic benefits for the United States. However, like all 
means of communications that came before it, the internet has 
been used for both the best and worst of purposes. And this is 
why, in addition to respecting local law, we have robust 
policies, procedures, and community guidelines that govern what 
activity is permissible on our platforms, and we update them 
regularly to meet the changing needs of both our users and 
society.
    In my testimony today, I will focus on three areas: the 
history of 230 and how it has helped the internet grow; how 230 
contributes to our efforts to take down harmful content; and 
Google's policies across our products.
    Section 230 of the Communications Decency Act has created a 
robust internet ecosystem where commerce, innovation, and free 
expression thrive, while also enabling providers to take 
aggressive steps to fight online abuse. Digital platforms help 
millions of consumers find legitimate content across the 
internet, facilitating almost $29 trillion in online commerce 
each year.
    Addressing illegal content is a shared responsibility, and 
our ability to take action on problematic content is 
underpinned by 230. The law not only clarifies when services 
can be held liable for third-party content, but also creates 
the legal certainty necessary for services to take swift action 
against harmful content of all types.
    Section 230's Good Samaritan provision was specifically 
introduced to incentivize self-monitoring and to facilitate 
content moderation. It also does nothing to alter platform 
liability for violations of Federal criminal laws, which are 
expressly exempted from the scope of the CDA.
    Over the years, the importance of Section 230 has only 
grown and is critical in ensuring continued economic growth. A 
recent study found that over the next decade, 230 will 
contribute an additional 4.25 million jobs and $440 billion in 
growth to the economy.
    Furthermore, investors in the startup ecosystem have said 
that weakening online safe harbors would have a recessionlike 
impact on investment. And internationally, 230 is a 
differentiator for the United States. China, Russia, and others take a 
very different approach to innovation and to censoring speech 
online, sometimes including speech that is critical of 
political leaders.
    Perhaps the best way to understand the importance of 230 is 
to imagine what might happen if it weren't in place. Without 
230, search engines, video sharing platforms, political blogs, 
startups, review sites of all kinds would either not be able to 
moderate content at all, or they would overblock, either way 
harming consumers and businesses that rely on their services 
every day.
    Without 230, platforms could be sued for decisions around 
removal of content from their platforms, such as the removal of 
hate speech, mature content, or videos relating to pyramid 
schemes.
    And because of 230, we can and do enforce rigorous policies 
that ensure that our platforms are safe, useful, and vibrant 
for our users. For each product, we have a specific set of 
rules and guidelines that are suitable for the type of 
platform, how it is used, and the risk of harm associated with 
it. These approaches range from clear content policies and 
community guidelines with flagging mechanisms to report content 
that violates them to increasingly effective machine learning 
that can facilitate removal of harmful content at scale before 
a single human user has ever been able to access it.
    For example, in the 3-month period from April to June 2019, 
YouTube removed over 9 million videos from our platform for 
violating our community guidelines, and 87 percent of this 
content was flagged by machines first rather than by humans. 
And of those detected by machines, 81 percent of that content 
was never viewed by a single user.
    We now have over 10,000 people across Google working on 
content moderation. We have invested hundreds of millions of 
dollars for these efforts.
    In my written testimony, I go into further detail about our 
policies and procedures for tackling harmful content on Search, 
Google Ads, and YouTube.
    We are committed to being responsible actors who are part 
of the solution. Google will continue to invest in the people 
and the technology to meet this challenge. We look forward to 
continued collaboration with the committee as it examines these 
issues.
    Thank you for your time, and I look forward to taking your 
questions.
    [The prepared statement of Ms. Oyama follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. Thank you.
    Dr. Farid, you have 5 minutes.

                 STATEMENT OF HANY FARID, Ph.D.

    Dr. Farid. Chairman, Chairwoman, ranking members, members 
of both subcommittees, thank you for the opportunity to speak 
with you today.
    Technology, as you have already heard, and the internet 
have had a remarkable impact on our lives and society. Many 
educational, entertaining, and inspiring things have emerged 
from the past two decades of innovation.
    But at the same time, many horrific things have emerged: a 
massive proliferation of child sexual abuse material; the 
recruitment and radicalization of domestic and international 
terrorists; the distribution of illegal and deadly drugs; the 
proliferation of mis- and disinformation campaigns designed to 
sow civil unrest, incite violence, and disrupt democratic 
elections; the proliferation of dangerous, hateful, and deadly 
conspiracy theories; the routine and daily harassment of women 
and underrepresented groups in the forms of threats of sexual 
violence and revenge and nonconsensual pornography; small and 
large-scale fraud; and spectacular failures to protect our 
personal and sensitive data.
    How in 20 short years did we go from the promise of the 
internet to democratize access to knowledge and make the world 
more understanding and enlightened to this litany of daily 
horrors? A combination of naivete, ideology, willful ignorance, 
and a mentality of growth at all costs have led the titans of 
tech to fail to install proper safeguards on their services.
    The problem that we face today, however, is not new. As 
early as 2003, it was well known that the internet was a boon 
for child predators. Despite early warnings, the technology 
sector dragged their feet through the early and mid-2000s and 
did not respond to the known problems at the time, nor did they 
put in place the proper safeguards to contend with what should 
have been the anticipated problems that we face today.
    In defense of the technology sector, they are contending 
with an unprecedented amount of data. Some 500 hours of video 
are uploaded to YouTube every minute, some 1 billion daily 
uploads to Facebook, and some 500 million tweets per day.
    On the other hand, these same companies have had over a 
decade to get their houses in order and have simply failed to 
do so. And at the same time, they have managed to profit 
handsomely by harnessing the scale and volume of the data that 
is uploaded to their services every day.
    And these services don't seem to have trouble dealing with 
unwanted material when it serves their interests. They 
routinely and quite effectively remove copyright infringement, 
and they effectively remove legal adult pornography because 
otherwise, their services would be littered with pornography, 
scaring away advertisers.
    During his 2018 congressional testimony, Mr. Zuckerberg 
repeatedly invoked artificial intelligence, AI, as the savior 
for content moderation in, we are told, 5 to 10 years. Putting 
aside that it is not clear what we should do in the intervening 
decade or so, this claim is almost certainly overly optimistic.
    So, for example, earlier this year, Facebook's chief 
technology officer showcased Facebook's latest AI technology 
for discriminating images of broccoli from images of marijuana. 
Despite all of the latest advances in AI and pattern 
recognition, this system is only able to perform the task with 
an average accuracy of 91 percent. This means that 
approximately 1 in 10 times, the system is simply wrong.
    At a scale of a billion uploads a day, this technology 
cannot possibly automatically moderate content. And this 
discrimination task is surely much easier than the task of 
identifying a broad class of child exploitation, extremism, and 
disinformation material.
    The promise of AI is just that, a promise, and we cannot 
wait a decade or more with the hope that AI will improve by 
some nine orders of magnitude to the point where it might be 
able to contend with automatic online content moderation.
    To complicate things even more, earlier this year Mr. 
Zuckerberg announced that Facebook is implementing end-to-end 
encryption on its services, preventing anyone--the government, 
Facebook--from seeing the contents of any communications. 
Blindly implementing end-to-end encryption will make it even 
more difficult to contend with the litany of abuses that I 
enumerated at the opening of my remarks.
    We can and we must do better when it comes to contending 
with some of the most violent, harmful, dangerous, and hateful 
content online. I simply reject the naysayers that argue that 
it is too difficult from a policy or technological perspective 
or those that say that reasonable and responsible content 
moderation will lead to the stifling of an open exchange of 
ideas.
    Thank you, and I look forward to taking your questions.
    [The prepared statement of Dr. Farid follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. Thank you, Dr. Farid.
    Well, we have concluded our openings. We are going to move 
to Member questions. Each Member will have 5 minutes to ask 
questions of our witnesses, and I will start by recognizing 
myself for 5 minutes.
    Well, I have to say, when I said at the beginning of my 
remarks this is a complex issue, it is a very complex issue, 
and I think we have all heard the problems. What we need to 
hear is solutions.
    Let me just start by asking all of you, just by a show of 
hands: Who thinks that online platforms could do a better job 
of moderating their content on their websites?
    So that is unanimous, and I agree. And I think it is 
important to note that we all recognize that content moderation 
online is lacking in a number of ways and that we all need to 
address this issue better. And if you, the platforms and the 
experts in this technology, don't do it and put that on our 
shoulders, you may see a law that you don't like very 
much and that has a lot of unintended consequences for the 
internet.
    So I would say to all of you, you need to do a better job. 
You need to have an industry getting together and discussing 
better ways to do this. The idea that you can buy drugs online 
and we can't stop that, to most Americans hearing that, they 
don't understand why that is possible, why it wouldn't be easy 
to identify people that are trying to sell illegal things 
online and take those sites down. Child abuse. It is very 
troubling.
    On the other hand, I don't think anybody on this panel is 
talking about eliminating Section 230. So the question is, what 
is the solution between not eliminating 230, because of the 
effects that would have just on the whole internet, and making 
sure that we do a better job of policing this?
    Mr. Huffman, Reddit, a lot of people know of Reddit, but it 
is really a relatively small company when you place it against 
some of the giants. And you host many communities, and you rely 
on your volunteers to moderate discussions. I know that you 
have shut down a number of controversial sub-Reddits that have 
spread deepfakes, violent and disturbing content, 
misinformation, and dangerous conspiracy theories. But what 
would Reddit look like if you were legally liable for the 
content your users posted or for your company's decision to 
moderate user content and communities?
    Mr. Huffman. Sure. Thank you for the question.
    What Reddit would look like would be--we would be forced to 
go to one of two extremes. In one version, we would stop 
looking. We would go back to the pre-230 era, which means if we 
don't know, we are not liable. And that, I am sure, is not what 
you intend, and it is certainly not what we want. It would be 
not aligned with our mission of bringing community and 
belonging to everybody in the world.
    The other extreme would be to remove any content or 
prohibit any content that could be remotely problematic. And 
since Reddit is a platform where 100 percent of our content is 
created by our users, it fundamentally undermines the way 
Reddit works. It is hard for me to give you an honest answer of 
what Reddit would look like, because I am not sure Reddit, as 
we know it, could exist in a world where we had to remove all 
user-generated content.
    Mr. Doyle. Yes.
    Dr. McSherry, you talk about the risk to free speech if 
Section 230 were repealed or substantially altered, but what 
other tools could Congress use to incentivize online platforms 
to moderate dangerous content and encourage a healthier online 
ecosystem? What would your recommendation be short of 
eliminating 230?
    Dr. McSherry. Well, I think a number of the problems that 
we have talked about today so far--which I think everyone 
agrees are very, very serious, and I want to underscore that--
are actually often addressed by existing laws that target the 
conduct itself. So, for example, in the Armslist case, we had a 
situation where what Armslist--the selling of the gun that was 
so controversial was actually perfectly legal under Wisconsin 
law.
    Similarly, many of the problems that we have talked about 
today are already addressed by Federal criminal laws that 
already exist, and so they aren't--Section 230 is not a 
barrier, because, of course, there is a carveout for Federal 
criminal laws.
    So I would urge this committee to look carefully at the 
laws that actually target the actual behavior that we are 
concerned about and perhaps start there.
    Mr. Doyle. Ms. Peters, you did a good job horrifying us 
with your testimony. What solution do you offer short of 
repealing 230?
    Ms. Peters. I don't propose repealing 230. I think that we 
want to continue to encourage innovation in this country. It is 
our core economic--a core driver of our economy. But I do 
believe that CDA 230 should be revised so that, if something is 
illegal in real life, it is illegal to host it online. I don't 
think that that is an unfair burden for tech firms. Certainly 
some of the wealthiest firms in our country should be able to 
take that on.
    I, myself, have a small business. We have to run checks to 
make sure when we do business with foreigners that we are not 
doing business with somebody that is on a terror blacklist. Is 
it so difficult for companies like Google and Reddit to make 
sure that they are not hosting an illegal pharmacy?
    Mr. Doyle. I see my time is getting way expired, but I 
thank you, and I think we get the gist of your answer.
    The chairman now yields to my ranking member for 5 minutes.
    Mr. Latta. Well, thank you, Mr. Chairman.
    And again, thanks to our witnesses.
    Ms. Oyama, if I could start with you. A recent New York 
Times article outlined the horrendous nature of child sex abuse 
online and how it has exponentially grown over the last decade. 
My understanding is tech companies are only legally required to 
report images of child abuse only when they discover it. They 
are not required to actively look for it.
    While I understand you make voluntary efforts to look for 
this type of content, how can we encourage platforms to better 
enforce their terms of service or proactively use their sword 
provided by subsection (c)(2) of Section 230 to take good faith 
efforts to create accountability within the platforms?
    Ms. Oyama. Thank you for the question and particularly for 
focusing on the importance of section (c)(2) to incentivize 
platforms to moderate content.
    I can say that, for Google, we do think that transparency 
is critically important, and so we publish our guidelines, we 
publish our policies, we publish on YouTube a quarterly 
transparency report where we show across the different 
categories of content what is the volume of content that we 
have been removing.
    And we also allow for users to appeal. So if their content 
is stricken and they think that was a mistake, they also have 
the ability to appeal and track what is happening with the 
appeal.
    So we do understand that this piece of transparency is 
really critical to user trust and for discussions with 
policymakers on these critically important topics.
    Mr. Latta. Thank you.
    Ms. Citron, a number of defendants have claimed Section 230 
immunity in the courts, some of which are tech platforms that 
may not use any user-generated content at all. Was Section 230 
intended to capture those platforms?
    Ms. Citron. So platforms are solely responsible for the 
content. The question is, there is no user-generated content, 
and they are creating the content? That is the question: would 
that be covered by the legal shield of 230? I am asking, is 
that the question?
    Mr. Latta. Right.
    Ms. Citron. No. They would be responsible for the content 
that they have created and developed. So Section 230, that 
legal shield, would not apply.
    Mr. Latta. Thank you.
    Mr. Farid, are there tools available, like PhotoDNA or 
Copyright ID, to flag the sale of illegal drugs online? If the 
idea is that platforms should be incentivized to actively scan 
their platforms and take down blatantly illegal content, 
shouldn't key words or other indicators associated with opioids 
be searchable through an automated process?
    Dr. Farid. The short answer is yes.
    There are two ways of doing content moderation. Once 
material has been identified, typically by a human moderator, 
whether that is child abuse material, illegal drugs, terrorism-
related material, copyright infringement, whatever it is, that 
material can be fingerprinted--digitally fingerprinted--and 
then stopped from future upload and distribution.
    That technology has been well understood and has been 
deployed for over a decade. I think it has been deployed 
anemically across the platforms and not nearly aggressively 
enough. That is one form of content moderation that works 
today.
    The second form of content moderation is what I call the 
day zero, finding the Christchurch video on upload. That is 
incredibly difficult and still requires law enforcement, 
journalists, or the platforms themselves to find. But once that 
content has been identified, it can be removed from future 
uploads.
    And I will point out, by the way, that today you can go 
onto Google and you can type ``buy fentanyl online'' and it 
will show you in the first page illegal pharmacies where you 
can click and purchase fentanyl.
    That is not a difficult find. We are not talking about the 
dark web. We are not talking about things buried on page 20. It 
is on the first page. And in my opinion, there is no excuse for 
that.
    Mr. Latta. Let me follow up, because you said it is anemic, 
what some of the platforms might be doing out there.
    You know, last year in this room, we passed over 60 pieces 
of legislation dealing with the drug crisis that we have in 
this country, fentanyl being one of them. You just mentioned 
that you can just type in ``fentanyl'' and you can find it. OK. 
Because again, what we are trying to do is make sure we don't 
have the 72,000 deaths that we had in this country over a year 
ago, with over 43,000 associated with fentanyl.
    So how do we go into the platforms and say, ``We have got 
to enforce this because we don't want the stuff flowing in from 
China''? And how do we do that?
    Dr. Farid. Well, this is what the conversation is. So I am 
with everybody else on the panel, we don't repeal 230, but we 
make it a responsibility, not a right. If your platform can be 
weaponized in the way that we have seen across the board in 
the litany of things that I had in my opening remarks, surely 
something is not working.
    If I can find it on Google on page 1--and not just me, my 
colleagues at the table, also investigative journalists--we 
know this content is there. It is not hiding. It is not 
difficult. And we have to ask the question that, if a 
reasonable person can find this content, surely Google with its 
resources can find it as well, and now what is the 
responsibility?
    And I think, as you said earlier, too, you just enforce 
your terms of service. So if we don't want to talk about 230, 
let's talk about terms of service. The terms of service of most 
of the major platforms are actually pretty good. It is just 
that they don't really do very much to enforce them in a clear, 
consistent, and transparent way.
    Mr. Latta. Thank you very much.
    Mr. Chairman, my time has expired, and I yield back.
    Mr. Doyle. The gentleman yields back.
    The Chair now recognizes Ms. Schakowsky, chair for the 
Subcommittee on Consumer Protection, for 5 minutes.
    Ms. Schakowsky. Thank you, Mr. Chairman.
    Ms. Oyama, you said in one of the sentences that you 
presented to us, ``without 230''--I want to see if there are 
any hands that would go up, that we should abandon 230. Has 
anybody said that? OK.
    So this is not the issue. This is a sensible conversation 
about how to make it better.
    Mr. Huffman, you said--and I want to thank you for--we had, 
I think, a really productive meeting yesterday--explaining to 
me what your organization does and how it is unique. But you 
also said in your testimony that Section 230 is a unique 
American law. And so--but, yes. When we talked yesterday, you 
thought it was a good idea to put it into a trade agreement 
dealing with Mexico and Canada.
    If it is a unique American law, let me just say that I 
think trying to fit it into the regulatory structure of other 
countries at this time is inappropriate.
    And I would like to just quote, I don't know if he is here, 
from a letter that both Chairman Pallone and Ranking Member 
Walden wrote some time ago to Mr. Lighthizer that said, ``We 
find it inappropriate for the United States to export language 
mirroring Section 230 while such serious policy discussions are 
ongoing.'' And that is what is happening right now. We are 
having a serious policy discussion.
    But I think what the chairman was trying to do and what I 
want to do is try to figure out, what do we really want to do 
to amend or change in some way? And so again, briefly, for the 
three of you that have talked about the need for changes, let 
me start with Ms. Citron, on what you want to see in 230.
    Ms. Citron. So I would like to bring the statute back to 
its original purpose, which was to apply to Good Samaritans who are 
engaged in responsible and reasonable content moderation 
practices. And I have the language to change the statute that 
would condition that we are not going to treat a provider or 
user of an interactive service that engages in reasonable 
content moderation practices as a publisher or a speaker. So it 
would keep the immunity, but it would----
    Ms. Schakowsky. Let me just suggest that if there is 
language, I think we would like to see suggestions.
    Ms. Peters, if you could, and I think you pretty much 
scared us as to what is happening, and then how we can make 230 
responsive to those concerns.
    Ms. Peters. Thank you for your question, Chair Schakowsky.
    We would love to share some proposed language with you 
about how to reform 230 to protect better against organized 
crime and terror activity on platforms.
    One of the things I am concerned about that a lot of tech 
firms are involved in is, when they detect illicit activity or 
it gets flagged to them by users, their response is to delete 
it and forget about it. What I am concerned about is two 
things.
    Number one, that essentially is destroying critical 
evidence of a crime. It is actually helping criminals to cover 
their tracks, as opposed to a situation like what we have for 
the financial industry and even aspects of the transport 
industry. If they know that illicit activity is going on, they 
have to share it with law enforcement and they have to do it in 
a certain timeframe.
    I certainly want to see the content removed, but I don't 
want to see it simply deleted, and I think that is an important 
distinction. I would like to see a world where the big tech 
firms work collaboratively with civil society and with law 
enforcement to root out some of these evil entities.
    Ms. Schakowsky. I am going to cut you off just because my 
time is running out and I do want to get to Dr. Farid with the 
same thing. So we would welcome concrete suggestions.
    Dr. Farid. Thank you.
    I agree with my colleague, Professor Citron. I think 230 
should be a privilege, not a right. You have to show that you 
are doing reasonable content moderation.
    I think we should be worried about the small startups. If 
we start regulating now, the ecosystem will become even more 
monopolistic. So we have to think about how we make carveouts 
for small platforms so they can still compete, since these 
companies did not have to deal with that regulatory pressure.
    And the last thing I will say is the rules have to be 
clear, consistent, and transparent.
    Ms. Schakowsky. Thank you. I yield back.
    Mr. Doyle. The Chair now recognizes Mrs. McMorris Rodgers 
for 5 minutes.
    Mrs. Rodgers. Thank you, Mr. Chairman.
    Section 230 was intended to provide online platforms with a 
shield from liability as well as a sword to make good faith 
efforts to filter, block, or otherwise address certain 
offensive content online.
    Professor Citron, do you believe companies are using the 
sword enough, and if not, why do you think that is?
    Ms. Citron. We are seeing the dominant platforms--I have 
been working with Facebook and Twitter for about 8 years--and 
so I would say the dominant platforms and folks on this panel 
at this point are engaging in what I would describe at a broad 
level as fairly reasonable content moderation practices.
    I think they could do far better on transparency about what 
they mean when they forbid hate speech. What do they mean by 
that? What is the harm that they want to avoid? Examples. And 
they could be more transparent about the processes that they 
use when they make decisions, right, to have more 
accountability.
    But what really worries me are the sort of renegade sites 
as well, the 8chans, who foment incitement with no moderation, 
dating apps that have no ability to ban impersonators and their 
IP addresses. And frankly, sometimes, it is the biggest of 
providers, not the small ones, who know they have illegality 
happening on their platforms and do nothing about it.
    Mrs. Rodgers. And why are they doing that?
    Ms. Citron. Because of Section 230 immunity. So the dating 
app Grindr comes to mind, hosting impersonations by someone's 
ex. And the person was using Grindr to send thousands of men to 
this man's home. Grindr heard 50 times from the individual who 
was being targeted, did nothing about it.
    Finally, when they responded after getting a lawsuit, their 
response was, ``Our technology doesn't allow us to track IP 
addresses.''
    But Grindr is fairly dominant in this space. But when the 
person went to SCRUFF, which is a smaller dating site, the 
impersonator was again posing as the individual, sending men to 
his home, and SCRUFF responded right away. They said, ``We can 
ban the IP address'' and took care of it.
    So I think, as to the notion of smaller versus large, by my 
lights there are good practices, responsible practices, and 
irresponsible, harmful practices.
    Mrs. Rodgers. OK. Thank you for that.
    Mr. Huffman and Ms. Oyama, your company policies 
specifically prohibit illegal content or activities on your 
platform. Regarding your terms of service, how do you monitor 
content on your platform to ensure that it does not violate 
your policies?
    Maybe I will start with Mr. Huffman.
    Mr. Huffman. Sure. So, in my opening statement, I described 
the three layers of moderation that we have on Reddit, our 
company's moderation and our team. This is the group that both 
writes the policies and enforces the policies.
    Primarily the way they work is enforcing these policies at 
scale, so looking for aberrational behavior, looking for known 
problematic sites or words. We participate in the cross-
industry hash sharing, which allows us to find images, for 
example, exploitive of children that are shared industrywide, 
or fingerprints thereof.
    Next, though, are our community moderators--these are 
users--and then the users themselves. Those two groups 
participate together in removing content that is inappropriate 
for their community and in violation of our policies.
    We have policies against hosting. Our content policy is not 
very long, but one of the points is no illegal content. So no 
regulated goods, no drugs, no guns, anything of that sort, 
controlled----
    Mrs. Rodgers. So you are seeking it out, and if you find it, 
then you get it off the platform.
    Mr. Huffman. That is right, because 230 doesn't provide us 
criminal liability protection. And so we are not in the 
business of committing crimes or helping people commit crimes. 
That would be problematic for our business. So we do our best 
to make sure it is not on the platform.
    Mrs. Rodgers. Thank you.
    Ms. Oyama, would you address that, and then just what you 
are doing if you find that illegal content?
    Ms. Oyama. Thank you. Yes.
    Across YouTube, we have very clear content policies. We 
publish those online. We have YouTube videos that give more 
examples and some specific ways so people understand.
    Of the 9 million videos that we removed from YouTube in the 
last quarter, 87 percent were detected first by machine. So 
automation is one very 
important way.
    And then the second way is human reviewers. So we have 
community flagging where any user that sees problematic content 
can flag it and follow what happens with that complaint. We 
also have human reviewers that look, and then we are very 
transparent in explaining that.
    When it comes to criminal activity on the internet, you 
know, of course, CDA 230 has a complete carveout. So in the 
case of Grindr we have policies against harassment. But in the 
case of Grindr where there was real criminal activity, my 
understanding is there is a defendant in that case, and there 
is a criminal case for harassment and stalking that is 
proceeding against him.
    And so in certain cases--opioids, again, being a controlled 
substance--under criminal law there is a section that 
addresses, I think, the sale of controlled substances on the 
internet; that is a provision.
    In cases like that, where there is actually a law 
enforcement role, if there is correct legal process, then we 
would work with law enforcement to also provide information 
under due process or a subpoena.
    Mrs. Rodgers. Thank you.
    OK. My time has expired. I yield back.
    Mr. Doyle. The gentlelady yields. Thank you.
    Ms. DeGette, you are recognized for 5 minutes.
    Ms. DeGette. Thank you so much, Mr. Chairman.
    I really want to thank this panel. I am a former 
constitutional lawyer, so I am always interested in the 
intersection between criminality and free speech.
    And in particular, Professor Citron, I was reading your 
written testimony, which you confirmed with Ms. Schakowsky, 
about how Section 230 should be revised to both continue to 
provide First Amendment protections but also return the statute 
to its original purpose, which is to let companies act more 
responsibly, not less.
    And, in that vein, I want to talk during my line of 
questioning about online harassment, because this is a real--
sexual harassment--this is a real issue that has just only 
increased. The Anti-Defamation League reported that 24 percent 
of women and 63 percent of LGBTQ individuals have experienced 
online harassment because of their gender or sexual 
orientation, and this is compared to only 14 percent of men, 
and 37 percent of all Americans of any background have 
experienced severe online harassment, which includes sexual 
harassment, stalking, physical threats, and sustained 
harassment.
    So I want to ask you, Professor Citron, and also I want to 
ask you, Ms. Peters, very briefly to talk to me about how 
Section 230 facilitates illegal activities, and do you think it 
undermines the value of those laws, and if so, how.
    Professor Citron.
    Ms. Citron. So let me say that in cases involving 
harassment, of course, there is a perpetrator and then the 
platform that enables it. And most of the time the perpetrators 
are not pursued by law enforcement. So in my book ``Hate Crimes 
in Cyberspace'' I explore the fact that law enforcement, really 
they don't get the--they don't understand the abuse, they don't 
know how to investigate it.
    In the case of Grindr, police--there were, like, 10 
protective orders that were violated, and law enforcement in 
New York has done nothing about it.
    So it is not true that we can always find the perpetrator, 
nor especially in the cases of stalking, harassment, and 
threats. We see a severe underenforcement of law, particularly 
when it comes to gendered harms.
    Ms. DeGette. And that is really where it falls to the 
sites, then, to try to protect.
    Ms. Peters, do you want to comment on that?
    Ms. Peters. I just wanted to say that in this issue there 
needs to be something akin to a cyber restraining order, 
so that if somebody is stalking somebody on Grindr or OkCupid 
or Google, that site can be ordered to block that person from 
communicating with the other.
    Ms. DeGette. OK. And even under Section 230 immunity, can 
platforms ignore requests to take down this type of material?
    Ms. Peters. They have.
    Ms. DeGette. Professor Citron, you are nodding your head.
    Ms. Citron. They do and they can, especially if those 
protective orders are coming from State criminal law.
    Ms. DeGette. OK.
    I wanted to ask you, Dr. McSherry, sexual harassment 
continues to be a significant problem on Twitter and other 
social platforms, and I know Section 230 is a critical tool 
that facilitates content moderation. But, as we have heard in 
the testimony, a lot of the platforms aren't being aggressive 
enough to enforce the terms and conditions. So what I want to 
ask you is, what can we do to encourage platforms to be more 
aggressive in protecting consumers and addressing issues like 
harassment?
    Dr. McSherry. I imagine this hearing will encourage many of 
them to do just that.
    Ms. DeGette. But we keep having hearings----
    Dr. McSherry. No, no, no. I understand. Absolutely. I 
understand that.
    So I actually think that many, many of the platforms are 
pretty aggressive already in their content moderation policies. 
I agree with what many have said here today, which is that it 
would be nice if they would start by clearly enforcing their 
actual terms of service, which we share a concern about because 
often they are enforced very inconsistently, and that is very 
challenging for users.
    A concern that I have is, if we institute what I think is 
one proposal, which is that whenever you get a notice you have 
some duty to investigate, that could actually backfire for 
marginalized communities, because one of the things that also 
happens is if you want to silence someone online, one thing you 
might do is flood a service provider with complaints about 
them. And then they end up being the ones who are silenced 
rather than the other way around.
    Ms. DeGette. Dr. Farid, what is your view of that?
    Dr. Farid. Pardon me?
    Ms. DeGette. What is your view of what Dr. McSherry said?
    Dr. Farid. There are two issues at hand here. When you do 
moderation, you risk overmoderating or undermoderating.
    Ms. DeGette. Right.
    Dr. Farid. What I would argue is we are way, way 
undermoderating. When I look at where we fall down and where we 
make mistakes and take down content we shouldn't, and I weigh 
that against 45 million pieces of content reported just last 
year to NCMEC--child abuse material and terrorism and drugs--
the weights are imbalanced. We have to sort of rebalance, and 
we have to try to get it right.
    We are going to make mistakes, but we are making way more 
mistakes on allowing content right now than we are on not 
allowing.
    Ms. DeGette. Thank you.
    Thank you very much, Mr. Chairman. I yield back.
    Mr. Doyle. The gentlelady yields back.
    The Chair now recognizes Mr. Johnson for 5 minutes.
    Mr. Johnson. Thank you, Mr. Chairman, to you and to 
Chairwoman Schakowsky, for holding this very important hearing.
    You know, I have been in information technology for most of 
my adult life, and social responsibility has been an issue that 
I have talked about a lot. I think the absence of heavy-handed 
government regulation is what has allowed the internet and the 
social media platforms 
to grow like they have. But I hate to sound cliche-ish; there 
is that old line from the ``Jurassic Park'' movie: Sometimes we 
are more focused on what we can do, and we don't think about 
what we should do. And so I think that is where we find 
ourselves with some of this.
    We have heard from some of our witnesses that the 
accessibility of a global audience through internet platforms 
is being used for 
illegal and illicit purposes by terrorist organizations and 
even for the sale of opioids, which continues to severely 
impact communities across our Nation, particularly in rural 
areas like I live in, in eastern and southeastern Ohio.
    However, internet platforms also provide an essential tool 
for legitimate communication and the free, safe, and open 
exchange of ideas, which has become a vital component of modern 
society and today's global economy.
    I appreciate hearing from all of our witnesses as our 
subcommittees examine whether Section 230 of the Communications 
Decency Act is empowering internet platforms to effectively 
self-regulate under this light-touch framework.
    So, Mr. Huffman, in your testimony you discuss the ability 
of not only Reddit employees but its users to self-regulate and 
remove content that goes against Reddit's stated rules and 
community standards. Do you think other social media platforms, 
for example, Facebook or YouTube, have been able to 
successfully implement similar self-regulating functions and 
guidelines? If not, what makes Reddit unique in their ability 
to self-regulate?
    Mr. Huffman. Sure. Thank you, Congressman.
    I am only familiar with the other platforms to the extent 
that you probably are, which is to say I am not an expert. I do 
know they are not sitting on their hands. I know they are 
making progress.
    But Reddit's model is unique in the industry in that we 
believe that the only thing that scales with users is users. 
And so, when we are talking about user-generated content, 
sharing some of this burden with those people, in the same way 
that in our society here in the United States there are many 
unwritten rules about what is acceptable or not to say, the 
same thing exists on our platforms. And by allowing and 
empowering our users and communities to enforce those unwritten 
rules, it creates an overall more healthy ecosystem.
    Mr. Johnson. OK.
    Ms. Oyama, in your testimony you discuss the responsibility 
of determining which content is allowed on your platforms, 
including balancing respect for diverse viewpoints and giving a 
platform for marginalized voices. Would a system like Reddit's 
up votes and down votes impact the visibility of diverse 
viewpoints on platforms like YouTube? And do dislikes on 
YouTube impact a video's visibility?
    Ms. Oyama. Thank you for the question.
    As you have seen, users can give thumbs up or thumbs down 
to a video. It is one of many, many signals, so it certainly 
wouldn't be determinative in terms of a recommendation of a 
video on YouTube. That would mostly be for relevance.
    And I really appreciate your point about responsible 
content moderation. I did want to make the point that, on the 
piece about harassment and bullying, we did remove 35,000 
videos from YouTube just in the last quarter, and we can do 
this because of CDA 230.
    Whenever someone's content is removed, they may also be 
upset, so there could be cases against a service provider for 
defamation, for breach of contract. And service providers, 
large and small, are able to have these policies and implement 
procedures to identify bad content and take it down because of 
the provisions of CDA 230.
    Mr. Johnson. OK. Well, I have got some other questions that 
I am going to submit for the record, Mr. Chairman, but let me 
just summarize with this, because I want to stay within my 
time, and you are going to require me to stay within my time.
    So in the absence of regulations, as I mentioned in my 
opening remarks, that takes social responsibility to a much 
higher bar. And I would suggest to the entire industry of the 
internet, social media platforms, we better get serious about 
this self-regulating, or you are going to force Congress to do 
something that you might not want to have done.
    With that, I yield back.
    Mr. Doyle. The gentleman yields back.
    The Chair recognizes Ms. Matsui for 5 minutes.
    Ms. Matsui. Thank you very much, Mr. Chairman.
    I want to once again thank the witnesses for being here 
today.
    Ms. Oyama and Mr. Huffman, last week the Senate Intel 
Committee released a bipartisan report on Russia's use of 
social media. The report found that Russia used social media 
platforms to sow social discord and influence the outcome of 
the 2016 election.
    What role can Section 230 play in ensuring that platforms 
are not used again to disrupt our political process?
    Ms. Oyama, Mr. Huffman, comments?
    Ms. Oyama. Thank you. Again, CDA 230 is critically 
important for allowing services like us to protect citizens and 
users against foreign interference in elections. It is a 
critical issue, especially with the election cycle coming up.
    We found on Google across our systems in the 2016 election, 
fortunately, due to the measures we have been able to take and 
ad removals, there were only two accounts that had infiltrated 
our systems. They had a spend of less than $5,000 back in 2016.
    We continue to be extremely vigilant. So we do publish a 
political ads transparency report. We require that ads are 
disclosed, who paid for them. They show up in a library. They 
need to be----
    Ms. Matsui. So you feel that you are effective?
    Ms. Oyama. We can always do more, but on this issue, we are 
extremely focused on it and working with campaigns to protect--
--
    Ms. Matsui. Mr. Huffman.
    Mr. Huffman. Yes, Congresswoman. So, in 2016, we found that 
the--we saw the same fake news and misinformation submitted to 
our platform as we saw on the others. The difference is, on 
Reddit it was largely rejected by the community, by the users, 
long before it even came to our attention.
    If there is one thing Reddit is good at, or our community is 
good at, it is being skeptical and rejecting or questioning 
everything, for better or for worse.
    Between then and now, we have become dramatically better at 
finding groups of accounts that are working in a coordinated or 
inauthentic manner, and we collaborate with law enforcement. So 
based on everything we have learned in the past and can see 
going forward, I think we are in a pretty good position coming 
into the 2020 election.
    Ms. Matsui. OK.
    Dr. Farid, in your written testimony, you mention the 
proliferation of mis- and disinformation campaigns designed to 
disrupt democratic elections. This sort of election 
interference really troubles me and a lot of other people.
    You mentioned there is more that platforms could be doing 
about moderating content online. What more should they be doing 
about this issue now, this time?
    Dr. Farid. Yes. So let me just give you one example. A few 
months ago, we saw a fake video of Speaker Pelosi make the 
rounds, OK, and the response was really interesting. So 
Facebook said, ``We know it is fake, but we are leaving it up. 
We are not in the business of telling the truth.''
    So that was not a technological problem, that was a policy 
problem. That was not satire. It was not comedy. It was meant 
to discredit the Speaker.
    And so I think, fundamentally, we have to relook at the 
rules. And in fact, if you look at Facebook's rules, it says 
you cannot post things that are misleading or fraudulent. That 
was a clear case where the technology worked, the policy is 
unambiguous, and they simply failed to implement the policy.
    Ms. Matsui. They failed. OK.
    Dr. Farid. To YouTube's credit, they actually took it down. 
And to Twitter's discredit, they didn't even respond to the 
issue.
    So in some cases, there is a technological issue, but more 
often than not we are simply not enforcing the rules that are 
already in place.
    Ms. Matsui. So that is a decision they made----
    Dr. Farid. Right.
    Ms. Matsui [continuing]. Not to enforce the rules.
    OK.
    Ms. Oyama and Mr. Huffman, what do you think about what Mr. 
Farid just said?
    Mr. Huffman. Sure. I will respond.
    There are two aspects to this. First, specifically towards 
Reddit, we have a policy against impersonation.
    Ms. Matsui. OK.
    Mr. Huffman. So a video like that can be used both to 
manipulate people and to serve as misinformation. It also 
raises questions about the veracity of the things that we see 
and hear and prompts important discussions.
    So the context around whether a video like that stays up or 
down on Reddit is really important, and those are difficult 
decisions.
    I will observe that we are entering into a new era where we 
can manipulate videos. We have historically been able to 
manipulate text and images with Photoshop, and now videos.
    So I do think not only do the platforms have a 
responsibility, but we as a society have to understand that the 
source of materials--for example, which publication--is 
critically important because there will come a time, no matter 
what any of my tech peers say, where we will not be able to 
detect that sort of fakery.
    Ms. Matsui. Exactly.
    And, Ms. Oyama, I know I only have 15 seconds.
    Ms. Oyama. Thank you.
    I mean, on the specific piece of content that you 
mentioned, YouTube, we do have a policy against deceptive 
practices and removed it.
    But there is ongoing work that needs to be done to be able 
to better identify deepfakes. I mean, of course, even comedians 
sometimes use them, but in a political context or other places, 
it could severely undermine democracy. And we have opened up 
data sets, we are working with researchers to build technology 
that can better detect when media is manipulated in order for 
those policies to kick in.
    Ms. Matsui. Well, I appreciate the comment. I have a lot 
more to say, but you know how this is.
    But anyway, I yield back the balance of my time. Thank you.
    Mr. Doyle. The gentlelady yields back.
    The Chair recognizes Mr. Kinzinger for 5 minutes.
    Mr. Kinzinger. Thank you, Mr. Chairman.
    And thank you all for being here today. We very much 
appreciate it.
    It is interesting, on the last line of questions, you know, 
one of the best things about democracy is our ability to have 
free speech and share opinions, but this can also be something 
that is a real threat. So I thank the chairman for yielding.
    And I think it is safe to say that not every Member of 
Congress has a plan for what to do about Section 230 of the 
Communications Decency Act, but I think we all agree that the 
hearing is warranted. We need to have a discussion about the 
origins and intent of that section and whether the companies 
that enjoy these liability protections are operated in the 
manner intended.
    And I will state up front that I generally appreciate the 
efforts certain platforms have made over the years to remove 
and block unlawful content. But I would also say that it is 
clearly not enough and that the status quo is unacceptable.
    It has been frustrating for me in recent years that my 
image and variations of my name have been used by criminals to 
defraud people on social media, and this goes back 10 years, 
and literally, I think, could approach the fifties to hundreds, 
given just the ones that we know about. These scams 
are increasingly pervasive, and I not only brought it up in the 
hearing with Mark Zuckerberg last year, I also wrote him again 
this summer to continue to press him to act more boldly to 
protect his users.
    So I have a question. Sources indicate that in 2018 people 
reported hundreds of millions of dollars lost to online 
scammers, including $143 million through romance scams. Given 
what so many people have gone through, it has become more and 
more important for platforms to verify user authenticity.
    So both to Mr. Huffman and Ms. Oyama, what do your 
platforms do to verify the authenticity of user accounts?
    Mr. Huffman. Sure. Thank you for the question.
    So there are again two parts to my answer. The first is on 
the scams themselves. My understanding is you are probably 
referring to scams that target veterans in particular.
    We have a number of veterans communities on Reddit around 
support and shared experiences. They all, like all of our 
communities, create their own rules, and these communities have 
actually all created rules that prohibit fundraising generally, 
because the community and the members of those communities know 
that they can be targeted by this sort of scam in particular.
    So that is the sort of nuance that we think is really 
important and highlights the power of our community model, 
because I, as a nonveteran, might not have had that same sort 
of intuition.
    Now, in terms of what we know about our users, Reddit is 
not--we are different from our peers in that we don't require 
people to share their real world identity with us. We do know 
where they register from, what IPs they use, maybe their email 
address, but we don't force them to reveal their full name or 
their gender. And this is important, because on Reddit there 
are communities that discuss sensitive topics, in those very 
same veteran communities or, for example, drug addiction 
communities or communities for parents who are struggling being 
new parents. These are not things that somebody would go onto a 
platform like Facebook, for example, and say, ``Hey, I don't 
like my kids.''
    Mr. Kinzinger. Yes, I understand. I don't mean to cut you 
off, but I want to go to Ms. Oyama.
    Ms. Oyama. Sure. And I am very sorry to hear that that 
happened to you, Congressman.
    On YouTube we have a policy against impersonation. So if 
you were to ever see a channel that was impersonating you or a 
user saw that, there is a form where they can go in and submit. 
I think they upload their government ID, but that would result 
in the channel being struck.
    On Search, spam can show up across the web. Search is an 
index of the web. We are trying to give relevant information to 
our users every single day on Search. We suppress 19 billion 
links that are spam, that could be scams, to defend the users. 
And then on Ads, we have something called the Risk Engine that 
can actually kick out bad or fraudulent accounts before they 
enter the system.
    Mr. Kinzinger. Thank you.
    And, you know, look, I am not upset about the sites that 
are, like, ``Kinzinger is the worst Congressman ever,'' right, 
that is understandable, I guess, for some people. But when you 
have, again, in my case, somebody that flew--as an example, and 
there are multiple cases--flew from India using her entire life 
savings because she thought we were dating for a year, not to 
mention all the money that she gave to this perpetrator, and 
all these other stories.
    I think one of the biggest and most important things is 
people need to be aware of that. If you have somebody over a 
period of a year dating you and never authenticated that, it is 
probably not real.
    Ms. Peters, what are the risks associated with people not 
being able to trust other users' identities online?
    Ms. Peters. I think there are multiple risks of that, but I 
want to come back to the key issue for us, which is if it is 
illicit the sites should be required to hand over data to law 
enforcement, to work proactively with law enforcement.
    We have heard a lot today from the gentleman from Reddit 
about their efforts to better moderate. Some of our members 
were able to go online just the other day, type in a search for 
``buy fentanyl'' online, and came up with many, many results. 
The same for ``buy Adderall online,'' ``buy Adderall for cheap 
without prescription.''
    Those are fairly simply search terms. I am not talking 
about a super high bar. To get rid of that on your platform 
doesn't seem too hard, or to have that automatically direct to 
a site that would advise you to get counseling for drug abuse.
    We are not trying to be the thought police. We are trying 
to protect people from organized crime and terror activity.
    Mr. Kinzinger. Thank you. And I will yield back, but I have 
a bunch more questions I will submit. Thank you.
    Mr. Doyle. The gentleman yields back.
    And for the record, I want to say I don't think the 
gentleman is the worst Member of Congress. I don't even think 
you are at the very bottom, Adam. You are not a bad guy.
    The Chair recognizes Ms. Castor for 5 minutes.
    Ms. Castor. Well, thank you, Chairman Doyle, for organizing 
this hearing.
    And thanks to all of our witnesses for being here today.
    I would like to talk about the issue of 230 in the context 
of this horrendous tragedy in Wisconsin a few years ago and 
Armslist.com, where a man walked into a salon where his wife 
was working and shot her dead in front of their daughter and 
killed two others in that salon and then killed himself. And 
this is the type of horrific tragedy that is all too common in 
America today.
    But, Dr. McSherry, you mentioned--I think you misspoke a 
little bit because you said that was all legal, but it wasn't, 
because 2 days before the shooting there was a temporary 
restraining order issued against that man. He went online 
shopping on Armslist.com 2 days after that TRO was issued, and 
the next day he commenced his murder spree.
    And what happened is Armslist knows that they have domestic 
abusers shopping, they have got felons, they have got 
terrorists shopping for firearms, and yet they are allowed to 
proceed with this.
    Earlier this year, the Wisconsin Supreme Court ruled that 
Armslist is immune because of Section 230, even though they 
know that they are perpetuating illegal content in these kinds 
of tragedies. The court basically said it did not matter that 
Armslist actually knew or even intended that its website would 
facilitate illegal firearms sales to dangerous persons; Section 
230 still granted immunity.
    And then, Ms. Peters, you have highlighted that this is not 
an isolated incident. We are talking about child sexual abuse 
content, illegal drug sales. I mean, it has just--it has gone 
way too far.
    So I appreciate that you all have proposed some solutions 
for this.
    Dr. Citron, you have highlighted a safe harbor, that if 
companies use their best efforts to moderate content they would 
have some protection. But how would this work in reality? Would 
it then be left up to the courts in those types of liability 
lawsuits, which kind of speaks to the need for very 
clear standards coming out of the Congress, I think?
    Ms. Citron. So yes, it would. And thank you so much for 
your question. How would we do this? It would be in the courts. 
So it would be an initial motion to dismiss. The company would 
then--whoever is being sued, the question would be: Are you 
being reasonable in your content moderation practices writ 
large, not with regard to any one piece of content or activity? 
And it is true that the enforcing mechanism, the 12(b)(6) 
motion in Federal court, would then have companies explain what 
constitutes reasonableness.
    Now, I think we can come up right now, all of us, with some 
basic sort of threshold of what we think are reasonable content 
moderation practices, what we might describe as technological 
due process: transparency, accountability, having a process, 
and having clarity about what it is you prohibit.
    But it is going to have to be case by case, context by 
context, because what is a reasonable response to a deepfake, 
and I have done a considerable amount of work on deepfakes, is 
going to be different from the kind of advice I would give to 
Facebook, Twitter, and others about what constitutes a threat 
and how one figures that out. How we can use--and I am thinking 
about Dr. Farid's testimony about what we do about--there are 
certain issues----
    Ms. Castor. And then let me--and it would be in the public 
interest, I believe, that if it is explicit illegal content, 
that they don't--it wouldn't wind up as an issue of fact in a 
lawsuit.
    What do you think, Dr. Farid? If it is illegal content 
online, there really shouldn't be a debatable question, right?
    Dr. Farid. I am not a lawyer, to be clear, I am a 
mathematician by training, so I don't think you really want to 
be asking me that question, but I completely agree with you. In 
some cases we have seen over the years, and we saw this when we 
were deploying PhotoDNA, is the technology companies want to 
get you muddled up in the gray area.
    So we had conversations when we were trying to remove child 
abuse material saying: What happens when it is an 18-year-old? 
You know, what happens when it is not sexually explicit?
    And my answer is, yes, those are complicated questions, but 
there is really clearcut bad behavior. We are doing awful 
things to kids as young as 2 months old. There is no issue.
    Ms. Castor. I am going to interrupt you, because my time is 
short, and I am just going to highlight one more issue to the 
witnesses. There is 
also an issue with the number of moderators who are being hired 
to go through this content. A publication called The Verge had 
a horrendous story of Facebook moderators, and it caught my 
attention because one of the places is in Tampa, Florida, my 
district.
    I am going to submit follow-up questions about moderators 
and some standards for that practice, and I encourage you to 
answer and send them back. Thank you.
    Mr. McNerney [presiding]. The gentlelady yields.
    Now the Chair recognizes the gentleman from Illinois, Mr. 
Shimkus, for 5 minutes.
    Mr. Shimkus. Thank you, Mr. Chairman. It is great to be 
with you. I am sorry I missed a lot of this because I am 
upstairs. But in my 23 years being a Member, I have never had a 
chance to really address the same question to two different 
panels on the same day. So it was kind of an interesting 
convergence. Upstairs we are talking about e-vaping and 
underage use and what is in the product.
    So I was curious, when we were in the opening statements 
here, someone--and I apologize, I am not sure who--mentioned 
two cases. One was dismissed because they really did nothing, and 
one, the one who tried to be the good actor, got slammed. I 
don't know about slammed. But I see a couple heads being--Ms. 
Citron, can you address that first? You are shaking it the 
most.
    Ms. Citron. Yes, enthusiastically, because those are the 
two cases that effectively gave rise to Section 230. So what 
animates Chris Cox to go to Ron Wyden and say, you know, ``We 
have got to do something about this'' is two--a pair of 
decisions in which one basically says, if you do nothing you 
are not going to be punished for it, but if you try and you 
moderate, actually that heightens your responsibility.
    Mr. Shimkus. So no good deed goes unpunished.
    Ms. Citron. Exactly. Right. So that is why we are in heated 
agreement about those two cases. That is why we are here today 
in many respects.
    Mr. Shimkus. So, if I tie into this what is going on 
upstairs, and someone uses a platform to encourage underage 
vaping with unknown nicotine content, and the site then decides 
to clean it up, because of the way the law is written right now 
this good deed, which most of us would agree probably is a good 
deed, would go punished?
    Ms. Citron. No, no. Now we have Section 230. That is why we 
have Section 230. They are encouraged, just so long as they are 
doing it in good faith, under section 230 (c)(2), they can 
remove it, and they are Good Samaritans.
    Mr. Shimkus. Right. OK. So that is the benefit of it. Is 
there fear? OK. So in this debate that we heard earlier in 
opening comments from some of my colleagues in the USMCA 
debate, that part of that would remove the protections of 230, 
and then we would fall back to a regime by which the good-deed 
person could get punished. Is that correct? Everybody is kind 
of shaking their head mostly?
    Ms. Peters, you are not. Go ahead.
    Ms. Peters. We need to keep the 230 language out of the 
trade agreements. It is currently an issue of great debate here 
in the United States. It is not fair to put that in a trade 
agreement. It will make it impossible for--or make it harder 
for----
    Mr. Shimkus. Well, don't get me wrong, I want USMCA passed 
as soon as possible without any encumbered work that doesn't 
happen, and I am not a proponent of trying to delay this 
process, but I am just trying to work through this debate. I 
mean, the concern upstairs to those of us--we believe in legal 
products that have been approved by the FDA, and we are 
concerned about a black market operation that would then use 
platforms illicitly to sell to underage kids. That would be how 
I would tie these two hearings together, which, again, I still 
think is pretty interesting.
    When we had the Facebook hearing a couple years ago, I 
referred to a book called ``The Future Computed,'' which talks 
about the ability of industry to set those standards. I do 
think that industry--we do this across the board in a lot of 
this, whether it is engineering of heating and air cooling 
equipment or that. We do have industry that comes together for 
the good of the whole, for the good actors, and say, ``Here are 
our standards.''
    And the fear is that, if this sector doesn't do that, then 
the heavy hand of government will do it, which I think would 
really cause a little more problem.
    Dr. Farid, you are shaking your head.
    Dr. Farid. We have been saying to the industry, ``You have 
to do better because, if you don't, somebody is going to do it 
for you. So you do it on your terms or somebody else's terms.''
    Mr. Shimkus. That would be us.
    Dr. Farid. So do it on your terms. I agree.
    Mr. Shimkus. We are not the experts.
    So part of the book talks about fairness, reliability, 
privacy, inclusion, transparency, and accountability. I would 
encourage the industry and those who are listening to help us 
move in that direction on their own before we do it for them.
    And with that, Mr. Chairman, I yield back my time.
    Mr. McNerney. The gentleman yields, and the Chair 
recognizes the chair for 5 minutes.
    I would like to--I mean, it is very interesting testimony 
and jarring in some ways.
    Ms. Peters, your testimony was particularly jarring. Have 
you seen any authentic offers of weapons of mass destruction 
being offered for sale online?
    Ms. Peters. I have not personally, but we certainly have 
members of our alliance that are tracking weapons activity. And 
I think what is more concerning to me in a way is the number of 
illegal groups, from Hezbollah, designated Hezbollah groups, to 
al-Qaida, that maintain web pages and links to their Twitter 
and Facebook pages from those and then run fundraising 
campaigns off of them. There are many, many----
    Mr. McNerney. I am just interested in the weapons of mass 
destruction issue.
    Ms. Peters. There are many platforms that allow for secret 
and private groups. It is inside--those groups are the 
epicenter of illicit activity. So it is hard for us to get 
inside those. We have actually run undercover operations to get 
inside some of them. But we haven't gotten----
    Mr. McNerney. All right. Thank you, Ms. Peters.
    Mr. Farid, in your testimony, you talked about the tension 
at tech companies between the motivation to maximize the amount 
of time users spend on their platforms on the one hand, and 
content moderation on the other. Could you talk about that 
briefly, please?
    Dr. Farid. So we have been talking a lot about 230, and 
that is an important conversation, but there is another tension 
point here, and there is another thing, which is the underlying 
business model of Silicon Valley today is not to sell a 
product. You are the product.
    And in some ways that is where a lot of the tension is 
coming from, because the metrics these companies use for 
success are how many users they have and how long they stay on 
the platforms. You can see why that is fundamentally in tension 
with removing users, removing content.
    And so the business model is also at issue, and the way we 
deal with privacy of user data is also at issue here, because 
if the business model is monetizing your data, well, then I 
need to feed you information. There is a reason why we call it 
the rabbit hole effect on YouTube. There is a reason why, if 
you start watching certain types of videos of children or 
conspiracies or extremism, you are fed more and more and more 
of that content down the rabbit hole.
    And so there is real tension there, and it is the bottom 
line. It is not just ideological. We are talking about the 
underlying profits.
    Mr. McNerney. OK.
    Ms. Oyama, would you like to add to that?
    Ms. Oyama. Thank you.
    I think with many of these issues that we are discussing 
today, whether it is harassment or extremism, it is important 
to remember the positive and productive potential of the 
internet. On YouTube we have seen It Gets Better, we have seen 
countermessaging. We have a program called Creators for Change 
who are able to create really compelling content for youth to 
counter extremist messages.
    And I think it is just good to remember the CDA 230 was 
born out of this committee. It has been longstanding policy. It 
is relevant to foreign policy as well. We would support its 
inclusion in USMCA or any other modern digital trade framework. 
It is responsible for the $172 billion surplus the United 
States has in digital services. It is critically important for 
small businesses to be able to moderate content and to prevent 
censorship from other, more oppressive regimes abroad.
    Mr. McNerney. It is a great issue, and it is kind of hard 
to restrain yourself to brief answers. I understand that.
    But clearly, companies could be doing more today within the 
current legal framework to address problematic content. I would 
like to ask each of you very briefly what you think could be 
done today with today's tools to moderate content, starting 
with Mr. Huffman. Very briefly, please.
    Mr. Huffman. Sure. So for us, the biggest challenge is 
evolving our policies to meet new challenges. But as such, we 
have evolved our policies a dozen times over the last couple 
years, and we continue to do so into the future. For example, 
two recent ones for us were expanding our harassment policy and 
banning deepfake pornography.
    So undoubtedly there will be--``deepfake pornography'' 
wasn't even a word 2 years ago. So undoubtedly there will be 
new challenges in the future, and being able to stay nimble and 
address them is really important. 230 actually gives us the 
space to adapt to these sorts of new challenges.
    Mr. McNerney. OK.
    Ms. Citron.
    Ms. Citron. I would say so would a reasonableness standard. 
The nimbleness that reasonableness enables is ensuring that we 
do respond to changing threats. The threat landscape is going to 
change. We can't have a checklist right now. But I would 
encourage companies to not only have policies but be clear 
about them and to be accountable.
    Mr. McNerney. OK.
    Dr. McSherry.
    Dr. McSherry. Just quickly, the issue for me with the 
reasonableness standard is, as a litigator, that is terrifying. 
That means as a practical matter, especially for a small 
business, a lot of litigation risk as courts try to figure out 
what counts as reasonable.
    To your question, one of the crucial things I think we need 
if we want better moderation practices and we want users not to 
be treated just as products is to incentivize alternative 
business models. We need to make sure that we clear a space so 
there is competition so then, when a given site is behaving 
badly, such as Grindr, people have other places to go with 
other practices and they are encouraged to--you know, other 
sites are encouraged to develop and evolve. That will make--
market forces sometimes can work. We need to let them work.
    Mr. McNerney. Thank you.
    I am going to have to cut off my time now, and I am going 
to yield to the gentlelady from Indiana, Mrs. Brooks, for 5 
minutes.
    Mrs. Brooks. Thank you, Mr. Chairman. Thank you so much for 
this very important hearing.
    Dr. Farid, actually, for the record, and the reason I am 
asking these questions is that I am a former U.S. attorney. I was very 
involved in the Internet Crimes Against Children Task Force. We 
did a lot of work from 2001 to 2007.
    And you are right, Mr. Huffman, deepfake pornography was 
not a term at that time.
    And so we certainly know that law enforcement has been 
challenged for now decades in dealing with pornography over the 
internet. And yet, I believe that we have to continue to do 
more to protect children and protect kids all around the globe.
    A concept, or tool, PhotoDNA, was developed a long time ago 
to detect criminal online child pornography, yet it means 
nothing to detect that illegal activity if the platforms don't 
do anything about it. And so we have been dealing with this 
now for decades. This is not new. And yet, we now have new 
tools, right, so PhotoDNA. Is it a matter of tools or effort? 
Or how is it that it is still happening?
    Dr. Farid.
    Dr. Farid. I have got to say this is a source of incredible 
frustration. So first of all, I was part of the team that 
developed PhotoDNA back in 2008 with Microsoft. And I will tell 
you, for an industry that prides itself on rapid and aggressive 
development, there have been no tools in the last decade that 
have gone beyond PhotoDNA. That is pathetic, that is truly 
pathetic when we are talking about this kind of material.
    How does an industry that prides itself on innovation say 
we are going to use 10-year-old technology to combat some of 
the most gut-wrenching, heartbreaking content online? It is 
completely inexcusable. This is not a technological limitation. 
This is we are simply not putting the effort into developing 
and deploying the tools.
    Mrs. Brooks. And let me just share that having watched some 
of these videos, it is something you never want to see and you 
cannot get out of your mind.
    Dr. Farid. I agree.
    Mrs. Brooks. And so I am curious. Ms. Oyama, you wanted to 
respond, and how is it that we are still at this place?
    Ms. Oyama. Yes. Thank you for the question.
    I mean, I will say at Google that is not true at all. We 
have never stopped working on prioritizing this. We can always 
do better. But we are constantly adopting new technologies. We 
initiated one of the first ones, which was called CSAI Match, 
which enabled us to create digital fingerprints of this 
imagery, prevent it from ever being reuploaded on YouTube, and 
we also share it with NCMEC.
    And there is a new tool that we have called a Content 
Safety API, it is very new, and we are sharing it with others 
in the industry, with NGOs. It has resulted in a 7X increase in 
the speed at which this type of content is able to be identified.
    So it is going to continue to be a priority, but I just 
wanted to be clear that, from the very top of our company, we 
need to be a safe, secure place for parents and children, and 
we will not stop working on this issue.
    Mrs. Brooks. Well, and I am very pleased to hear that there 
have been advances then, and that you are sharing them, and 
that is critically important.
    However, I will say that Indiana State Police Captain Chuck 
Cohen, who has actually testified before Energy and Commerce, 
recently told me that one of the issues that law enforcement 
runs into when working with internet companies is an attitude 
that he calls minimally compliant. And he said that internet 
companies will frequently not preserve content that can be used 
for investigation if law enforcement makes the companies aware 
of the concerning materials or automatically flags that content 
to law enforcement for review without actually checking if it 
is truly objectionable or not.
    Do any of you have thoughts specifically on his comment? He 
has been an expert. Do any of you have thoughts on how we 
balance this law enforcement critical need? Because they are 
saving children all around the globe, Ms. Peters, without 
restricting companies' immunity from hosting concerning 
content.
    Ms. Peters. I just feel like if companies start getting 
fines or some sort of punitive damage every time there is 
illicit content, we are going to see a lot less illicit content 
very, very quickly. If it is illegal in real life, it should be 
illegal to host it online. And that is a very simple approach 
that I think we could apply industrywide.
    Mrs. Brooks. And so I have a question, particularly because 
I asked Mark Zuckerberg this relative to terrorism and to 
recruitment and ISIS, and now we need to be even more 
concerned about ISIS. And I understand that you have teams of 
people that take it down. How many people are on your team, Mr. 
Huffman?
    Mr. Huffman. Dedicated to?
    Mrs. Brooks. Removing content.
    Mr. Huffman. Removing content at scale and writing our 
policies, it is about 20 percent of our company. It is about 
100 people.
    Mrs. Brooks. Twenty percent of your company, about 100 
people.
    Ms. Oyama, how many people?
    Ms. Oyama. More than 10,000 people working on content 
moderation.
    Mrs. Brooks. That actually remove content?
    Ms. Oyama. That are involved in the content moderation, 
development of the policies, or the human----
    Mrs. Brooks. But how many people are on the team that 
actually do that work?
    Ms. Oyama. Again, I am happy to get back to you.
    Mrs. Brooks. OK. Thank you.
    With that, I yield back. Thank you.
    Mr. McNerney. The gentlelady yields.
    At this point I would like to introduce a letter for the 
record. Without objection, so ordered.
    [The information appears at the conclusion of the hearing.]
    Mr. McNerney. Next, the Chair recognizes the gentlewoman 
from New York, Ms. Clarke, for 5 minutes.
    Ms. Clarke. I thank our chairman and our chairwoman and our 
ranking members for convening this joint subcommittee hearing 
today on fostering a healthier internet to protect consumers.
    I introduced the first House bill on deepfake technology, 
called the DEEPFAKES Accountability Act, which would regulate 
fake videos. Deepfakes can be used to impersonate political 
candidates, create fake revenge porn, and threaten the very 
notion of what is real.
    Ms. Oyama, Mr. Huffman, your platforms are exactly where 
deepfakes are shared. What are the implications of Section 230 
on your deepfakes policies?
    Mr. Huffman. Sure, I will go. Thank you for the question.
    So we released--actually, I think, with most of our peers 
around the same time--prohibition of deepfake pornography on 
Reddit because we saw that as a new, emerging threat that we 
wanted to get ahead of as quickly as possible.
    The challenge we face, of course, is the challenge you 
raise, which is the increasing challenge of being able to 
detect what is real or not. This is where we believe that 
Reddit's model actually shines. By empowering our users and 
communities to adjudicate on every piece of content, they often 
highlight things that are suspicious, not just videos and 
images but also texts and news sources.
    I do believe very strongly that we as a society, not just 
us as platforms, have to develop defenses 
against this sort of manipulation, because it is only going to 
increase.
    Ms. Clarke. Ms. Oyama.
    Ms. Oyama. Thank you.
    Yes, on YouTube our overall policy is a policy against 
deceptive practices. So there have been instances where we have 
seen these deepfakes. I think the Speaker Pelosi video is one 
example where we identified that. It was a deepfake, and it was 
removed from the platform.
    For both Search and for YouTube, surfacing authoritative, 
accurate information is core to our business, core to our long-
term business incentives.
    I would agree with what Mr. Huffman said, is that one of 
the things that we are doing is investing deeply in the 
academic side, the research side, the machine learning side to 
open up data sets where we know these are deepfakes and get 
better at being able to identify when content is manipulated.
    We also do have a revenge porn policy for Search for users 
who are victimized by that, and we did also expand that to 
include synthetic images or deepfakes in that area, too.
    Ms. Clarke. Very well.
    Ms. Citron, could you discuss the implication of Section 
230 on deepfakes monitoring and removal?
    Ms. Citron. Section 230, sort of the activities that we 
have seen YouTube and Reddit engage in, are precisely the kinds 
of activities that are proactive in the face of clear 
illegality, moving quickly.
    But the real problem isn't these folks at the table. 
Deeptrace Labs just issued a report 2 weeks ago showing that 8 
out of the 10 biggest porn sites have deepfake sex videos, that 
there are 4 sites now whose business model is basically deepfake 
sex videos, and that 99 percent of those videos involve women.
    Ms. Clarke. So let me ask you. Does the----
    Ms. Citron. Section 230 provides them immunity because it 
is users posting them.
    Ms. Clarke. Does the current immunity structure reflect the 
unique nature of this threat?
    Ms. Citron. I don't think so. Section 230, as it is devised, 
at its best is supposed to incentivize the kind of nimbleness 
that we are seeing from some dominant platforms. But the way the 
plain language of 230(c)(1) is written, it doesn't condition the 
immunity on being responsible and reasonable. And so you have 
these outliers that cause enormous harm, because it can be that a 
search of your name turns up a deepfake sex video until it is, 
you know, de-indexed. And it is findable, and people then contact 
you, and it is terrifying for victims.
    So it is really these outlier companies whose business 
model is this kind of abuse, and Section 230 is what they point 
to when they gleefully say, ``Sue me. Too bad, so sad.'' And 
that is the problem.
    Ms. Clarke. Very well.
    One of the many issues that has become an existential 
threat to civil society is the rise of hate speech and 
propaganda on social media platforms.
    Ms. Oyama, if 230 were removed, would platforms be liable 
for hosting distasteful speech, and would it change their 
incentives around moderating such speech?
    Ms. Oyama. Thank you for the question. I think this is a 
really important area to show the power and the importance of 
CDA 230.
    I mean, as you know, there are First Amendment restrictions 
on government regulation of speech. So there is additional 
responsibility for service providers like us in the private 
sector to step up. We have a policy against hate speech. 
Incitement to violence is prohibited. Hate speech is 
prohibited--speech targeting hatred at specific groups based on 
attributes such as race, religion, veteran status, or age.
    And the takedowns that we do every single quarter through 
automated flagging, through machine learning, or through human 
reviewers are lawful and possible because of 230. When we take 
down content, someone's content is being taken down. And so 
they can regularly come back to any service provider, big or 
small. They may sue them for defamation or other things.
    I think looking at the equities of the small business 
interests in this space would be really important as well, 
because I think they would say that they are even more deeply 
reliant on this flexibility and this space to innovate new ways 
to identify bad content and take it down without fear of 
unmitigated, you know, litigation, or legal risk, or legal 
uncertainty.
    Ms. Clarke. Very well. Thank you very much.
    I yield back, Madam Chairman.
    Ms. Schakowsky [presiding]. The gentlelady yields back.
    And now, Mr. Walberg, you are recognized for 5 minutes.
    Mr. Walberg. I thank the chairwoman.
    And I appreciate the panel being here.
    Today's hearing and the issues at hand hit home for a lot 
of us, as we have discussed here. The internet is such an 
amazing, amazing tool. It has brought about great innovation, 
connecting millions of people in ways that were never even 
thought of before. And, I mean, truthfully we look forward to 
what we will see in the future. But these are issues we have to 
wrestle with.
    Earlier this year I was pleased to invite Haley Petrowski 
from my district to the State of the Union as my guest to 
highlight her good work that she is doing in my district and 
surrounding areas to help combat cyberbullying. She is a very 
thoughtful young person who understands so much of what is going 
on and is having a real impact in high schools and in colleges 
now as a result of her experience, trying to make something 
positive out of it after she attempted suicide as a result of 
cyberbullying and, thankfully, did not succeed. She has shined a 
light on that.
    So, Mr. Huffman and Ms. Oyama, what are your companies 
doing to address cyberbullying on your platforms?
    Mr. Huffman. Sure. Thank you for the question, Congressman.
    Just 2 weeks ago we updated our policies around 
harassment. It is one of the, I think, most complex or nuanced 
challenges we face because it appears in many ways.
    One of the big changes we made is to allow harassment 
reports not just from the victim but from third parties. 
Basically, if somebody else sees instances of harassment, they 
will report it to us and our team so that we can investigate.
    This is a nationwide issue, but particularly on our 
platform when people come to us in times of need. For example, 
a teenager struggling with their own sexuality has no place to 
turn, maybe not their friends, not their family, so they come 
to a platform like ours to talk to others in difficult 
situations; or people who are having suicidal thoughts come to 
our platform. And it is our first priority, regardless of the 
law, though we fully support lawmakers in this initiative, to 
make sure that those people have safe experiences on Reddit.
    So we have made a number of changes, and we will continue 
to do so in the future.
    Mr. Walberg. OK.
    Ms. Oyama.
    Ms. Oyama. Thank you for the question.
    On YouTube, harassment and cyberbullying is prohibited. And 
so we would use our policies to help us enforce, and either 
through automated detection, human flagging, community flagging 
we would be able to identify that content and take it down. 
Last quarter we removed 35,000 videos under that policy against 
harassment and bullying.
    And I did just want to echo Mr. Huffman's perspective that 
the internet and content sharing is also a really valuable 
place. It can serve as a lifeline to a victim of harassment or 
bullying. And we see that all the time when someone may be 
isolated in their school or somewhere else. Being able to reach 
out across borders to another State or to find another 
community has really created a lot of hope. And we also want to 
continue to invest in those important educational and mental 
health resources, content like that.
    Mr. Walberg. Well, I am glad to hear you both are willing 
to continue investing and helping us as we move forward in this 
area.
    Ms. Oyama, Google's Ad network has come a long way in the 
last few years and won't serve ads next to potentially illegal 
activity. This is laudable and demonstrates Google has come a 
long way in identifying illegal activity. Given that Google is 
able to identify such activity, why would it not just take down 
the content in question?
    Ms. Oyama. [Inaudible.] I am sorry.
    Mr. Walberg. That was for Ms. Oyama, for you.
    Ms. Oyama. It is true that on our Ad system we do have a 
risk engine, and so we prohibit illegal content. There are many 
different policies, and more than 2 billion ads every year are 
struck out of the Ad network for violating those policies, 
illegal content and beyond.
    Mr. Walberg. So you are taking them down.
    Ms. Oyama. Yes, absolutely, before they are ever able to 
hit any page. I think it is very squarely in line with our 
business interests. We want advertisers to feel that our 
network, that our platforms are safe. Our advertisers only want 
to be serving good ads to good content.
    Mr. Walberg. One final question. I understand that Google 
offers a feature to put a tag on copyrighted work that would 
automatically take it down if pirated and uploaded, but that 
Google charges a fee for this. Can this technology be applied 
to other legal content? And why doesn't Google offer this tool 
for free?
    Ms. Oyama. Thank you for the question.
    I think that may be a misperception, because we do have 
Content ID, which is our copyright management system. It is 
automated. We have partners across the music industry, film, I 
think every leading publisher is part of it. It is part of our 
partner program, so it is offered for free, and actually it 
doesn't cost the partners anything.
    It is a revenue generator. So last year we sent out $3 
billion based on Content ID claims of copyrighted material that 
rights holders claimed. They were able to take the majority of 
the ad revenue associated with that content, and it was sent back 
out to them.
    And that system--being able to algorithmically identify and 
detect content and then set controls, whether in the 
entertainment space it should perhaps be monetized and served or 
in the case of violent extremism absolutely blocked--is something 
that powers much of YouTube.
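    [Content ID's matching technology is proprietary and is not 
described in the record; what follows is only a minimal, 
hypothetical sketch, in Python, of the general pattern described 
above--matching an upload against rights holders' reference 
fingerprints and then applying the policy they chose, monetize or 
block. The exact-match hash here stands in for the real 
perceptual fingerprints, which survive re-encoding.]

    import hashlib

    def fingerprint(media: bytes) -> str:
        # Stand-in only: real systems use robust audio/video
        # fingerprints, not exact cryptographic hashes.
        return hashlib.sha256(media).hexdigest()

    # Policies chosen by rights holders for their reference files.
    reference_policies = {
        fingerprint(b"<reference file A>"): "monetize",
        fingerprint(b"<reference file B>"): "block",
    }

    def apply_policy(upload: bytes) -> str:
        action = reference_policies.get(fingerprint(upload), "no_claim")
        if action == "monetize":
            return "serve with ads; pay the claiming rights holder"
        if action == "block":
            return "do not serve the upload"
        return "no claim; serve normally"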
    Mr. Walberg. Thank you. I yield back.
    Ms. Schakowsky. The gentleman yields back.
    And, Mr. Loebsack, you are recognized for 5 minutes.
    Mr. Loebsack. Thank you, Madam Chair.
    I do want to thank Chairman Doyle and Chair Schakowsky and 
the two ranking members of the subcommittees for holding this 
hearing today.
    And I want to thank the witnesses for your attendance as 
well. This has been very informative, even if we are not able 
to answer all the questions we would like to be able to answer.
    And it is not the first time our committee has examined how 
social media and the internet can be both a force for 
innovation and human connection--which we all enjoy when we are 
making those connections, so long as they are positive, 
obviously--but also a vector of harm and criminality.
    I think everyone assembled here today is clearly very 
expert in your field, and I appreciate hearing from you all 
today as we consider how Section 230 has been interpreted by 
the courts since its initial passage and what, if any, changes 
we should be considering.
    I think there is a lot to consider as we discuss the full 
scope of what Section 230 covers. From cyberbullying and hate 
speech, whether on Facebook, YouTube or elsewhere, to the 
illicit transaction of harmful substances or weapons, I think 
the question today is twofold.
    First, we must ask if content moderators are doing enough. 
And, second, we must ask whether congressional action is 
required to fix these challenges. That second one has kind of 
been referred to obliquely throughout by some of you, by some 
of us, but I think that is essentially the second question that 
we are really facing today.
    And after reviewing the testimony you have submitted, we 
clearly have some differences of opinion on whether Section 230 
is where Congress should be focusing its resources.
    So, to begin, I would like to ask everyone the same 
question, and this is probably at once the easiest question to 
answer and the most difficult because it is exceedingly vague. 
What does the difference between good and bad content 
moderation look like?
    Start with you, Mr. Huffman.
    Mr. Huffman. Thank you, Congressman, for that 
philosophically impossible question, but I think there are a 
couple of easy answers that I hope everybody on this panel 
would agree with.
    Bad content moderation is ignoring the problem. And that 
was the situation we were in pre-230, and that was the sort of 
perverse incentives we were facing.
    I think there are many forms of good content moderation. 
What is important to us at Reddit is twofold. One, empowering 
our users and communities to set standards of discourse in 
their communities and amongst themselves. We think this is the 
only truly scalable solution. And the second is what 230 
provides us, which is the ability to look deeply in our 
platform to investigate, to use some finesse and nuance when we 
are addressing new challenges.
    Mr. Loebsack. Thank you.
    Ms. Citron.
    Ms. Citron. What was the question? To be about what makes 
bad--what makes content bad, or was it what makes----
    Mr. Loebsack. Moderation.
    Ms. Citron. OK.
    Mr. Loebsack. What is the difference between good and bad 
content moderation?
    Ms. Citron. Moderation. OK.
    Mr. Loebsack. Because that is what we are talking about.
    Ms. Citron. No, of course, but it precedes the question of 
why we are here. That is, what kinds of harms get us to the 
table to say why we should even try to talk about changing 
Section 230.
    And I would say what is bad or incredibly troubling is when 
sites are permitted to have an entire business model which is 
abuse and harm. So, by my lights, that is the worst of the 
worst, and sites that induce and solicit illegality and harm, 
that to me is the most troubling.
    Mr. Loebsack. And that is the problem. But then the 
question is how to deal with the problem in terms of 
moderation.
    Ms. Citron. And I have got some answers for you, but, you 
know, if we want to wait to do that.
    Mr. Loebsack. You can submit them to us in writing if you 
would like.
    Ms. Citron. I did in my testimony.
    Mr. Loebsack. I understand that.
    Ms. Citron. We have got to deal with the bad Samaritans and 
then a broader approach.
    Mr. Loebsack. Thank you.
    Ms. McSherry.
    Dr. McSherry. Thank you. Thank you for the question.
    I actually think it is a great question. And I think, as 
someone who supports civil liberties online as a primary goal 
for us, I think good content moderation is precise, 
transparent, and careful. What we see far too often is that, in 
the name of content moderation and making sure the internet is 
safe for everybody, actually all kinds of valuable and lawful 
content is taken offline.
    There are details about this submitted in our testimony, 
but I would just point to one example where we have an archive 
of--there is an archive of videos attempting to document war 
atrocities, but those videos are often flagged as violating 
terms of service because, of course, they contain horrible 
material. But the point is to actually support political 
conversations, and it is apparently very difficult for the 
service providers to tell the difference.
    Mr. Loebsack. Thank you.
    Ms. Peters.
    Ms. Peters. If it is illegal in real life, it ought to be 
illegal online. Content moderation ought to focus on illegal 
activity. And I think there has been little investment in 
technology that would improve this for the platforms precisely 
because of Section 230 immunities.
    Mr. Loebsack. Thank you.
    I do realize I am out of time. I am sorry I asked such a 
broad question of all of you, but I would like to get your 
response, if I could, the final two witnesses here, in writing, 
if I could, please.
    Thank you so much. And I yield back. Thank you.
    Ms. Schakowsky. The gentleman yields back.
    And now I recognize Mr. Carter for 5 minutes.
    Mr. Carter. Thank you, Madam Chair.
    And thank all of you for being here.
    I know that you all understand how important this is, and I 
hope that you--and I believe you all take it seriously. So 
thank you for being here, and thank you for participating in 
this.
    Ms. Peters, I am going to start with you. I would like to ask 
you, in your testimony you pointed out that there is clearly 
quite a bit of illegal conduct that the online platforms still 
are hosting, for instance, illegal pharmacies where you can buy 
pills without a prescription, terrorists that are profiteering 
off of looted artifacts, and also products from endangered 
species. And then it even gets worse. You mentioned the sale of 
human remains and child exploitation, I mean, just gross 
things, if you will.
    How much effort do you feel like the platforms are putting 
into containing this and to stopping this?
    Ms. Peters. Well, it depends on the platform. But that is a 
very good question. And I would like to respond with a question 
to you and to the committee: When was the last time anybody 
here saw a dick pic on Facebook? Simple question.
    If they can keep genitalia off of these platforms, they can 
keep drugs off these platforms. They can keep child sexual 
abuse off these platforms. The technology exists. These are 
policy issues, whether it is the policy to allow the video of 
Nancy Pelosi on or the policy to allow pictures of human 
genitalia.
    Mr. Carter. I get it. I understand.
    Let me ask you this. Do you ever go to them and meet with 
them and express this to them?
    Ms. Peters. Absolutely.
    Mr. Carter. And how are you received?
    Ms. Peters. We are typically told that the firm has quite 
intelligent people working on it, that they are creating AI, 
and that in a few years that AI is going to work. And when we 
have presented evidence of specific, identifiable crime 
networks and terror networks, we have been told that they will 
get back to us, and then they don't. That has happened multiple 
times.
    Mr. Carter. Are you ever told that they don't want to meet 
with you? I mean----
    Ms. Peters. No, we have usually gotten meetings or calls.
    Mr. Carter. So you feel like you got a good relationship. 
Do you feel like the effort is being put forth?
    Ms. Peters. I don't feel like effort is being put forth. I 
feel like----
    Mr. Carter. You see, that is where I struggle, because I 
don't want the--you know, I am doing my best to keep the 
Federal Government out of this. I don't want to stifle 
innovation, and I am really concerned about that.
    But at the same time, look, we cannot allow this to go on. 
This is irresponsible. And if you don't do it, then you are 
going to force us to do it for you, and I don't want that to 
happen. I mean, it is just as clear as that.
    Let me ask, Ms. Peters, you also mentioned in your 
testimony that you were getting funding from the State 
Department to map wildlife supply chains, and that is when you 
discovered that there was a large retail market for endangered 
species that exists on some platforms like Facebook and WeChat. 
Have any of these platforms made a commitment to stop this? And 
if they have, is it working? Is it getting any better?
    Ms. Peters. I mean, that is a terrific example to bring up, 
sir. A number of tech firms have joined a coalition with World 
Wildlife Fund and IFAW and have taken a pledge to remove 
endangered species content and wildlife markets from their 
platforms by 2020.
    I am not aware that anything has changed. We have 
researchers going online and logging wildlife markets all the 
time.
    Mr. Carter. All right. I am going to be fair. OK. I am 
going to be fair and I am going to let the Google--I am sorry, 
I can't see that far--I am going to let you respond to that.
    Do you feel like you are doing everything you can?
    Ms. Oyama. Thank you.
    We can always do more. I think we are committed to always 
doing more.
    Mr. Carter. I appreciate that. I know that. I don't need 
you to tell me that. I need you to tell me ``We have got a plan 
in place, and it is fixed'' and then stop this.
    Ms. Oyama. Let me tell you what we are doing in the two 
categories that you mentioned.
    So for wildlife, the sale of endangered species is 
prohibited from Google Ads, and we are part of the coalition that 
Ms. Peters mentioned.
    On the national epidemic that you mentioned for opioids, we 
are hugely committed to helping and playing our part in 
combating this epidemic.
    So there is an online component and an offline component. 
On the online component, the research has shown that less than 
0.05 percent of misuse of opioids originates on the internet. 
And what we have done, especially with Google Search, is work 
with the FDA. So the FDA can send us a warning letter if they 
see that there is a link in Search for a rogue pharmacy, and we 
will delist that out of Search.
    There is a really important offline component, too. So we 
work with the DEA on Prescription Takeback Day. We feature 
these places in Google Maps, at CVS. Happy to come in and----
    Mr. Carter. OK. And I invite you to do just that, OK. I 
would like to see you and talk to you further about this.
    Mr. Huffman, I am going to give you the opportunity, 
because we have gone, my staff has gone on Reddit, and they 
have Googled, if you will, or searched for illegal drugs, and 
it comes up. And I suspect you are going to tell me the same 
thing: We are working on it. We have almost got it under 
control. But it is still coming up.
    Mr. Huffman. I have got a slightly different answer, if you 
will indulge me.
    First of all, it is against our rules to have controlled 
goods on our platform, and it is also illegal. 230 doesn't give 
us protection against criminal liability.
    We do see content like that on our platform. And, in fact, 
if you went to any technology service with a search bar, 
including your own emails, and typed in ``buy Adderall,'' I am 
sure you would find a hit in your spam folder at least, and 
that is the case on Reddit as well.
    That sort of content that has come up today is spam; it first 
gets removed by our filters, but there is sometimes a lag between 
something being submitted and something being removed. Naturally, 
that is how the system works.
    That said, we do take this issue very seriously, and so our 
technologies have continued to improve along these lines. And 
that is exactly the sort of ability that 230 gives us, is the 
ability to look for this content and remove it.
    Now, to the extent that you or your staff have found this 
content specifically, and to the extent that it is still on our 
platform, we would be happy to follow up later, because it 
shouldn't be.
    Mr. Carter. You know, my sons are grown now, but I feel 
like a parent pleading with their child again: Please don't 
make me have to do this.
    Thank you, Madam Chair. I yield back.
    Ms. Schakowsky. The gentleman yields back.
    And now I recognize Congresswoman Kelly for 5 minutes.
    Ms. Kelly. Thank you, Madam Chair. Thank you for holding 
this important hearing on Section 230 and fostering a 
healthier, more consumer-friendly internet.
    The intended purpose of Section 230 was to allow companies 
to moderate content under the Good Samaritan provision, and yet 
this law seems to be widely misapplied. The Good Samaritan 
provision in Section 230 was intended ``in good faith to 
restrict access or availability of material that the provider 
or user considers to be obscene, lewd, lascivious, filthy, 
excessively violent, harassing, or otherwise objectionable, 
whether or not such material is constitutionally protected.''
    Last Congress, Section 230 was amended through SESTA and 
FOSTA to make platforms liable for any activity related to sex 
trafficking. Since passage, some have criticized the law for 
being too ambiguous.
    In addition to my work on this committee, I chair the House 
Tech Accountability Caucus. In that capacity, I have sought to 
work with stakeholders to protect family users in an 
accountable manner while allowing innovators to innovate.
    Today, as we look to foster a healthier, more consumer-
friendly internet, it is my hope our discussion will set the 
standard of doing so in a responsible, effective, and balanced 
way.
    Professor Citron, in your testimony you discussed giving 
platforms immunity from liability if they could show that their 
content moderation practices writ large are reasonable. As the 
chairman referenced, how should companies know where the line 
is or if they are doing enough? Where is that line?
    Ms. Citron. And the sort of genius of reasonableness is 
that it matters and depends on the context. There are certainly 
some baseline presumptions, I would say defaults, about what 
would constitute reasonable content moderation practices, and 
that includes having them. There are some sites that don't 
engage in that at all. In fact, they absolutely don't engage in 
moderation, and they encourage abuse and illegality.
    But there is a baseline. I think the academic writing of the 
last 10 years, and the work I have done with companies for 10 
years, shows there is a baseline set of speech rules and policies 
that we have seen that are best practices, but naturally that is 
going to change, depending on the challenge.
    So we are going to have different approaches to different 
new and evolving challenges. And that is why a reasonableness 
approach works: it preserves the liability shield, right, but it 
does so in exchange for those efforts.
    Ms. Kelly. And would you agree that any changes we make, we 
have to ensure that they don't create further ambiguity?
    Ms. Citron. Right. And if I may, just a word about FOSTA and 
SESTA: what was disappointing, to someone who certainly helped 
some offices work on the language, is that when you included the 
language ``knowingly facilitate,'' that created the moderator's 
dilemma, that is, to either sit on your hands or to be overly 
aggressive.
    And so my biggest disappointment was unfortunately how it 
came out, because we almost see ourselves back at Prodigy and 
CompuServe, those initial cases: either we are seeing way overly 
aggressive responses to sexual expression online, which is a 
shame, or we are seeing platforms doing nothing. So I hope we 
don't do that.
    Ms. Kelly. Thank you.
    The way people communicate is changing rapidly, as we all 
know. Information can start on one platform and jump to another 
and go viral very quickly. The 2016 election showcased how 
false information can spread and how effective it can be to 
motivate or deter different populations. Often offensive 
content is first shared in groups and then filtered out to a 
wider audience.
    Ms. Peters, what do you believe is the responsibility of 
tech companies to monitor and proactively remove content that 
is rapidly spreading before being flagged by users?
    Ms. Peters. I believe that companies need to moderate and 
remove content when it concerns a clearly illegal activity. If 
it is illegal in real life, it ought to be illegal to host it 
online. Drug trafficking, human trafficking, wildlife 
trafficking, serious organized crime, and designated terror 
groups should not be given space to operate on our platforms.
    I also think that CDA 230 needs to be revised to provide 
more opportunities for State and local law enforcement to have 
the legal tools to respond to illicit activity. That is one of 
the reasons FOSTA/SESTA was passed.
    Ms. Kelly. And Ms. Oyama and Mr. Huffman, what steps are 
you taking beyond machine learning to stop the spread of 
extremist or misinformation content that is being shared 
widely? Are there flags that pop up if the same content is 
shared 10,000 or 100,000 times?
    Ms. Oyama. Yes. Thank you for the question.
    So on YouTube we are using machines and algorithms. Once 
content is identified and removed, our technology prevents it 
from being reuploaded.
    But I think to your really important point about working 
across platforms and cross-industry collaboration, a good 
example would be the GIFCT, the Global Internet Forum to 
Counter Terrorism. We are one of the founding members. Many of 
the leading players in tech are part of that.
    One of the things that we saw during the Christchurch 
shooting was how quickly this type of content can spread. And 
we were grateful to see that last week some of the crisis 
protocols we put into place kicked in. So there was a shooting 
in Germany. There was a piece of content that appeared on 
Twitch, and the companies were able to engage in the crisis 
protocol. There was a hash made of the content, it was spread 
across the companies, and that enabled all of us to block it.
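    [The crisis protocol described above rests on hash sharing: 
one company fingerprints the violating content and shares only 
the fingerprint, so other members can block matching uploads 
without the content itself being redistributed. A minimal sketch 
of that pattern follows, using an exact cryptographic hash for 
simplicity; in practice perceptual hashes are used so that 
re-encoded copies still match.]

    import hashlib

    # Hashes contributed to the shared database by member companies.
    shared_hashes: set = set()

    def contribute(content: bytes) -> None:
        # A member identifies violating content and shares only its hash.
        shared_hashes.add(hashlib.sha256(content).hexdigest())

    def should_block(upload: bytes) -> bool:
        # Any member can check new uploads against the shared hash list.
        return hashlib.sha256(upload).hexdigest() in shared_hashes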
    Ms. Kelly. And now I am out of time.
    Thank you.
    Ms. Schakowsky. The gentlelady yields back.
    And Mr. Bilirakis is recognized for 5 minutes.
    Mr. Bilirakis. Thank you, Madam Chair. I appreciate it very 
much.
    My first question is for Dr. McSherry, a yes or no. I 
understand in the past EFF has argued for including language 
mirroring legislation in trade deals explicitly for the purpose 
of baking language into an agreement to protect the statute 
domestically. Do you see the intent of including such 230-like 
language in trade agreements as being to ensure that we may not 
revisit the statute?
    Dr. McSherry. No.
    Mr. Bilirakis. OK. All right. Thank you very much.
    And then what I would like to do, Madam Chair, I would like 
to ask that EFF, the blog post from January 23, 2018, by Jeremy 
Malcolm, be entered into the record.
    Ms. Schakowsky. Without objection, so ordered.
    [The information appears at the conclusion of the hearing.]
    Mr. Bilirakis. Thank you, Madam Chair. I appreciate it.
    The next question is for Mr. Huffman and Ms. Oyama. In 
April 2018, I questioned Mark Zuckerberg about how soon illegal 
opioid ads would be removed from their website. His answer was 
that the ads would be reviewed when they were flagged by users 
as being illegal or inappropriate. This, of course, is a 
standard answer in the social media space.
    However, Mr. Zuckerberg also said at the time that industry 
needs to, and I quote, ``build tools that proactively go out 
and identify ads for opioids before people even have to flag 
them for us to review,'' and that ends the quote. This would 
significantly, in my opinion, cut down the time an illegal ad 
would be on their website.
    Again, Mr. Huffman and Ms. Oyama, it has been a year and a 
half. This is an epidemic, and people are dying. I am sure you 
will agree with this. Has the industry been actively working on 
artificial intelligence flagging standards that can 
automatically identify illegal ads? And then what is the status 
of this technology, and when can we expect implementation, if 
they have been working on it?
    Whoever would like to go first is fine.
    Mr. Huffman.
    Mr. Huffman. Sure. Thank you, Congressman.
    So Reddit is a little different than our peers in that all 
of our ads go through a strict human review process, making 
sure that not only are they on the right side of our content 
policy, which prohibits the buying and selling of controlled 
substances, but also our much more strict ads policy, which has 
a much higher bar to cross because we do not want ads that 
cause any sort of controversy on our platform.
    Mr. Bilirakis. OK. But, I mean, you know, we have to be 
proactive as far as this is concerned, and Mr. Zuckerberg 
indicated that that is the case. You know, these kids are 
dying, people are dying, and we just can't stand by and have 
this happen and have access to these, well, in most cases 
opioids and drugs, different types of drugs.
    But, Ms. Oyama, would you like to comment, please?
    Ms. Oyama. Thank you.
    We certainly agree with your comment about the need for 
proactive efforts. So on Google Ads we have something called a 
risk engine that helps us identify if an ad is bad when it is 
coming into the system. We can kick it out. Last year, in 2018, 
we kicked out 3.2 billion ads out of our system for violating 
our policies.
    For any prescription that would show up in an ad, that is 
also independently verified by an independent group called 
LegitScript. So that would need to also be verified by them.
    And then, of course, in the specific case of opioids, those 
are a controlled substance under Federal law. So, there is a 
lot of important work that we have done with the DEA, with the 
FDA, even with pharmacies like CVS offline to help them promote 
things like Take Back Your Drugs Day where people can take 
opioids in and drop them off so they are not misused later on.
    One of the things that we have seen is that the vast 
majority, more than 99 percent of opioid misuse, happens in the 
offline world, so from a doctor that is prescribing it or a 
family member or a friend. And so using technology to also 
educate and inform people that might be potentially victimized 
from this is equally important to some of the work that we are 
doing in the ad space.
    Mr. Bilirakis. OK. How about anyone else on the panel, 
would they like to comment? Is the industry doing enough?
    Ms. Peters. I don't think the industry is doing enough. 
There is an enormous amount of drug sales taking place on 
Google Groups, on Instagram, on Facebook groups. The groups on 
these platforms are the epicenter, and this is why industry has 
to be monitoring this. If you leave this up to users to flag it 
and they are inside a private or a secret group, it is just not 
going to happen.
    These firms know what users are getting up to. They are 
monitoring all of us all the time so they can sell us stuff. 
They can figure this out.
    Dr. Farid. Congressman, can I also add there are two issues 
here. There are the ads, but there is also the native content. 
So you heard Ms. Peters say that she went this morning and 
searched on Reddit, and that content is there, even if it is 
not in the ads, and the same is true on Google Search. I can 
search for this. So there are two places you have to worry 
about these things, not just the ads.
    Mr. Bilirakis. Very good.
    All right. Thank you, Madam Chair. I yield back.
    Ms. Schakowsky. The gentleman yields back.
    And now I call on the chairman of our full committee for 5 
minutes, Mr. Pallone.
    Mr. Pallone. Thank you, Madam Chair.
    I wanted to start with Ms. Oyama. In your written testimony 
you discuss YouTube's community guidelines for hate speech, and 
I am concerned about news reports that hate speech and abuse is 
on the rise on social media platforms.
    How does Section 230 incentivize platforms to moderate such 
speech? And does Section 230 also incentivize platforms to take 
a hands-off approach to removing hate speech, if you will?
    Ms. Oyama. Thank you so much for the question.
    So on the category of hate speech, YouTube prohibits hate 
speech. We have a very clear policy against it. So that would 
be speech that incites violence or speech that is hateful 
against groups with specific attributes. So that could be 
speech based on their race, their religion, their sex, their 
age, their disability status, their veteran status.
    And so that is prohibited. It can be detected either by our 
machines, which is the case in more than 87 percent of instances, 
or by community flaggers or individual users. Across all of those 
actions that we take, last quarter we saw a 5X increase in the 
amount of content that our machines were able to find and 
remove. Those removals are vitally dependent on the protection 
in CDA 230 to give service providers the ability to moderate 
content, to flag bad content, and to take it down.
    We do have claims against us when we remove speech. People 
may sue us for defamation. They may have other legal claims. 
And 230 is what enables not only Google or not only YouTube but 
any site with user comments, with user-generated content, any 
site on the internet, large or small, to be able to moderate 
that content.
    So I think we would just encourage Congress to think about 
not harming the good actors, the innocent actors that are 
taking these steps in an effort to go after a truly bad 
criminal actor where criminal law is fully exempted from the 
scope of the CDA 230. And they should be penalized, and law 
enforcement will play a really important role in bringing them 
down, as they did with Backpage, which was taken down, or in 
civil cases like Roommates.com, where there is platform liability 
for 
bad actors that break the law.
    Mr. Pallone. Thank you.
    Dr. Farid, in your written testimony you state that the 
internet has led to the proliferation of domestic and 
international terrorism. As you may know, there is both 
criminal and civil liability associated with providing material 
support for terrorism.
    But I want to start with Dr. McSherry. Understanding that 
Section 230 doesn't apply to Federal criminal law, have U.S. 
social media companies used 230 to shield themselves from 
civil liability for allowing their platforms to be used as 
propaganda and recruitment platforms for terrorists?
    Dr. McSherry. So there are ongoing cases, and there have 
been several cases where platforms have been accused of 
violating civil laws for hosting certain kinds of content on 
their platforms, and they have invoked Section 230 in those 
cases quite successfully.
    And I think that is not--if you look at the facts of a lot 
of those cases, that is actually quite appropriate. The reality 
is, it's very difficult for a platform to always be able to 
tell in advance, always draw the line in advance, between 
content that is simply protected political communication and 
content that steps over a line. So these 
cases are hard, and they are complicated, and they have to get 
resolved on their facts.
    Section 230, though, also creates a space in which, because 
of the additional protections that it provides, it creates a 
space for service providers when they choose to, to moderate 
and enforce their own policies.
    Mr. Pallone. Let me go back to Dr. Farid.
    Do you have any thoughts on how this should be addressed 
from a technological perspective?
    Dr. Farid. I want to start by saying, when you hear about 
the moderation that is happening today--we have heard it from 
Google, we have heard it from Reddit--you should understand 
that has only come after intense pressure. It has come from 
pressure from advertisers. It has come from pressure on Capitol 
Hill. It has come from pressure in the EU. And it has come from 
pressure from the press. So there is bad news, there is bad PR, 
and then we start getting serious.
    For years we have been struggling with the social media 
companies to do more about extremism and terrorism online, and 
we have hit a hard wall. And then the EU started putting 
pressure. Capitol Hill started putting pressure. Advertisers 
started putting pressure. And we started getting responses.
    I think this is exactly what this conversation is about, is 
what is the underlying motivating factor? The self-regulation 
of ``trust us, we will do everything'' is not working. So the 
pressure has to come from other avenues.
    And I think putting pressure by modest changes to CDA 230 
is the right direction. And I agree with Ms. Oyama: if these 
are good actors, then they should encourage that change 
and help us clean up and deal with the problems that we are 
dealing with.
    I have been in this fight for over a decade now, and it is 
a very consistent pattern. You deny the problem exists, you 
minimize the extent of it, you deny the technology exists, and 
eventually you get enough pressure and then we start making 
changes. I think we should skip to the end part of that and 
just recognize that we can do better, and let's just start 
doing better.
    Mr. Pallone. Thank you.
    Thank you, Madam Chair.
    Ms. Schakowsky. The gentleman yields back.
    And now I recognize for 5 minutes Congressman Gianforte.
    Mr. Gianforte. Thank you, Madam Chair.
    And thank you for being here today.
    About 20 years ago I harnessed the power of the internet to 
launch a business to improve customer service. That company was 
called RightNow Technologies. And from a spare bedroom in our 
home, we eventually grew that business to be one of the largest 
employers in Montana. We had about 500 high-wage jobs there.
    The platform we created had about 8 million unique visitors 
per day. And I understand how important Section 230 can be for 
small business. This important liability shield has gotten 
mixed up, however, with complaints about viewpoint 
discrimination.
    And I want to cite one particular case. In March of this 
year, Missoula-based Rocky Mountain Elk Foundation reached out 
to my office because Google had denied one of their 
advertisements. The foundation did what it had done many times. 
They had tried to use paid advertising on the Google network to 
promote a short video about a father hunting with his daughter.
    This time, however, the foundation received an email from 
Google, and I quote: ``Any promotions about hunting practices, 
even when they are intended as a healthy method of population 
control or conservation, is considered animal cruelty and 
deemed inappropriate to be shown on our network.''
    The day I heard about this, I sent a letter to Google and 
you were very responsive, but the initial position taken was 
absurd. Hunting is a way of life in Montana, in many parts of 
the country. I am very thankful that you worked quickly to 
reverse that, but I remain very concerned about Google's effort 
to stifle the promotion of Rocky Mountain Elk Foundation, and 
how they were treated. I worry that other similar groups have 
faced similar efforts to shut down their advocacy.
    We really don't know how many hunting ads Google has 
blocked in the last 5 years. In my March letter, I invited 
Google's CEO to meet with leaders of our outdoor recreation 
businesses in Montana. I haven't heard anything back.
    And, Ms. Oyama, I would extend the invitation again.
    I think, frankly, it would help Google to get out of 
Silicon Valley, come to Montana, sit down with some of your 
customers, and hear from them directly about the things that 
are important to them. I would be happy to host that visit. We 
would love to meet with you there.
    I think it is important to understand the work that these 
groups do to further conservation and to help species thrive. 
And as an avid hunter and outdoorsman myself, I know many 
businesses in Montana focus on hunting and fishing. And I worry 
they may be denied the opportunity to advertise on one of the 
largest online platforms that you have built, to your credit.
    I also worry that an overburdensome regulatory regime could 
hurt small businesses and stifle Montana's rapidly growing 
high-tech sector. So the invitation is open.
    Dr. Farid, one question for you. How can we walk this line 
between protecting small business and innovation versus 
overburdensome regulations?
    Dr. Farid. It is absolutely the right question to ask, 
Congressman. I think you have to be very careful here, because 
right now we have near monopolies in the technology sector. And 
if we start regulating now, the small companies coming up are 
not going to be able to compete.
    There are ways of creating carveouts. In the EU and the 
U.K., as they are talking about regulations, they are creating 
carveouts for small platforms that have 8 million versus 3 
billion users.
    So I do think we want to tread very lightly here. I think 
Ms. Peters also made the point that we want to inspire 
competition for better business models and allow these small 
companies. But I think there are mechanisms to do that. We just 
have to think carefully about it.
    Mr. Gianforte. We have had a lot of discussion today about 
the efforts you are taking to get criminal activity off the 
network, so I applaud that. We should continue to do that.
    But as a follow-on, Doctor, how do we ensure that content 
moderation doesn't become censorship and a violation of our 
First Amendment?
    Dr. Farid. Good. So the way we have been thinking about 
content moderation is a collaboration between humans and 
computers. What computers are very good at doing is the same 
thing over and over and over again, but what they are not good 
at still is nuance and subtlety and complexity and inference 
and context.
    So the way content moderation works today, for example, in 
the child sexual abuse space is human moderators say ``this is 
a child, this is sexually explicit.'' We fingerprint that 
content, and then we remove very specifically and very targeted 
that piece of content.
    False alarm rates for PhotoDNA, which we developed a decade 
ago, are about 1 in 50 billion. That is the scale you need to be 
operating at. So if you are going to deploy automatic 
technology, you have to be operating at very high scale. And so 
the humans--the computers can't do that on their own, so we 
need more human moderators.
    You heard from Google, 10,000 moderators. There are 500 
hours of video uploaded a minute. That is not enough 
moderators. You can do the arithmetic yourself. Those 
moderators would have to be looking at hours and hours of video 
per hour. So we have to also beef up our human moderation.
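    [Dr. Farid's arithmetic, worked through with the figures he 
cites--500 hours of video uploaded per minute and 10,000 
moderators--as a rough back-of-the-envelope calculation only:]

    upload_hours_per_day = 500 * 60 * 24       # 720,000 hours of new video per day
    moderators = 10_000
    print(upload_hours_per_day / moderators)   # 72.0 hours of video per moderator per day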
    Mr. Gianforte. OK. Thank you.
    And, Ms. Oyama, I look forward to seeing you in Montana.
    And I yield back.
    Ms. Schakowsky. The gentleman yields back.
    And now I recognize--Congresswoman Blunt Rochester is next 
for 5 minutes.
    Ms. Blunt Rochester. Thank you, Madam Chairwoman.
    And to the chairmen and ranking members, thank you for 
holding this important hearing.
    I think many of us here today are seeking to more fully 
understand how Section 230 of the Communications Decency Act 
can work well in an ever-changing virtual and technological 
world. This hearing is really significant, and as Ms. Oyama 
said, I want us to not forget the important things that the 
internet has provided to us, from movements to applications to 
TikTok.
    But also, as Mr. Huffman said, we--and you applied it to 
Reddit, but I think it applies to all of us--must constantly be 
evolving, our policies must be evolving to face the new 
challenges while also balancing our civil liberties. So we have 
a really important balance here.
    So my questions really center around this, the question that 
Mr. Loebsack asked about bad content moderation. 
And I want to start off by saying that the utilization of 
machine-learning algorithms and artificial intelligence to 
filter through content posted on websites as large as YouTube 
provides an important technological solution to the increasing 
amount of content to moderate.
    However, as we become more and more reliant on algorithms, 
we are increasingly finding blind spots and gaps that may be 
difficult to bridge with simply more and better code.
    I think there is a real concern that groups already facing 
prejudice and discrimination will be further marginalized and 
censored. And as I thought about this, I even thought about 
groups like the veterans or the African-American community in 
the 2016 elections.
    Dr. Farid, can you describe some of the challenges with 
moderation by algorithm, including possible bias?
    Dr. Farid. Yes. So I think you are absolutely right, 
Congresswoman. When we automate at the scale of the internet, 
we are going to have problems, and we have already seen that. 
We know, for example, that face recognition does much, much 
worse on women, on people of color than it does on White men.
    The problem with the automatic moderation is that it 
doesn't work at scale. When you are talking about billions of 
uploads, and if your algorithm is 99 percent accurate--which is 
very, very good--you are still making 1 in 100 mistakes. That 
is literally tens of millions of mistakes a day you are going 
to be making at the scale of the internet.
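    [The error-rate arithmetic above, worked through under an 
assumed volume of one billion uploads per day; the upload figure 
is an illustration, not a number cited in testimony:]

    uploads_per_day = 1_000_000_000
    error_rate = 0.01                    # a "99 percent accurate" classifier
    print(uploads_per_day * error_rate)  # 10,000,000.0 mistakes per day at this scale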
    And so the underlying idea that we can fully automate this, 
not to take on the responsibility and the expense of hiring 
human moderators simply doesn't work. And so I fear that we 
have moved too far to the ``Give us time to find the AI 
algorithms because we don't want to hire the human moderators 
because of the expense.''
    And we know today that is not going to work in the next 
year, 2 years, 5 years, 10 years. And it is a little bit worse 
than that, because it also assumes an adversary that is not 
adapting, and we know that the adversaries can adapt. So we 
know, for example, that all machine learning and AI algorithms 
today that are meant to identify content are vulnerable to what 
are called adversarial attacks. You can add small amounts of 
noise to the content, and you can completely fool the 
system.
    Ms. Blunt Rochester. I want to ask a quick question of Mr. 
Huffman and Ms. Oyama. Both of you talked about the number of 
human moderators that you have available to you, and I know 
that we have had many hearings on challenges of diversity in 
the tech field.
    I am assuming, Mr. Huffman, yours are more from the user 
perspective in terms of moderators, or are they people that you 
hire, and the 10,000 or so that you mentioned, these are people 
that you hire or are they users? Just a quick--so everybody 
knows--users, combination?
    Mr. Huffman. For us, it is about 100 employees out of 500, 
and, of course, millions of users participate as well.
    Ms. Blunt Rochester. Got you. That is what I thought.
    OK. Same?
    Ms. Oyama. So the 10,000 figure that I mentioned is a 
mixture of the full-time employees. We also work with 
specialized vendors. And then we also have community flagging, 
which could be an NGO, could be law enforcement, could be an 
average user.
    Ms. Blunt Rochester. OK. I know in the interest of time, I 
don't have a lot of time, but could you provide us with 
information on the diversity of your moderators? That is one of 
my questions.
    And then also, I don't like to make assumptions, but I am 
going to assume that it might be a challenge to find diverse 
populations of individuals to do this role, and what you are 
doing in that vein. So if we could have a follow-up on that.
    And then my last question is just going to be for the 
panel. What should the Federal Government, what should we be 
doing to help in this space? Because I am really concerned 
about the capacity to do this and do it well. If anybody has 
any suggestion, recommendation. Mr. Farid is already pushing 
his button.
    Dr. Farid. I think this conversation is helping. I think 
you are going to scare the bejesus out of the technology 
sector, and I think that is a really good thing to do.
    Ms. Blunt Rochester. OK. I have to yield back. I am out of 
time. But thank you so much to all of you for your work.
    Ms. Schakowsky. The gentlewoman yields back.
    And now, last but not least, Representative Soto, you are 
recognized for 5 minutes.
    Mr. Soto. Thank you, Madam Chairwoman.
    First of all, thank you for being here. I am the last one, 
so you are in the homestretch here.
    It is amazing that we are here today when we think about 
how far the internet has progressed. One of the greatest 
inventions in human existence, connecting the world, giving 
billions a voice where before their stories would never be 
told, providing knowledge at our fingertips. It is just 
incredible.
    And we know Section 230 has been a big part of it, 
providing that safe harbor, essentially the dam holding back 
the flood of lawsuits. It has created innovation. 
But it has also created a breeding ground for defamation and 
harassment, for impersonation and election interference, and 
also a breeding ground for White supremacists, disinformation, 
global terrorism, and other extremism.
    So we have these wonderful gifts to humanity on one side 
and then all the terrible things with humanity on the other 
side.
    My biggest concern is that lies spread faster than the 
speed of light on the internet, while truth seems to go at a 
snail's pace on it. So that is one thing that I constantly hear 
from my constituents.
    So I want to start with some basics just so I know 
everybody's opinion on it. Who do you all each think should be 
the cop on the beat to be the primary enforcer, with the 
choices being FCC, FTC, or the courts? And it would be great to 
go down the line to hear what each of you think on that.
    Mr. Huffman. If those are my only three options, I would 
choose----
    Mr. Soto. You could give a fourth if you could give a few-
word answer.
    Mr. Huffman. I think, in the United States, society, and on 
our platform, our users.
    Mr. Soto. OK. Who do you think should be the cop on the 
beat?
    Ms. Citron. I am going to take your second-best option, 
which is the courts.
    Mr. Soto. The courts.
    Ms. Citron. Because it forces in some sense the companies 
actually to be the norm producers.
    Mr. Soto. OK. Dr. McSherry.
    Dr. McSherry. Yes. So I think the courts have a very 
important role to play, but also a cardinal principle for us at 
EFF is, at the end of the day, users should be able to control 
their internet experience.
    Mr. Soto. OK.
    Dr. McSherry. We need to have many, many more tools to make 
that possible.
    Mr. Soto. Ms. Peters.
    Ms. Peters. I think that is a ridiculous argument. The vast 
majority of people--I study organized crime.
    Mr. Soto. Well, let's get back to----
    Ms. Peters. Hold on. I am going to answer the question: 
Courts and law enforcement.
    Mr. Soto. Thank you.
    Ms. Peters. Most people are good. A small percentage of 
people statistically in any community commit crime.
    Mr. Soto. OK. Ms. Oyama.
    Ms. Peters. You have to control for it.
    Mr. Soto. Thank you.
    Ms. Oyama.
    Ms. Oyama. Content moderation has always been a 
multistakeholder approach, but I wanted to point out that the 
courts and the FTC do have jurisdiction. And, as you know, the 
FTC does have broad jurisdiction over tech companies already, 
and the courts are always looking at the outer contours of CDA 
230.
    Mr. Soto. Thank you.
    Dr. Farid.
    Dr. Farid. I agree it is a multistakeholder. We all have a 
responsibility here.
    Mr. Soto. And if we were to tighten up rules on the courts--
limiting it to injunctive relief--it would be great to hear, 
starting with you, Dr. Farid, whether you think that would be 
enough, and whether or not there should be attorney's fees at 
stake.
    Dr. Farid. Please understand, I am not a policymaker, I am 
not a lawyer. I am a technologist. I am not the one who should 
be answering that question, with due respect.
    Mr. Soto. OK. Ms. Oyama and Mr. Huffman, would injunctive 
relief in the courts be enough to change certain behaviors, do 
you think?
    Ms. Oyama. I think I just said courts do have the power of 
injunctive relief. I would want to echo the small business 
and startup voices, where they do say that the framework has 
created certainty, and that is essential for their content 
moderation and their economic viability.
    Mr. Soto. Thank you.
    Mr. Huffman.
    Mr. Huffman. Similar answer, sir. I would shudder to think 
what would happen if we, when we were smaller, or even now, 
were on the receiving end of armies of tort lawyers.
    Mr. Soto. Ms. Citron, I see you nodding quite a bit. 
Injunctive relief, attorney's fees, are these things we should 
be looking at?
    Ms. Citron. So I just, as you say injunctive relief, all I 
can see is the First Amendment and prior restraint. So I think 
we need to be sort of careful about the kinds of remedies that we 
think about. But law operates. If we allow law to operate, if 
people act unreasonably and recklessly, then I think the array 
of possibilities should be available.
    Mr. Soto. The last thing, I want to talk a little bit about 
230, Section 230, as far as being incorporated in our trade 
deals. I am from Orlando, the land where a fictional mouse and 
a fictional wizard are two of our greatest assets.
    Ms. Peters, I know you talked a little bit about the issue 
of including 230 in trade deals. How would that be problematic 
for a region like ours, where intellectual property is so 
critical?
    Ms. Peters. It is problematic because it potentially is 
going to tie Congress' hands from reforming the bill down the 
line, and that is precisely why industry is pushing to have it 
inside the trade deals.
    Ms. Oyama. There are 90 pages of copyright language in 
existing U.S. trade agreements. I think CDA 230 can just be 
treated the same; it doesn't bind Congress' hands at 
all.
    Mr. Soto. So if we adjusted laws here, that would affect 
the trade deals, is your opinion then?
    Ms. Oyama. There is no language in the trade deals that 
binds Congress' hands. Congress regularly has hearings on 
copyright, patent, pharmaceuticals, labor, climate, CDA 230. 
The trade agreements take the template language of U.S. law to 
create a U.S. framework at a time when countries like China and 
Russia are developing their own frameworks for the internet, and 
there is nothing in the current USMCA or the U.S.-Japan FTA that 
would limit your ability to later look at 230 and decide that it 
needs tweaks.
    Mr. Soto. Thanks. I yield back.
    Ms. Schakowsky. The gentleman yields back, and that 
concludes our period for questioning.
    And now I seek unanimous consent to put into the record a 
letter from Creative Future with attachments, a letter from 
American Hotel and Lodging Association, a letter from Consumer 
Technology Association, a letter from Travel Technology 
Association, a white paper from Airbnb, a letter from Common 
Sense Media, a letter from Computer & Communications Industry 
Association, a letter from Representative Ed Case, a letter in 
support of the PLAN Act, a letter from the i2Coalition, a 
letter to the FCC from Representative Gianforte, a letter from 
TechFreedom, a letter from the Internet Association, a letter 
from the Wikimedia Foundation, a letter from the Motion Picture 
Association, an article from The Verge titled ``Searching for 
Help,'' a statement from R Street.
    Without objection, so ordered.
    [The information appears at the conclusion of the 
hearing.]\1\
---------------------------------------------------------------------------
    \1\ The CreativeFuture letter and attachments have been retained in 
committee files and are also available at https://docs.house.gov/
meetings/IF/IF16/20191016/110075/HHRG-116-IF16-20191016-SD005.pdf.
---------------------------------------------------------------------------
    Ms. Schakowsky. And let me thank our witnesses. I think 
this was a really useful hearing. For those of you who have 
suggestions, more concrete ones than sometimes came up today, 
our committee would appreciate them very, very much, and I am 
sure the joint committee for this joint hearing would 
appreciate that as well.
    So I want to thank all of you so much for your thoughtful 
presentations and for the written testimony, which also often 
went way beyond what we were able to hear today.
    And so I want to remind Members that, pursuant to committee 
rules, they have 10 business days to submit additional 
questions for the record to be answered by witnesses who have 
appeared.
    And I want to ask witnesses to please respond promptly to 
any such questions that you may receive.
    And at this time the committees are adjourned. Thank you.
    [Whereupon, at 1:11 p.m., the subcommittees were 
adjourned.]
    [Material submitted for inclusion in the record follows:]

                Prepared Statement of Hon. Anna G. Eshoo

    Chairman Doyle and Chairwoman Schakowsky, thank you for 
holding today's joint-subcommittee hearing, and thank you to 
each witness for testifying today. In particular, I welcome Ms. 
Katherine Oyama of Google, which is headquartered in my 
district, and Mr. Steve Huffman of Reddit, who joined me for a 
town hall meeting on net neutrality at Stanford University 
earlier this year. This important discussion is happening at a 
critical juncture in the development of the internet ecosystem.
    Section 230 of the Communications Decency Act is the reason 
that the internet economy took off in the United States. It 
undergirds our ability to look up answers to questions, 
communicate with friends, stream videos, share photos, and so 
many other parts of our lives. As we discuss amending Section 
230, we can't forget that it is a critical foundation for much 
of modern society.
    I was a conferee for the Telecommunications Act of 1996, 
which included Section 230. I believed in the value of Section 
230 then, and I believe in the importance of maintaining 
Section 230 now. I'm always open to debating how laws, 
including this one, can be improved, but I caution my 
colleagues to proceed very carefully in considering amendments 
to Section 230, since such a large part of our economy and 
society depends on it.
    All of that being said, there are many issues with today's 
internet that could not have been conceived of in 1996. 
Congress can and should aim to solve these problems. The 
illegal sale of arms and opioids; radicalization of vulnerable 
individuals; planning mass violence; child sex abuse imagery; 
abuse and harassment of women and marginalized communities, 
especially through revenge pornography; deepfakes; 
misinformation, disinformation, and election interference; and 
doxxing and swatting are among the problematic practices that 
we should demand platforms moderate vigorously. When platforms 
fall short, we should consider making these acts violations of 
criminal law, to the degree that they are not already, before 
we view them through the lens of Section 230.
    I look forward to a healthy and vigorous discussion to help 
inform our efforts to ensure that we have a healthy internet 
ecosystem that protects all users.

[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]

                                 [all]