[House Hearing, 116 Congress]
[From the U.S. Government Publishing Office]


                   A COUNTRY IN CRISIS: HOW DISINFORMATION 
                         ONLINE IS DIVIDING THE NATION

=======================================================================

                         JOINT VIRTUAL HEARING

                               BEFORE THE

             SUBCOMMITTEE ON COMMUNICATIONS AND TECHNOLOGY

                                AND THE

            SUBCOMMITTEE ON CONSUMER PROTECTION AND COMMERCE

                                 OF THE

                    COMMITTEE ON ENERGY AND COMMERCE
                        HOUSE OF REPRESENTATIVES

                     ONE HUNDRED SIXTEENTH CONGRESS

                             SECOND SESSION

                               __________

                             JUNE 24, 2020

                               __________

                           Serial No. 116-116
                           
[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]                           


      Printed for the use of the Committee on Energy and Commerce

                   govinfo.gov/committee/house-energy
                        energycommerce.house.gov
                        
                                __________

                   U.S. GOVERNMENT PUBLISHING OFFICE                    
54-858 PDF                  WASHINGTON : 2024                    
          
-----------------------------------------------------------------------------------                            
                     
                        
             SUBCOMMITTEE ON COMMUNICATIONS AND TECHNOLOGY

                        MIKE DOYLE, Pennsylvania
                                 Chairman
JERRY McNERNEY, California           ROBERT E. LATTA, Ohio
YVETTE D. CLARKE, New York             Ranking Member
DAVID LOEBSACK, Iowa                 JOHN SHIMKUS, Illinois
MARC A. VEASEY, Texas                STEVE SCALISE, Louisiana
A. DONALD McEACHIN, Virginia         PETE OLSON, Texas
DARREN SOTO, Florida                 ADAM KINZINGER, Illinois
TOM O'HALLERAN, Arizona              GUS M. BILIRAKIS, Florida
ANNA G. ESHOO, California            BILL JOHNSON, Ohio
DIANA DeGETTE, Colorado              BILLY LONG, Missouri
G. K. BUTTERFIELD, North Carolina    BILL FLORES, Texas
DORIS O. MATSUI, California, Vice    SUSAN W. BROOKS, Indiana
    Chair                            TIM WALBERG, Michigan
PETER WELCH, Vermont                 GREG GIANFORTE, Montana
BEN RAY LUJAN, New Mexico            GREG WALDEN, Oregon (ex officio)
KURT SCHRADER, Oregon
TONY CARDENAS, California
DEBBIE DINGELL, Michigan
FRANK PALLONE, Jr., New Jersey (ex 
    officio)
             Subcommittee on Consumer Protection and Commerce

                        JAN SCHAKOWSKY, Illinois
                                Chairwoman
KATHY CASTOR, Florida                CATHY McMORRIS RODGERS, Washington
MARC A. VEASEY, Texas                  Ranking Member
ROBIN L. KELLY, Illinois             FRED UPTON, Michigan
TOM O'HALLERAN, Arizona              MICHAEL C. BURGESS, Texas
BEN RAY LUJAN, New Mexico            ROBERT E. LATTA, Ohio
TONY CARDENAS, California, Vice      BRETT GUTHRIE, Kentucky
    Chair                            LARRY BUCSHON, Indiana
LISA BLUNT ROCHESTER, Delaware       RICHARD HUDSON, North Carolina
DARREN SOTO, Florida                 EARL L. ``BUDDY'' CARTER, Georgia
BOBBY L. RUSH, Illinois              GREG GIANFORTE, Montana
DORIS O. MATSUI, California          GREG WALDEN, Oregon (ex officio)
JERRY McNERNEY, California
DEBBIE DINGELL, Michigan
FRANK PALLONE, Jr.,  New Jersey (ex 
    officio)
                             C O N T E N T S

                              ----------                              
                                                                   Page
Hon. Mike Doyle, a Representative in Congress from the 
  Commonwealth of Pennsylvania, opening statement................     2
    Prepared statement...........................................     4
Hon. Robert E. Latta, a Representative in Congress from the State 
  of Ohio, opening statement.....................................     5
    Prepared statement...........................................     6
Hon. Jan Schakowsky, a Representative in Congress from the State 
  of Illinois, opening statement.................................     7
    Prepared statement...........................................     9
Hon. Brett Guthrie, a Representative in Congress from the 
  Commonwealth of Kentucky, opening statement....................     9
    Prepared statement...........................................    11
Hon. Cathy McMorris Rodgers, a Representative in Congress from 
  the State of Washington, prepared statement....................   115
Hon. Frank Pallone, Jr., a Representative in Congress from the 
  State of New Jersey, prepared statement........................   116
Hon. Greg Walden, a Representative in Congress from the State 
  of Oregon, prepared statement..................................   117
Hon. Anna G. Eshoo, a Representative in Congress from the 
  State of California, prepared statement........................   118

                               Witnesses

Brandi Collins-Dexter, Senior Campaign Director, Color of Change.    15
    Prepared statement...........................................    17
    Answers to submitted questions...............................   228
Hany Farid, Ph.D., Professor, University of California, Berkeley.    25
    Prepared statement...........................................    27
    Answers to submitted questions...............................   233
Neil Fried, Former Chief Counsel for Communications and 
  Technology, Energy and Commerce Committee, Principal, Digital 
  Frontiers Advocacy.............................................    34
    Prepared statement...........................................    36
    Answers to submitted questions...............................   236
Spencer Overton, President, Joint Center for Political and 
  Economic Studies, Professor of Law, George Washington 
  University.....................................................    43
    Prepared statement...........................................    45
    Answers to submitted questions...............................   244

                           Submitted Material

Letter of June 22, 2020, to Mr. Doyle, et al., by Brenda Victoria 
  Castillo, President and CEO, National Hispanic Media Coalition, 
  submitted by Mr. Doyle.........................................   119
Letter of June 24, 2020, to Mr. Doyle and Ms. Schakowsky, by Amb. 
  Marc Ginsberg, President, Coalition for a Safer Web, submitted 
  by Mr. Doyle...................................................   124
Letter of June 23, 2020, to Mr. Doyle and Ms. Schakowsky, by 
  Arthur Sidney, Vice President, Public Policy, Computer and 
  Communications Industry Association (CCIA), and Carl Szabo, 
  Vice President and General Counsel, NetChoice, submitted by 
  Mr. Doyle......................................................   132
Letter to Mr. Doyle, et al., by Zeve Sanderson, Executive 
  Director, NYU's Center for Social Media and Politics, submitted 
  by Mr. Doyle...................................................   134
Letter of June 24, 2020, to Ms. Schakowsky, et al., by Lisa 
  Macpherson, Senior Policy Fellow and Bertram Lee Jr., Policy 
  Counsel, Public Knowledge, submitted by Mr. Doyle..............   144
Letter of June 24, 2020, to Subcommittee on Communications and 
  Technology and Consumer Protection and Commerce, from 
  Leadership Conference on Civil and Human Rights, submitted by 
  Mr. Doyle......................................................   151
Essay by Mr. Spencer Overton, submitted by Mr. Doyle.............   158
Article of May 26, 2020, ``Facebook Executives Shut Down Efforts 
  to Make the Site Less Divisive,'' The Wall Street Journal, by 
  Jeff Horwitz and Deepa Seetharaman, submitted by Mr. Doyle.....   195
Letter to Mr. Zuckerberg, from Ms. Rochester, et al., submitted 
  by Mr. Doyle...................................................   205
Letter of June 24, 2020, to Mr. Pallone and Mr. Walden, by 
  Arvydas Urbonavicius, President, Lithuanian-American Community, 
  Inc., and Krista Bard, Chair, Public Affairs Committee, 
  Lithuanian-American Community, Inc., Honorary Consul General of 
  the Republic of Lithuania to the Commonwealth of Pennsylvania, 
  submitted by Mr. Doyle.........................................   209
Statement of Central and East European Coalition, June 23, 2020, 
  submitted by Mr. Doyle.........................................   211
Research of May 5, 2020, ``Conspiracy theories in Lithuania 
  spread nearly 200 false statements--from scary delusions to 
  dangerous disinformation,'' by Debunk EU, submitted by Mr. 
  Doyle..........................................................   212
Letter of June 24, 2020, to Ms. Schakowsky, et al., by Koustubh 
  ``K.J.'' Bagchi, Senior Policy Counsel and Spandana Singh, 
  Policy Analyst, Open Technology Institute, submitted by Mr. 
  Doyle..........................................................   215
Letter of June 24, 2020, to Mr. Doyle, et al., by Jonathan 
  Schwantes, Senior Policy Counsel, and Laurel Lehman, Policy 
  Analyst, Consumer Reports, submitted by Mr. Doyle..............   223

 
 A COUNTRY IN CRISIS: HOW DISINFORMATION ONLINE IS DIVIDING THE NATION

                              ----------                              


                        WEDNESDAY, JUNE 24, 2020

                  House of Representatives,
     Subcommittee on Communications and Technology,
                             joint with the
  Subcommittee on Consumer Protection and Commerce,
                          Committee on Energy and Commerce,
                                                    Washington, DC.
    The subcommittees met, pursuant to call, at 11:32 a.m., via 
Cisco Webex online video conferencing, Hon. Mike Doyle 
(chairman of the Subcommittee on Communications and Technology) 
and Hon. Janice Schakowsky (chairwoman of the Subcommittee on 
Consumer Protection and Commerce) presiding.
    Members present from Subcommittee on Communications and 
Technology: Representatives Doyle, McNerney, Clarke, Veasey, 
Soto, O'Halleran, Butterfield, Matsui, Welch, Schrader, 
Cardenas, Dingell, Pallone (ex officio), Latta (subcommittee 
ranking member), Shimkus, Kinzinger, Bilirakis, Johnson, Long, 
Flores, Brooks, Walberg, Gianforte, and Walden (ex officio).
    Members present from Subcommittee on Consumer Protection 
and Commerce: Representatives Schakowsky, Castor, Veasey, 
Kelly, O'Halleran, Cardenas, Blunt Rochester, Soto, Rush, 
Matsui, McNerney, Dingell, Pallone (ex officio), Burgess, 
Latta, Guthrie, Hudson, Carter, Duncan, Gianforte, and Walden 
(ex officio).
    Also present: Representative Sarbanes.
    Staff present: Billy Benjamin, System Administrator; 
Jeffrey C. Carroll, Staff Director; Parul Desai, FCC Detailee; 
Lisa Goldman, Senior Counsel; Waverly Gordon, Deputy Chief 
Counsel; Tiffany Guarascio, Deputy Staff Director; Alex Hoehn-
Saric, Chief Counsel, Communications and Consumer Protection; 
Jerry Leverich, Senior Counsel; Dan Miller, Jr. Professional 
Staff; Phil Murphy, Policy Coordinator for Communications and 
Technology; Joe Orlando, Executive Assistant; Kaitlyn Peel, 
Digital Director; Tim Robinson, Chief Counsel; Chloe Rodriguez, 
Policy Analyst; Sydney Terry, Policy Coordinator for Consumer 
Protection and Commerce; Nolan Ahern, Professional Staff, 
Health; Jennifer Barblan, Minority Chief Counsel, Oversight and 
Investigations; Mike Bloomquist, Minority Staff Director; S.K. 
Bowen, Minority Press Secretary; William Clutterbuck, Minority 
Staff Assistant; Jerry Couri, Minority Deputy Chief Counsel, 
Environment and Climate Change; Diane Cutler, Minority 
Detailee, Oversight and Investigations; Jordan Davis, Minority 
Senior Advisor; Theresa Gambo, Minority Human Resources/Office 
Administrator; Caleb Graff, Minority Professional Staff Member, 
Health; Tyler Greenberg, Minority Staff Assistant; Brittany 
Havens, Minority Professional Staff, Oversight and 
Investigations; Tiffany Haverly, Minority Communications 
Director; Peter Kielty, Minority General Counsel; Bijan 
Koohmaraie, Minority Deputy Chief Counsel, Consumer Protections 
and Commerce; Tim Kurth, Minority Chief Counsel, Communications 
and Technology; Ryan Long, Minority Deputy Staff Director; Mary 
Martin, Minority Chief Counsel, Energy and Environment and 
Climate Change; Brandon Mooney, Minority Deputy Chief Counsel, 
Energy; Kate O'Connor, Minority Chief Counsel, Communications 
and Technology; James Paluskiewicz, Minority Chief Counsel, 
Health; Brannon Rains, Minority Policy Analyst; Kristin Seum, 
Minority Counsel, Health; Kristen Shatynski, Minority 
Professional Staff Member, Health; Alan Slobodin, Minority 
Chief Investigative Counsel, Oversight and Investigations; 
Peter Spencer, Minority Senior Professional Staff Member, 
Environment and Climate Change; Natalie Sohn, Minority Counsel, 
Oversight and Investigations; Evan Viau, Minority Professional 
Staff, Communications and Technology; and Everett Winnick, 
Minority Director of Information Technology.
    Mr. Doyle. OK. So the committee will now come to order. 
Today, the Subcommittee on Communications and Technology and 
the Subcommittee on Consumer Protection and Commerce are 
holding a joint hearing entitled, ``A Country in Crisis: How 
Disinformation Online Is Dividing the Nation.''
    Due to the COVID-19 public health emergency, today's 
hearing is being held remotely. All Members and witnesses will 
be participating via videoconferencing.
    As a part of our hearing, microphones will be set on mute 
for the purpose of eliminating inadvertent background noise.
    Members and witnesses, you will need to unmute your 
microphone each time you wish to speak. Documents for the 
record can be sent to Chloe Rodriguez at the email address we 
provided to staff. All documents will be entered into the 
record at the conclusion of the hearing.
    The Chair will now recognize himself for 5 minutes for an 
opening statement.

   OPENING STATEMENT OF HON. MIKE DOYLE, A REPRESENTATIVE IN 
         CONGRESS FROM THE COMMONWEALTH OF PENNSYLVANIA

    Good morning, and welcome to today's joint hearing on 
disinformation and the crisis it is creating in our country and 
for our democracy.
    I want to thank our panel of witnesses for joining us 
virtually today. While the committee has held several virtual 
hearings so far, this is the first one I have chaired. So 
please bear with me as we get through this.
    The matter before the committee today is one of pressing 
importance: the flood of disinformation online, principally 
distributed by social media companies, and the dangerous and 
divisive impact it is having on our nation as we endure the 
COVID-19 epidemic.
    Over 120,000 Americans have already lost their lives to 
this virus and nearly 2.4 million Americans have been infected. 
Tens of millions of people are out of work as we attempt to 
stop the spread of this virus and prevent an even greater 
disaster.
    In the midst of this historic crisis, we are also facing a 
historic opportunity. Tens of millions of Americans are calling 
for racial justice and systematic changes to end racism and 
police brutality in the wake of the horrific murders of George 
Floyd, Breonna Taylor, and countless other black Americans at 
the hands of law enforcement.
    The Black Lives Matter movement has resulted in protests 
around the globe and online as people are taking to the 
streets and to social media to express their support for change.
    But as we march for progress and grapple with this deadly 
disease, the divisions in our country are growing. While our 
nation has long been divided, today we see that much of this 
division is driven by misinformation distributed and amplified 
by social media companies, the largest among them being 
Facebook, YouTube, and Twitter.
    These platforms have become central to the daily lives of 
so many around the globe and to the way people get their news, 
interact with each other, and engage in political discourse.
    Our nation and the world are facing a heretofore 
unprecedented tsunami of disinformation that threatens to 
devastate our country and the world.
    It has been driven by hostile foreign powers seeking to 
weaken our democracy and divide our people, by those in our 
country who seek to divide us for their own political gain, and 
by social media companies themselves, who have put profits 
before people as platforms have become awash in disinformation 
and their business models have come to depend on the engaging 
and enraging nature of these false truths.
    When Congress enacted Section 230 of the Communications 
Decency Act in 1996, this provision provided online companies 
with a sword and a shield to address concerns about content 
moderation and a website's liability for hosting third-party 
content.
    And while a number of websites have used 230 for years to 
remove sexually explicit and overtly violent content, they have 
failed to act to curtail the spread of disinformation.
    Instead, they have built systems to spread it at scale and 
to monetize the way it confirms our implicit biases. Everyone 
likes to hear and read things that confirm what they think is 
true, and these companies have made trillions of dollars by 
feeding people what they want to hear.
    As a result, these platforms have peddled lies about COVID-
19, Black Lives Matter, voting by mail, and much, much more.
    When companies have done the right thing and stepped up to 
take down disinformation, they have been attacked by those who 
have benefitted from it. Recently, Twitter labeled a number of 
tweets by President Trump as inaccurate, abusive, and 
glorifying violence.
    In response, President Trump issued an executive order 
threatening all social media companies. The Department of 
Justice has issued similarly thuggish proposals as well.
    The intent of these actions is clear--to bully social media 
companies into inaction. Social media companies need to step up 
to protect our civil rights, our human rights, and human lives, 
not to sit on the sideline as the nation drowns in a sea of 
disinformation.
    Make no mistake: the future of our democracy is at stake 
and the status quo is unacceptable.
    While Section 230 has long provided online companies the 
flexibility and the liability protections they need to innovate 
and to connect people from around the world, it has become 
clear that reform is necessary if we want to stem the tide of 
disinformation rolling over our country.
    That concludes my opening statement.
    [The prepared statement of Mr. Doyle follows:]

                 Prepared Statement of Hon. Mike Doyle

    Good morning and welcome to today's joint hearing on 
disinformation and the crisis it is creating in our country and 
for our democracy. I'd like to thank our panel of witnesses for 
joining us virtually today.
    While the Committee has held several virtual hearings so 
far, this is the first one I have chaired, so please bear with 
us.
    The matter before the Committee today is one of pressing 
importance, the flood of disinformation online - principally 
distributed by social media companies--and the dangerous and 
divisive impact it is having on our nation as we endure the 
COVID-19 epidemic.
    More than 120,000 Americans have already lost their lives 
to this virus, and nearly 2.4 million Americans have been 
infected. Tens of millions of people are out of work as we 
attempt to stop the spread of this virus and prevent an even 
greater disaster.
    In the midst of this historic crisis, we are also facing a 
historic opportunity. Tens of millions of Americans are calling 
for racial justice and systemic changes to end racism and 
police brutality in the wake of the horrific murders of George 
Floyd, Breonna Taylor, and countless other Black Americans at 
the hands of law enforcement.
    The Black Lives Matter movement has resulted in protests 
around the globe and online, as people are taking to the 
streets and to social media to express their support for 
change.
    But as we march for progress and grapple with this deadly 
disease, the divisions in our country are growing. While our 
nation has long been divided, today we see that much of this 
division is driven by misinformation distributed and amplified 
by social media companies - the largest among them being 
Facebook, YouTube, and Twitter.
    These platforms have become central to the daily lives of 
many around the globe--and to the way that people get their 
news, interact with each other, and engage in political 
discourse.
    Our nation and the world are facing an unprecedented 
tsunami of disinformation that threatens to devastate our 
country and the world. It has been driven by hostile foreign 
powers seeking to weaken our democracy and divide our people, 
by those in our country who seek to divide us for their own 
political gain, and by the social media companies themselves - 
who have put profits before people as their platforms have 
become awash in disinformation and their business models have 
come to depend on the engaging and enraging nature of these 
false truths.
    When Congress enacted Section 230 of the Communications 
Decency Act in 1996, this provision provided online companies 
with a sword and a shield to address concerns about content 
moderation and a website's liability for hosting third-party 
content. And while a number of websites have used 230 for years 
to remove sexually explicit and overly violent content, they 
have failed to act to curtail the spread of disinformation. 
Instead they have built systems to spread it at scale and to 
monetize the way it confirms our implicit biases.
    Everyone likes to hear and to read things that confirm what 
they think is true, and these companies have made trillions of 
dollars by feeding people what they want to hear. As a result, 
these platforms have peddled lies about COVID-19, Black Lives 
Matter, voting by mail, and much, much more.
    When companies have done the right thing and stepped up to 
take down disinformation, they have been attacked by those who 
have benefited from it. Recently, Twitter labelled a number of 
tweets by President Trump as inaccurate, abusive, and 
glorifying violence. In response, President Trump issued an 
Executive Order threatening all social media companies. The 
Department of Justice has issued similarly thuggish proposals 
as well. The intent of these actions is clear: to bully social 
media companies into inaction.
    Social media companies need to step up to protect our civil 
rights, our human rights, and human lives--NOT sit on the 
sidelines as our nation drowns in a sea of disinformation.
    Make no mistake, the future of our democracy is at stake 
and the status quo is unacceptable.
    While Section 230 has long provided online companies the 
flexibility and liability protections they need to innovate and 
to connect people from around the world, it has become clear 
that reform is necessary if we want to stem the tide of 
disinformation rolling over our country.

    Mr. Doyle. It now gives me great pleasure to recognize my 
good friend, Mr. Latta, ranking member of the Subcommittee on 
Communications and Technology, for 5 minutes for his opening 
statement.

OPENING STATEMENT OF HON. ROBERT E. LATTA, A REPRESENTATIVE IN 
                CONGRESS FROM THE STATE OF OHIO

    Mr. Latta. Well, thank you, Mr. Chairman, and thank you 
very much for holding today's hearing on disinformation online. 
I also want to thank our witnesses for joining us today.
    We are living in a time when Americans increasingly rely on 
the internet in their daily lives, and while our nation is 
battling the coronavirus, having access to accurate information 
can mean the difference between life and death.
     But as we all know, not everything we see and read online 
can be taken as fact due to inaccuracies or outright lies. I 
have some folks that have told me that everything on the 
internet is true because you can't put anything on the internet 
that wouldn't be true. So that is what some people were doing.
    To date, companies have worked to police their platforms to 
remove harmful or inaccurate information online. In fact, 
Congress enacted Section 230 of the Communications Decency Act 
to allow internet companies to do just that.
    The law was intended to encourage internet platforms, then 
interactive computer services like CompuServe and America 
Online, to proactively take down offensive content without 
having the fear of being held liable for doing the right thing.
    Hateful and racist comments should have no place in our 
society or on our platforms, and Section 230 provides a tool 
for companies to make sure this doesn't happen.
    And while some companies use this shield for its intended 
purpose, it is concerning that we are seeing others abuse 
Section 230 after being pressured by activist employees or 
advertisers to make Good Samaritan policies intended to fit 
their own political agenda.
    Many tech companies have benefitted and grown because they 
are afforded CDA 230 protections. These protections have 
allowed them to become the true gatekeepers to the internet. 
But too often, we see that they don't want to take 
responsibility for the content within those gates.
    Let me be clear. I am not advocating that Congress repeal 
the law nor am I advocating for Congress to consider niche 
carve outs that could lead to a patchwork of applicability of 
the law.
    Section 230 was enacted for a reason. It is unfortunate, 
however, that the courts have such a broad interpretation of 
Section 230, simply granting broad liability protection without 
platforms having to demonstrate that they are doing, and I 
quote, ``everything possible.''
    Numerous platforms have hidden behind Section 230 to avoid 
litigation without having to take responsibility. Not only are 
Good Samaritans sometimes being selective in taking down 
harmful or illegal activity, but Section 230 has been 
interpreted so broadly that bad Samaritans can skate by without 
accountability.
    Freedom of speech is a fundamental right upon which our 
democracy is built, and we must make sure these companies are 
not policing the free flow of speech, especially when it comes 
to political discussions, as they continue to operate online 
platforms.
    While we are talking about private companies, many of the 
concerns I have outlined here today could simply be addressed 
if these companies began to equitably and consistently enforce 
their terms of service.
    If companies have the time and resources to make the 
difficult complex decisions over moderating conservative 
speech, then surely they can make the easy decisions when it 
comes to taking down illegal, hate, or racist content on their 
platforms.
    I hope reports of political bias among the large internet 
platforms are not an indication of their prioritization of 
resources. If so, then we should consider congressional 
scrutiny over how Section 230 is being used in the marketplace.
    So I will say it again. I do not believe repealing Section 
230 is the answer. But I do believe these companies could need 
more oversight as to how they are making certain decisions 
related to their content moderation practices, what they choose 
to censor and what they don't.
    We should make every effort to ensure that companies are 
using the sword provided by Section 230 to take down offensive 
and lewd content but that they keep their power in check when 
it comes to censoring political speech.
    Again, terms of services should be enforced equitably and 
consistently. I look forward to hearing from today's witnesses, 
and Mr. Chairman, I yield back.
    [The prepared statement of Mr. Latta follows:]

               Prepared Statement of Hon. Robert E. Latta

    Welcome to today's hearing on disinformation online. We are 
living in a time where Americans increasingly rely on the 
Internet in their daily lives, and while our nation is battling 
the coronavirus, having access to accurate information can mean 
the difference between life and death.
    But as we all know, not everything we see and read online 
can be taken as fact due to inaccuracies. To date, companies 
are doing a good job of policing their platforms to remove 
harmful or inaccurate information online. In fact, Congress 
enacted Section 230 of the Communications Decency Act to allow 
Internet companies to do just that. The law was intended to 
encourage Internet platforms-then, ``interactive computer 
services'' like CompuServe and America Online-to proactively 
take down offensive content without having the fear of being 
held liable for doing the right thing. Hateful and racist 
comments should have no place in our society or on our 
platforms, and Section 230 provides a tool for companies to 
make sure that doesn't happen.
    And while some companies use this shield for its intended 
purpose, it is concerning that we are seeing others abuse 
Section 230 after being pressured by activist employees or 
advertisers to make ``Good Samaritan'' policies intended to fit 
their own political agenda. Many tech companies have benefited 
and grown on a large scale because they are afforded CDA 230 
protections. These protections have allowed them to become the 
true gatekeepers to the Internet, but we often see that they 
don't want to take responsibility for the content within those 
gates.
    Let me be clear, I am not advocating that Congress repeal 
the law. Nor am I advocating for Congress to consider niche 
``carve-outs'' that could lead to a patchwork of applicability 
of the law. Section 230 was enacted for a reason. It is 
unfortunate, however, that the courts took such a broad 
interpretation of Section 230, simply granting broad liability 
protection without platforms having to demonstrate that they 
are doing--and I quote--``everything possible.'' Numerous 
platforms have hidden behind Section 230 to avoid litigation 
without having to take any responsibility. Not only are ``good 
Samaritans'' sometimes being selective in taking down harmful 
or illegal activity, but Section 230 has been interpreted so 
broadly that "bad Samaritans" can skate by without 
accountability, too.
    Freedom of speech is a fundamental right upon which our 
democracy was built, and we must make sure these companies are 
not policing the free flow of speech, especially when it comes 
to political discussions, as they continue to operate online 
platforms. While we are talking about private companies, many 
of the concerns I've outlined here today could simply be 
addressed if these companies began enforcing their terms of 
service equitably and consistently. If companies have the time 
and resources to make the difficult, complex decisions over 
moderating conservative speech, then surely they can make the 
easy decisions when it comes to taking down illegal, hate, or 
racist content on their platforms. I hope reports of political 
bias among large Internet platforms are not an indication of 
their prioritization of resources. If so, then we should 
consider Congressional scrutiny over how Section 230 is being 
used in the marketplace.
    So, I will say it again: I do not believe repealing Section 
230 is the answer. But I do believe these companies might need 
more oversight as to how they are making certain decisions 
related to their content moderation practices: what they choose 
to censor and what they don't. We should make every effort to 
ensure that companies are using the sword provided by Section 
230 to take down offensive and lewd content, but that they keep 
their power in check when it comes to censoring political 
speech. Again, terms of services should be enforced equitably 
and consistently.
    I look forward to hearing from the witnesses today. I yield 
back.

    Mr. Doyle. OK. I thank the gentleman.
    The Chair now recognizes Ms. Schakowsky, chairwoman of the 
Subcommittee on Consumer Protection and Commerce, for 5 minutes 
for her opening statement.

 OPENING STATEMENT OF HON. JAN SCHAKOWSKY, A REPRESENTATIVE IN 
              CONGRESS FROM THE STATE OF ILLINOIS

    Jan, you need to unmute if you haven't.
    Ms. Schakowsky. OK. I do that all the time. Sorry.
    Thank you, Chairman Doyle. I am so glad to be doing a joint 
hearing with you, and I want to thank our distinguished panel 
for joining us today.
    Last fall, Chairman Doyle and I held a joint hearing on 
Section 230 of the Communications Decency Act, and subsequently 
my subcommittee held a hearing on unsafe products and fake 
reviews found online.
    At both hearings, industry representatives came and 
testified. Big Tech was here, and we heard that content 
moderation and consumer protection were really hard and that 
industry could always do better.
    And they made promises, but they discouraged congressional 
action. I think they may have even apologized, as Big Tech 
typically does when it appears before this committee.
    Fast forward to six months later and add a global health 
pandemic and nationwide protests against police brutality and 
racial inequality. And as we will hear today, it is an 
understatement to say that industry could still be doing 
better.
    The harms associated with misinformation and disinformation 
continue to fall disproportionately on communities of color, 
who already suffer worse outcomes from COVID-19.
    And at the same time, the president himself is continually 
spreading dangerous disinformation that Big Tech is all too 
eager to promote.
    No matter what the absolutists say about Section 230, it is 
not only about free speech and content moderation. If it were, 
our conversation today would be very different.
    Instead, Big Tech uses it as a shield to protect itself 
from liability when it fails to protect consumers from harm and 
from harmful public health--or harms public health, and use it 
as a sword to intimidate cities and states when they consider 
legislation, as Airbnb did in 2016 when New York City was 
considering regulating its online rental market for private 
homes.
    The truth is Section 230 protects business models and the 
generation--and generates prolific scams, fake news, fake 
reviews, and unsafe, counterfeit, and stolen products.
    This was never the intent, and since both courts and the 
industry refuse to change it, Congress must do it. But we must 
do it in a responsible way.
    The president's recent actions are designed to kneecap 
platforms that fact check him and engage in--checking the time 
here--engage in what he claims is bias against conservative 
views.
    Let me be clear. The president is using his position to 
chill speech and that is just wrong. We must encourage content 
moderation that fosters a safer and healthier online world.
    And don't be fooled by made-up claims of bias against 
conservatives. Today, it seems there is less of a bias against 
conservatives and, rather, a bias for conservatives.
    On June 19th, nine of the ten top-performing political 
pages on Facebook were conservative pages, including Donald 
Trump, Donald Trump for President, Ben Shapiro, Breitbart and 
Sean Hannity.
    And as the New York Times reported over the weekend, 
Facebook in particular seems to enjoy a cozy relationship with 
the Trump administration, aided by Facebook's loyal Trump 
supporters, Joel Kaplan and Peter Thiel.
    I hope that Mr. Kaplan and Mr. Thiel will soon make it 
before Congress, make themselves available so that we can ask 
questions about what role they play.
    And I am just so anxious to hear about--hear from our 
witnesses and I will yield back at this time.
    Thank you, Mr. Chairman.
    [The prepared statement of Ms. Schakowsky follows:]

               Prepared Statement of Hon. Jan Schakowsky

    Good morning and thank you for being here today. Thank you 
to our distinguished panel for joining us today.
    Last fall Chairman Doyle and I held a joint hearing on 
Section 230, and subsequently my subcommittee held a hearing on 
unsafe products and fake reviews found online. At both 
hearings, industry representatives from Big Tech testified, and 
we heard that content moderation and consumer protection were 
really hard, and that industry could always do better. They 
made promises and discouraged Congressional action. I think 
they may have even apologized, as Big Tech typically does when 
it appears before this committee.
    Fast forward six months, add a global health crisis and 
nationwide protests against police brutality and racial 
inequality. As we will hear today, it's an understatement to 
say that industry could still be doing better.
    The harms associated with misinformation and disinformation 
continue to fall disproportionately on communities of color, 
who already suffer worse outcomes from COVID-19.
    All the while, the President himself is continually 
spreading dangerous disinformation that Big Tech is all too 
eager to profit from.
    No matter what the absolutists say, Section 230 is not only 
about free speech and content moderation. If it were, our 
conversation today would be different. Instead, Big Tech uses 
it as a shield to protect itself from liability when it fails 
to protect consumers or harms public health, and uses it as a 
sword to intimidate cities and states when they consider 
legislation, as Airbnb did in 2016 when New York City was 
considering regulating its online rental market for private 
homes.
    The truth is, Section 230 protects business models that 
generate profits off scams, fake news, fake reviews, and 
unsafe, counterfeit, and stolen products. This was never the 
intent, and since both courts and industry refuse to change, 
Congress must act.
    But we must do so responsibly. The President's recent 
actions are designed to kneecap platforms that fact check him 
or engage in what he claims is bias against conservative views. 
Let me be clear, the President is using his position to chill 
speech and that is wrong.
    We must encourage content moderation that fosters a safer 
and healthy online world. And don't be fooled by made up claims 
of bias against conservatives. Today, it seems there is a less 
of a bias against conservatives and rather a bias for 
conservatives.
    On June 19th, 9 of the 10 top-performing political pages on 
Facebook were conservative pages, including Donald J. Trump, 
Donald Trump for President, Ben Shapiro, Breitbart and Sean 
Hannity.
    And as the New York Times reported over the weekend, 
Facebook in particular seems to enjoy a cozy relationship with 
the Trump Administration, aided by Facebook's own loyal Trump 
supporters, Joel Kaplan and Peter Thiel. I hope Mr. Kaplan and 
Mr. Thiel will soon make themselves available to Congress to 
answer questions about what role they play in information 
dissemination, and how they balance this incredible 
responsibility with their extreme partisan ties and views.
    Regardless, as the testimony today demonstrates, something 
needs to be done. The American people are dying and suffering 
as a result of online disinformation. I look forward to working 
with my colleagues to modernize Section 230 and put platforms 
on a path that helps all Americans.

    Mr. Doyle. Thank you. The gentle lady yields back her time.
    The Chair now recognizes, Mrs. Rodgers has yielded her 
time, I believe, to Mr. Guthrie.

 OPENING STATEMENT OF HON. BRETT GUTHRIE, A REPRESENTATIVE IN 
            CONGRESS FROM THE COMMONWEALTH OF KENTUCKY

    So, Mr. Guthrie, you are recognized for 5 minutes.
    Mr. Guthrie. Thank you, Mr. Chair.
    I want to thank the Chairs and the ranking members for 
holding this hearing and our distinguished panelists for being 
here.
    The coronavirus outbreak has shown us the true strength of 
American technology. As much of our world became digital, we 
saw innovation across the board, from doctors switching to 
telehealth services to educators teaching students from afar, 
to friends and family connecting online, more so than ever 
before.
    Through this explosion of innovation, we have seen the best 
in people, companies and individuals stepping up to adapt to 
our new world and neighbors helping neighbors as we all go 
through this together.
    Sadly, it has also brought out the worst in some people. 
Though social media and other online websites can be used to 
connect us with each other and to information, unfortunately, 
bad actors can also weaponize these platforms to further spread 
disinformation, putting Americans' health and security at risk.
    Social media platforms have responded to disinformation 
campaigns differently. Some have taken a more active approach 
to monitoring and removing such content while others have 
allowed disinformation, misinformation, and offensive and 
intolerable comments to fester on their sites unchecked.
    In either case, I think we can all agree that better 
transparency regarding how these internal guidelines are 
determined as well as the mechanisms by which such content 
is removed and the appeals processes they have in place is 
needed.
    We must also ensure that social media companies are 
applying these standards fairly and not just labeling a 
differing opinion as disinformation.
    During this public health crisis, the Federal Trade 
Commission has continued its work protecting consumers, 
providing guidance to businesses and protecting competition in 
the marketplace throughout the pandemic.
    They have issued dozens of warnings to individuals and 
entities marketing therapies and products that claim to prevent 
or treat COVID-19. Further, they have disseminated information 
to consumers on how to avoid such scams and verify information 
they come across online, which I have shared with my 
constituents.
    This information will continue to be vital as we navigate 
this unprecedented time.
    Looking forward, I believe that emerging technology has the 
potential to be useful in combating illicit content online and 
putting a stop to these bad actors.
    That is why I recently introduced the Countering Online 
Harms Act, which would direct the Federal Trade Commission to 
conduct a study on how artificial intelligence may be used to 
identify and remove harmful online content, such as 
disinformation campaigns, deep fakes, counterfeit products, and 
other deceptive and fraudulent content that is intended to scam 
or do harm.
    Further, my bill would require the FTC to submit a 
subsequent report to Congress with recommendations on how to 
implement solutions with AI to address those issues and 
recommendations for potential legislation.
    Throughout the coronavirus pandemic, we have tapped into 
America's innovative potential to solve many of our new 
problems and I hope the Countering Online Harms Act will build 
on this innovation to help protect American consumers as more 
and more of our lives are conducted online.
    Thank you to all the witnesses for your participation 
today. I look forward to hearing your testimony.
    Mr. Chairman, I would like to submit for the record Ranking 
Member McMorris Rodgers' opening statement.
    And thank you again. I yield back.
    [The prepared statement of Mr. Guthrie follows:]

                 Prepared Statement of Hon. Brett Guthrie
    Thank you Chairman Doyle, Ranking Member Latta, Chair 
Schakowsky, Ranking Member McMorris Rodgers, Chairman Pallone, 
and Ranking Member Walden for holding this hearing.
    The coronavirus outbreak has shown us the true strength of 
American technology. As much of our world became digital, we 
saw innovation across the board--from doctors switching to 
telehealth services, to educators teaching students from afar, 
to friends and family connecting online more so than ever 
before.
    Through this explosion of innovation, we have seen the best 
in people--companies and individuals stepping up to adapt to 
our new world, and neighbors helping neighbors as we all go 
through this together. Sadly, it has also brought out the worst 
in some people. While social media and other online websites 
can be used to connect us with each other and to information, 
unfortunately, bad actors can also weaponize these same 
platforms to further spread disinformation, putting Americans' 
health and security at risk.
    Social media platforms have responded to disinformation 
campaigns differently--some have taken a more active approach to 
monitoring and removing such content, while others have allowed 
disinformation, misinformation, and offensive and intolerable 
comments to fester on their sites, unchecked. In either case, I 
think we can all agree that better transparency regarding how 
these internal guidelines are determined, as well as the 
mechanisms by which such content is removed and the appeals 
processes they have in place, is needed. We must also ensure 
that social media companies are applying these standards 
fairly, and not just labeling a differing opinion as 
``disinformation.''
    During this public health crisis, the Federal Trade 
Commission has continued its work protecting consumers, 
providing guidance to businesses, and protecting competition in 
the marketplace throughout the pandemic. They have issued 
dozens of warnings to individuals and entities marketing 
therapies and products that claim to prevent or treat COVID-19. 
Further, they have disseminated information to consumers on 
how to avoid such scams and verify information they come across 
online, which I have shared with my constituents. This 
information will continue to be vital as we navigate this 
unprecedented time.
    Looking forward, I believe that emerging technology also 
has the potential to be a useful tool in combatting illicit 
content online and putting a stop to these bad actors. That is 
why I recently introduced the Countering Online Harms Act, 
which would direct the Federal Trade Commission to conduct a 
study on how artificial intelligence may be used to identify 
and remove harmful online content, such as disinformation 
campaigns, ``deepfakes,'' counterfeit products, and other 
deceptive and fraudulent content that is intended to scam or do 
harm. Further, my bill would require the FTC to submit a 
subsequent report to Congress with recommendations on how to 
implement solutions with AI to address those issues and 
recommendations for potential legislation. Throughout the 
coronavirus pandemic, we have tapped into America's innovative 
potential to solve many of our new problems, and I hope the 
Countering Online Harms Act will build on this innovation to 
help protect American consumers as more and more of our lives 
are conducted online.
    Thank you to all of the witnesses for your participation 
today, and I look forward to hearing your testimony.
    Mr. Chairman, I'd like to submit for the record Ranking 
Member McMorris Rodgers' opening statement.
    Thank you again and I yield back.
    [The information appears at the conclusion of the hearing.]
    Mr. Doyle. The gentleman yields back. Mr. Pallone has 
yielded his time equally between Mr. Butterfield and Ms. Blunt 
Rochester.
    So, Mr. Butterfield, you can start for the 2\1/2\ minutes, 
and then yield to Ms. Blunt Rochester.
    Mr. Butterfield. Thank you so much, Mr. Chairman, for 
convening this hearing today on the role that social media and 
other online platforms play in spreading disinformation.
    Mr. Chairman, the ability for virtually anyone to post 
thoughts and pictures and videos to social media has shed light 
on many of the systemic injustices and disparities that exist in 
our country and around the world.
    However, we have also witnessed those same platforms used 
by domestic and foreign actors to undermine our democracy 
through disinformation campaigns, making for the easy spread of 
false narratives that undermine the public's trust in 
institutions like the press and our governments.
    A disturbing pattern, Mr. Chairman, has emerged online, 
revealing that African Americans and other racial minorities 
are consistently targeted by those seeking to promote 
disinformation.
    It is now well-established that in 2016 foreign actors 
targeted the African-American community by way of social media 
in efforts to keep African Americans from voting in the 
presidential election. That is a fact.
    More recently, mass protests following the death of George 
Floyd have often been wrongfully categorized on social media as 
violent by those seeking to undermine their purpose.
    Further, in the midst of a pandemic that disproportionately 
impacts communities of color, falsehoods have been spread from 
all--from our own president about the virus's treatment and 
testing and origins, deepening already existing divides and 
putting the public's health at considerable risk.
    Such attempts at disenfranchisement and deception have no 
place, no place, in a country where so many have fought 
bitterly and at such great cost to ensure that every American 
voice is heard at the ballot box and in the public square, 
which has increasingly moved online.
    In order to achieve meaningful progress in the fight 
against disinformation online, it will take the full 
cooperation of policymakers, industry stakeholders, and 
regulators to achieve our goal of an equitable online landscape 
that fosters healthy discourse while also promoting and 
protecting the civil rights of all users.
    That is what Ms. Schakowsky was talking about a few minutes 
ago, and I want to completely associate myself with her words.
    At this time, Mr. Chairman, as you mentioned in the outset, 
I will yield the balance of my time for my friend from the 
state of Delaware, Congresswoman Lisa Blunt Rochester.
    Ms. Blunt Rochester. Thank you, Mr. Butterfield, for 
yielding.
    Last October, the Energy and Commerce Committee considered 
whether social media companies have done enough to control hate 
speech, voter suppression activities, and blatantly false 
information on their platforms.
    Less than a year later, we are faced with a pandemic, 
record level unemployment, and Americans across the country 
demanding real action now on police violence and racial 
inequality.
    Yet, social media companies have failed to prevent white 
nationalists, scammers, and other opportunists from using their 
platforms to exacerbate these crises.
    Notably, the largest platform, Facebook, stands out as the 
most irresponsible platform. 2020 is a defining year for our 
democracy. Facebook and the other platforms have a 
responsibility to the country to get their act together and to 
be a part of the solution and not the problem.
    Thank you, and I yield back.
    [Pause.]
    Ms. Blunt Rochester. Mr. Chairman?
    Mr. Farid. Mr. Chairman, you're muted. We can't hear you.
    Mr. Doyle. I am sorry.
    At this time, the Chair will recognize Mrs. Brooks, who is 
being yielded Mr. Walden's time.
    Mrs. Brooks, you are recognized for 5 minutes.
    Mrs. Brooks. Thank you, Mr. Chairman.
    Ranking Member Walden is at Rules Committee so I have been 
asked to read his statement.
    Thank you, Mr. Chairman. I welcome and thank all our 
witnesses for joining us today to discuss online 
misinformation.
    The internet is both a tool for good and evil. It allows 
Americans to work and learn from home, gives us unlimited 
access to information, helps connect us to our loved ones, and 
strengthens our economy.
    The United States is a global leader in innovation and home 
to the most advanced technology companies in the world. The 
internet has also empowered bad actors to promote online scams, 
post harmful and offensive content, and globally disseminate 
disinformation for free.
    Often, social media posts have become a cancer on civility, 
literally destroying reputations and lives with one click. It 
is revolting to see what some people post online, something I 
can tell you from personal experience in this public position.
    But we all know it is hard to regulate speech, especially 
in a democracy and with protections we are afforded under the 
First Amendment.
    We also know there are boundaries and limits. But over the 
course of our history, we have never had so much power to 
regulate speech concentrated in so few in the private sector 
and with the broad immunity protection they have under Section 
230.
    As we battle COVID-19, access to factual information is 
more important now than ever. However, we still see 
misinformation spread on platforms.
    I know the Trump administration has aggressively gone after 
bad actors. But as soon as you take down one cyber profile, 
another one pops up. It is a global battle.
    We are in the midst of a national fight for equality and 
justice. At the same time, we see bigots post unacceptable, 
racist, and offensive comments online. These comments have no 
place in our society.
    Congress expects internet companies to monitor their 
platforms and take down false, misleading, and harmful content. 
That is why Congress enacted Section 230 of the Communications 
Decency Act, which provides liability protection to companies 
that take down content on their platforms.
    Last fall, this committee held a hearing to reexamine 
Section 230. I said then and will say again, many concerns can 
be addressed if these companies simply do what they say they 
will do: enforce their terms of service.
    However, recent actions taken by these companies trouble 
me. Twitter recently enacted new policies that seemingly target 
President Trump. Meanwhile, tweets that actually advocate 
violence are not flagged. Questions remain about who makes 
these decisions.
    Google took action against the Federalist for allegedly 
violating Google's ad policy in comment sections, not for the 
content of its articles, as NBC initially claimed.
    Significant questions persist as to whether Google followed 
their procedures and notified the Federalist directly. 
Moreover, why was this publication targeted and not others?
    I think I can speak for everyone on this committee when I 
say we do not support harmful or racist rhetoric or 
disinformation online. We expect these companies to do their 
best to flag or remove offensive and misleading content.
    But we also expect these immensely powerful platforms to 
follow their own processes for notifying users when they have 
potentially violated those policies and to enforce policies 
equitably. But that does not appear to have happened of late.
    That is why I prepared legislation that will mandate more 
transparency from online platforms about their content 
practices. This would require these companies to file reports 
with the FTC so it is clear whether they are complying with 
their own terms of service and to bring transparency to their 
appeal process.
    I hope this can be bipartisan legislation. This is a 
straightforward bill that only impacts companies with revenues 
over a billion dollars. So I hardly think it will crash the 
internet.
    I realize, given a mix of human review and artificial 
intelligence, these platforms are not always going to get it 
right. But they absolutely must be more transparent. The power 
to regulate speech in America is cloaked more and more in 
secret algorithms and centralized in the hands of a powerful 
few in the private sector. We have never needed transparency 
and accountability more. Freedom-loving Americans have far too 
much at stake for us to let internet companies go unchecked.
    Thank you, and I yield back.
    Mr. Doyle. The gentle lady yields back, and I want to thank 
her.
    I now want to introduce our witnesses for today's hearing.
    Ms. Brandi Collins-Dexter, senior campaign director at 
Color of Change; Dr. Hany Farid, professor, University of 
California Berkeley; Mr. Neil Fried, former chief counsel for 
communications and technology on the Energy and Commerce 
Committee and principal at DigitalFrontiers Advocacy; and Mr. 
Spencer Overton, President of the Joint Center for Political 
and Economic Studies, and professor of law at George Washington 
University.
    We want to thank all of our witnesses for joining us today. 
We look forward to your testimony.
    At this time, the Chair will recognize each witness for 5 
minutes to provide their opening statement, and Ms. Collins-
Dexter, you are now recognized for 5 minutes.
    And if you unmute.
    [Pause.]
    Mr. Doyle. Ms. Collins-Dexter?
    Ms. Collins-Dexter. Hello.
    Mr. Doyle. You are recognized for 5 minutes.

STATEMENTS OF BRANDI COLLINS-DEXTER, SENIOR CAMPAIGN DIRECTOR, 
 COLOR OF CHANGE; HANY FARID, PH.D., PROFESSOR, UNIVERSITY OF 
  CALIFORNIA, BERKELEY; NEIL FRIED, FORMER CHIEF COUNSEL FOR 
 COMMUNICATIONS AND TECHNOLOGY, ENERGY AND COMMERCE COMMITTEE, 
    PRINCIPAL, DIGITAL FRONTIERS ADVOCACY; SPENCER OVERTON, 
  PRESIDENT, JOINT CENTER FOR POLITICAL AND ECONOMIC STUDIES, 
         PROFESSOR OF LAW, GEORGE WASHINGTON UNIVERSITY

               STATEMENT OF BRANDI COLLINS-DEXTER

    Ms. Collins-Dexter. Thank you.
    Good day, Chairman Pallone, Ranking Member Walden, Chairman 
Doyle, Ranking Member Latta, Chair Schakowsky, Ranking Member 
McMorris Rodgers, and members of the subcommittee.
    I am Brandi Collins-Dexter, senior campaign director at 
Color of Change and a visiting fellow at Harvard Kennedy 
Shorenstein Center, working on documenting racialized 
disinformation campaigns.
    For black communities, uncertainty is driven by distrust of 
mainstream media and a history of trauma from interactions with 
powerful institutions ranging from medicine to law enforcement 
to federal, state, and local governments.
    Many of us have turned to social media as our church, our 
office water cooler, and our political home. But unlike a 
physical space like a church or office, online you often don't 
know who is standing next to you, who is giving the sermon, or 
how your data and information may be weaponized against you.
    While many corporate actors claim they are protecting free 
speech, this is an illusion. Every day companies make a choice 
about what's allowed and what's not.
    When companies say they are not willing to remove certain 
things, what they are really saying is that addressing white 
nationalism, disinformation, and anti-blackness simply don't 
rise to a level of urgency for them.
    Tech companies have routinely failed to uphold societal 
values like transparency, accountability, and fairness. We have 
seen misinformation about COVID-19 that endangers black lives.
    Back in February, Color of Change alerted Twitter to COVID-
19 misinformation that was spreading in the black community. 
The company only revised standards to address the dangers of 
misinformation after increased pressure and evidence gathered 
by Color of Change and other groups.
    Other tech companies have been slower to respond. A 
``Plandemic'' video on YouTube suggesting that the pandemic is a 
false flag to force mandatory vaccines and microchips had 4.3 
million views on YouTube and 930,000 engagements on Facebook.
    Every week, I sit on Zoom with my mom while she recounts 
various people in our family and friend network who have passed 
from COVID-related issues. So I feel acutely the danger from 
these types of lies.
    At Color of Change, we have collected hundreds and hundreds 
of complaints from our members about censorship, harassment, 
and vile racial threats that they have received on Facebook.
    On the platform, we often see conspiracy theories coupled 
with threats and calls to violence. The most popular of those 
conspiracy theories are those involving anti-Semitic tropes 
about George Soros and black activist groups.
    The idea that black people are puppets has been played up 
by white supremacists like David Duke to undermine the 
credibility and impact of black organizations, but more--
beyond credibility, it puts our lives in physical danger.
    Members of Congress, please move quickly to fix our 
democracy before it is irretrievably broken. I urge you to 
convene a series of civil rights-focused hearings with high-
level executives from all major companies with a particular 
focus on those trafficking in disinformation.
    Restore funding for the Office of Technology Assessment in 
order to help Congress tackle issues such as data privacy, tech 
election protection, and set up infrastructure that can 
facilitate deeper investment in U.S. space innovation and 
entrepreneurship to combat disinformation and other data-
hostile practices.
    Ensure that regulators have every power at their disposal 
to ensure the safety of consumers and users on tech platforms. 
We support a consumer watchdog agency that is resourced to 
ensure we are all able to have control and protection of our 
data and that there is a competitive digital marketplace.
    And finally, Congress should affirmatively empower and 
resource the Federal Trade Commission to enforce antitrust laws 
against technology oligarchs.
    The sheer amount of data and information amassed by tech 
companies, the inability of companies like Facebook and Google 
to be regulated at scale, and mistakes online, in the voting 
booth, and on our streets require actionable steps towards 
breaking up companies.
    Congress is charged with making decisions, policies, and 
laws that make real our joint aspiration for a more perfect 
union that establishes justice, ensures domestic tranquility, 
provides for the common defense, and promotes the general 
welfare so that the blessings of liberty can ring true for all 
of us.
    This cannot happen when democracy is corrupted. 
Uncontrolled tech companies pose significant threats to 
democracy and freedom in the U.S. and around the world.
    We must move with collective urgency to ensure that our 
data and physical bodies are protected on and offline.
    Thank you so much for your time.
    [The prepared statement of Ms. Collins-Dexter follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. Thank you for your testimony.
    The Chair now recognizes Dr. Farid. You are recognized for 
5 minutes.

                    STATEMENT OF HANY FARID

    Mr. Farid. Chairs, Ranking Members, and members of both 
subcommittees. Thanks for the opportunity to speak with you 
today on these incredibly important issues.
    Technology and the internet have had a remarkable impact on 
our lives and society. Many educational, entertaining, and 
inspiring things have emerged in the past two decades of 
innovation.
    But at the same time, many horrific things have emerged. A 
massive proliferation of child sexual abuse material. The 
spread and radicalization of domestic and international 
terrorists.
    The distribution of illegal and deadly drugs. The 
proliferation of mis- and disinformation campaigns designed to 
sow civil unrest, incite violence, and disrupt democratic 
elections.
    The proliferation of dangerous, hateful, and deadly 
conspiracy theories. The routine harassment of women and 
underrepresented groups in the forms of threats of sexual 
violence and revenge and non-consensual pornography, small- to 
large-scale fraud, and spectacular failures to protect personal 
and sensitive data.
    How, in 20 years, did we go from the promise of the 
internet to democratize access to knowledge and make the world 
more understanding and enlightened to this litany of daily 
horrors? Due to a combination of naivete, ideology, willful 
ignorance, and a mentality of growth at all costs, the titans 
of tech have simply failed to install proper safeguards on 
their services.
    We can and we must do better when it comes to contending 
with some of the most violent, harmful, dangerous, hateful, and 
fraudulent content online.
    We can and we must do better when it comes to contending 
with the misinformation apocalypse that has emerged over the 
past few years.
    The COVID global pandemic, for example, has been an ideal 
breeding ground for online misinformation. Social media traffic 
has reached an all-time record as people are forced to remain 
at home, often idle, anxious, and hungry for information.
    The resulting spike in COVID-related misinformation is of 
grave concern to health professionals. The World Health 
Organization, for example, has listed this ``infodemic'' among 
its top priorities in containing the pandemic.
    Over the past few months, we have measured a troublingly 
wide-reaching belief in COVID-related misinformation that is 
highly partisan and is more prevalent among those who consume 
news primarily on social media.
    We find that the amount of misinformation believed by those 
with social media as their main source of news is 1.4 times 
greater than others, and the amount of misinformation believed 
by those on the right of the political spectrum is two times 
greater than those on the left.
    Even prior to the current pandemic, the FBI announced last 
year that fringe conspiracy theories are a domestic terrorist 
threat due to the increasing number of violent incidents 
motivated by such beliefs.
    At the same time, YouTube continues to knowingly and 
actively promote fringe and dangerous conspiracies. At its peak 
in late 2018, we measured that almost ten percent of 
recommended videos on YouTube's informational and news channels 
were conspiratorial in nature.
    Because 70 percent of all watched videos on YouTube are 
recommended by YouTube, their recommendation algorithm is 
responsible for the spread of conspiracies and misinformation.
    Now, Facebook's Mark Zuckerberg has tried to frame the 
issue of reining in mis- and disinformation as not wanting to 
be the arbiter of truth. This entirely misses the point.
The point is not only about truth or falsehood but about 
algorithmic amplification. The point is that social media 
decides what is relevant by recommending it every day to their 
billions of users.
The point is that social media has learned that outrageous, 
divisive, conspiratorial content increases engagement. The 
point is that online content providers could simply decide that 
they value trusted information over untrusted information, 
respectful over hateful, and unifying over divisive and, in 
turn, fundamentally change the divisiveness-fueling and 
misinformation-distributing machine that is social media today.
If advertisers, who are the fuel behind social media, took a 
stand against online abuses, they could withhold their 
advertising dollars to insist on real change.
    Standing in the way of this much-needed change is a lack of 
corporate leadership, a lack of competition, a lack of 
regulatory oversight, and a lack of education among the general 
public.
    Responsibility, therefore, to regain civility and trust 
online falls on the private sector, government regulators, and 
we, the general public.
    Thank you, and I look forward to taking your questions.
    [The prepared statement of Mr. Farid follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. I thank the gentleman for his testimony.
    The Chair now recognizes Neil Fried. Neil, welcome back to 
the committee. Always good to see one of our own back for a 
visit.
    You are now recognized for 5 minutes.

                    STATEMENT OF NEIL FRIED

    Mr. Fried. Thank you, Mr. Chairman.
    Chairman Pallone, Ranking Member Walden, Chairman Doyle, 
Ranking Member Latta, Chair Schakowsky, Ranking Member McMorris 
Rodgers, and members of the committee, thank you for inviting 
me to testify.
    After ten years as communications and technology counsel to 
this committee, it is an honor to be on this side of the 
witness table, albeit virtually.
    I have been involved in Section 230 debates for a while 
now, since 1999, and welcome the opportunity to share my views. 
Those views are my own. I have no client on Section 230 
matters.
    I come not to bury Section 230 but to improve it. I 
recommend restoring a duty of care online by requiring 
platforms to take reasonable good-faith steps to prevent 
illicit use of their services as a condition for receiving 
Section 230 protection.
    This would better protect users as well as address 
competition concerns and it would do so without regulating the 
internet, without taking away the platforms' content moderation 
safe harbor, and without raising government censorship issues.
    Growing frustration with the internet stems in large part 
from the lack of accountability of platforms as well as online 
intermediaries such as domain name providers and reverse proxy 
services.
    Increased transparency would help, as would legislation 
restoring access to the Whois information needed to catch 
illicit actors. Fully realizing the internet we all aspire to, 
however, will ultimately require recalibrating Section 230.
    So long as platforms can facilitate illicit activity with 
impunity, we are fighting a losing battle. Despite claims that 
Section 230 encourages content moderation, it actually does the 
opposite. Congress gave platforms a content liability shield so 
they would wield a content moderation sword.
    Although Section 230(c)(2) creates a safe harbor for 
content moderation, Section 230(c)(1) eliminates liability even 
if the platforms don't moderate content. In other words, they 
reap the benefits of the shield even when they drop the sword.
    Thus, while Section 230(c) is called the Good Samaritan 
provision, it still protects platforms when they behave like 
Bad Samaritans, profiting from advertising around unlawful 
behavior while sitting on their hands without legal 
consequence.
    This is aggravating illicit activity online, everything 
from fraud to the spread of child pornography. Ordinarily, a 
business has a duty of care to prevent people from using its 
services to harm others.
    Section 230, however, eliminates this duty, even when the 
platforms negligently, recklessly, or willfully disregard 
illicit activity. This puts the internet users in harm's way 
and often leaves victims without a remedy in light of the 
anonymous nature of the internet.
    The platforms say they are taking responsible steps to curb 
illicit activity. That may be true in some cases. But why 
should their judgment be beyond traditional scrutiny?
    Section 230 also affects competition by letting platforms 
avoid the ordinary business costs of preventing harm. This 
gives them an unfair advantage over their competitors.
They can grow more recklessly in both scale and scope, which 
also gives them market power to negotiate aggressive terms in 
their favor. They can generate profit from advertising around 
illicit activity, and they can ignore harms that their users 
cause their competitors.
    One way to preserve the benefits of Section 230 while 
fixing its harms would be to restore a duty of care. This could 
be achieved by requiring platforms to take reasonable good-
faith steps to curb illicit activity as a condition of 
receiving protection.
    Doing so would mean platforms do not enjoy protection when 
they negligently, recklessly, or knowingly facilitate illicit 
activity. Such a solution also avoids harms that critics 
attribute to Section 230 reform.
    First, it preserves the content moderation safe harbor the 
platforms say they need to continue carrying user-generated 
content.
    Second, it requires no new regulation of the internet. 
Platforms would still have discretion over their business 
models on the front end but would appropriately be held 
accountable on the back end if they used that discretion 
poorly.
    That potential back end accountability would prompt 
responsibility by design.
    Third, it doesn't rely on government-determined content 
rules, avoiding First Amendment claims.
    Fourth, any evaluation of reasonableness will factor in the 
resources available to a platform, ensuring smaller platforms 
are not unreasonably burdened as they try to grow.
    In the meantime, the U.S. should refrain from including 
Section 230 type language in trade deals. To do otherwise would 
export the harms we are experiencing here to foreign citizens 
and to U.S. companies abroad, and because the internet is 
global, lax standards abroad also harm U.S. citizens and 
businesses here.
    I thank the committee again for providing me the 
opportunity to appear today and welcome any questions.
    [The prepared statement of Mr. Fried follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. Thank you, Mr. Fried.
    The Chair now recognizes Mr. Overton for 5 minutes.

                  STATEMENT OF SPENCER OVERTON

    Mr. Overton. Thank you very much.
    Chairs, Ranking Members, and members of the committee, I 
thank you for inviting me to testify.
    My name is Spencer Overton. I am the president of the Joint 
Center for Political and Economic Studies, which was founded in 
1970 and is America's black think tank.
    I am also a tenured law professor at GW, specializing in 
voting rights, and I have recently published academic research 
on voter suppression through social media.
    Disinformation on social media presents a real danger to 
democracy. Both domestic and foreign actors use disinformation 
to divide Americans along racial lines. They use data and 
psychology to play on people's deepest fears and create an us 
versus them discourse.
    According to a recent Gallup Knight Foundation survey, 81 
percent of Americans believe that social media companies should 
never allow intentionally misleading information on elections 
and political issues. Section 230 clearly gives social media 
companies authority to remove disinformation and they should 
use that authority to do a better job at stopping 
disinformation.
    So some social media companies will say they don't remove 
disinformation because they want to protect speech or be 
viewpoint neutral. But the harms that result are not neutral 
for communities of color.
    For example, in 2016, you remember several Facebook, 
Instagram, Twitter, and YouTube accounts looked like they were 
African American operated but in fact they were operated by the 
Russian Internet Research Agency.
    At first, the fake accounts built trust by showcasing black 
achievements. Later, they started posting on police violence 
and other structural inequalities. Then, near Election Day, 
after they had built a large following, the fake accounts urged 
black voters to protest by boycotting the election and not 
voting.
    Now, we don't know how many black voters stayed home 
because of this disinformation. But we do know that 2016 marked 
the most significant decline in black voter turnout on record.
    Even though the Russians infiltrated different groups, you 
know, a variety of groups--conservative, liberal, Second 
Amendment, LGBT, Latino, policing, Muslim American groups--even 
though they did all that, this harm was not neutral for black 
communities.
    For example, while black people make up just 13 percent of 
the U.S. population, black audiences accounted for over 38 
percent of the Facebook ads purchased by the Russians and 
almost half of the user clicks.
    Also, the Russian scheme discouraged voting among African 
Americans, right, but not those other groups. It is not neutral 
for our nation's most valuable companies to profit off of 
discrimination against historically marginalized communities.
    Now, recently, President Trump signed an executive order 
that attempted to increase the legal liability of social media 
companies that moderated objectionable content posted by 
President Trump and his followers.
    This type of retaliation discourages social media companies 
from stopping disinformation and allows for more disinformation 
that divides Americans.
Although President Trump's executive order is problematic, 
right, the status quo just clearly is not working.
types of disinformation and voter suppression schemes we saw in 
2016 are continuing in 2020.
    Facebook has even argued that federal civil rights laws 
don't apply to Facebook. Even in the aftermath of the killing 
of George Floyd, there exists a real question about whether 
social media companies will address their own systemic 
shortcomings and fully embrace civil rights principles.
    I hope that civil right--that social media companies will 
fully adopt these civil rights principles and use their 
existing legal authority to prevent disinformation and voter 
suppression.
    If legal reforms are needed, however, these debates should 
occur in Congress and should include the voices of communities 
of color who have been disproportionately harmed by targeted 
voter suppression and other disinformation campaigns.
    Thank you, and I look forward to our discussion today.
    [The prepared statement of Mr. Overton follows:]
   [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Doyle. Thank you very much. I want to thank all the 
panelists for their testimony, and I will note for the record 
that they all were able to do it under 5 minutes.
    So we, on the committee, will endeavor to follow their good 
example and not take any longer than 5 minutes for our 
questions. So we are going to move on to member questions.
    Each member will have 5 minutes to ask questions of our 
witnesses and I will start by recognizing myself for 5 minutes.
    Ms. Collins-Dexter, in your testimony you talk about the 
dangers of tech companies' reluctance to regulate 
disinformation from prominent figures.
    Recently, Facebook CEO Mark Zuckerberg defended his 
decision not to moderate inaccurate statements made by 
President Trump regarding voting by mail and his glorification 
of violence when he said people protesting the murder of George 
Floyd, when the looting starts, the shooting starts.
    Those words have a long history of abetting bigotry and 
police brutality in this country. Just yesterday, the president 
threatened protestors in Washington, DC with violence.
    What are online platforms culpable for when they allow this 
kind of content to be posted and shared by their most prominent 
users, and what do you think the President's intent was when he 
signed the executive order to prevent online censorship, as the 
White House described it?
    Ms. Collins-Dexter. Thank you, Congressman.
    There is so much at stake with people's lives when 
disinformation is allowed to rule the day and, particularly, it 
doesn't matter when tech companies enforce the rules with 
people that have 10 or 20 or 100 followers, if the people that 
have thousands and millions of followers are allowed to peddle 
disinformation.
    And so in terms of vote by mail, we know for a fact that, 
you know, 80 percent of Americans support vote by mail. We have 
seen swells in voter turnout in places like Baltimore, where I 
live.
    It has actually no impact on partisan turnout. I know that 
is not important to anyone here but it is actually important 
for everybody in our democracy to be able to engage in the 
business of voting.
And so when all of these allegations that vote by mail is a 
fraud are left up, it turns people off. It hinders our 
ability to have, like, safe voting conditions in November when 
we are still dealing with COVID, and it really does do a 
disservice to, I think, the work that Congress has invested in 
in ensuring that everybody can engage in our democracy.
In terms of the threats, we personally have dealt with an 
increased number of threats to our lives as individuals and as 
an organization. I think MoveOn has actually found that in--
after monitoring 25,000 comments in certain right-wing groups 
in particular there were 207 calls to violence and murder that 
were actually identified, which I can share with folks if 
they're interested afterwards.
    But, like, we see how things that are said online have a 
deep impact on our safety offline. As far as what the 
President--you know, his thinking, I--you know, I would hate to 
speculate what anybody thinks internally, particularly----
    Mr. Doyle. I understand, and you don't need to do that.
    I do want to ask Mr. Overton a question before my time is 
up.
Mr. Overton, your recent law review article, entitled 
``State Power to Regulate Social Media Companies,'' addresses 
voter suppression in minority communities using targeted ads on 
Facebook and other platforms. We saw in our hearings the 
devastating effect of these efforts in 2016, and I have grave 
concerns about the 2020 election.
    You argued that the steps taken by online platforms to 
enable and tailor the targeting of affected classes such as 
black Americans with paid advertisements and promoted posts 
constitutes a material contribution to the distribution of 
these ads and should make these online platforms liable under 
state voter suppression laws.
    Does Congress need to clarify Section 230 to make it clear 
that platforms that enable these kinds of civil rights 
violations are liable not only under federal law but under 
state law as well?
    Mr. Overton. Thank you so much, and you are absolutely 
right. We are not necessarily talking about the speech of the 
third parties, as you point out. We are talking about the 
platforms themselves. They are materially participating by 
using their algorithms to target communities of color.
    So an employment ad goes to white folks but not to black 
folks. Voter suppression ads are targeted at black voters and 
not other people, and that is materially participating in the 
discrimination, which is not what--which is not immunity that 
230 covers.
    So courts have not explicitly talked about that and if 
Congress opens up 230, certainly, it should make it explicitly 
clear that this type of behavior is not protected by 230.
    Unfortunately, Facebook has argued that it is protected and 
that they should be able to target ads away from black 
communities and employment opportunities to just white folks.
    Mr. Doyle. Thank you, and I see my time has expired right 
on the button.
    I will now recognize my good friend, Mr. Latta, for his 5 
minutes.
    You need to unmute, Bob. There you go.
    Mr. Latta. Well, thanks, Mr. Chairman, and I want to thank 
you again for holding today's hearing, and if I could start 
with Mr. Fried.
    This morning I sent a letter to several federal agencies 
requesting information about how those agencies use information 
from the Whois database to combat illegal activity online.
    Due to the ambiguous and overly broad nature of the 
European Union's GDPR, access to Whois information has been 
restricted for many third-party organizations that use this 
information to identify bad actors online.
    Access to Whois is especially important during this 
pandemic as we have seen an increase of online misinformation 
and fraud targeting consumers.
First question, how did access to Whois information, prior to 
the implementation of the GDPR, help in the takedown of illegal 
content?
    Mr. Fried. Thank you, Mr. Latta.
    Two ways. One is fairly obvious. If you found someone 
engaging in illicit activity from a website you could try and 
figure out who holds that website. So it is good in capturing 
who is engaged in illicit activity.
    What many people don't realize it is also used to prevent 
illicit activity because you can track patterns. If you see 
that a lot of illicit activity has occurred from a particular 
website or from certain people who are holding a website, what 
web managers can do is create sort of blacklists and say, we 
know that this actor is doing nefarious things. They are 
engaging in fraud, they are engaging in cyber-attacks, and 
corporate or law enforcement can then proactively prevent those 
entities from creating further havoc.
    So it is both catching criminals and preventing crime.
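    [The pattern-tracking Mr. Fried describes can be sketched in 
code. All registrant names, data, and the report threshold below 
are hypothetical illustrations, not any registry's actual system:]

```python
# Toy sketch: given Whois registrants of domains confirmed to host
# illicit activity, build a blocklist and screen new registrations.
from collections import Counter

ABUSE_THRESHOLD = 3  # assumed cutoff for illustration only


def build_blocklist(abuse_reports):
    """abuse_reports: registrant names taken from Whois lookups on
    domains already tied to fraud or cyber-attacks."""
    counts = Counter(abuse_reports)
    return {who for who, n in counts.items() if n >= ABUSE_THRESHOLD}


def screen_registration(registrant, blocklist):
    """Flag a new registration for review if its Whois registrant
    already has a pattern of abuse; otherwise allow it."""
    return "flag for review" if registrant in blocklist else "allow"


# Hypothetical report log: "EvilCo" has a repeat pattern, "Acme" does not.
reports = ["EvilCo", "EvilCo", "EvilCo", "Acme", "EvilCo", "Acme"]
blocklist = build_blocklist(reports)
print(screen_registration("EvilCo", blocklist))  # flag for review
print(screen_registration("Acme", blocklist))    # allow
```

    [This is the proactive half of his answer: the same data that 
identifies a past offender lets providers intercept the next 
registration before harm occurs.]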
    Mr. Latta. Well, and also, just to follow up then: does 
withholding that access to the Whois information from certain 
groups reduce the action the domain name providers are able to 
take against that illegal content?
    Mr. Fried. Yes. And so this is also very important. So, 
obviously, law enforcement is critical here. But law 
enforcement only has so many resources.
    There is a very large community of cyber experts that track 
illicit behavior and they can often flag illicit activity both 
that has already occurred, or as we just discussed, that may be 
happening soon because they can see certain actors who have 
engaged in illicit activity in the past about to do something 
again, and they can warn public safety law enforcement that 
something is amiss, and you lose that as well.
    So without Whois--at least because of the way the EU's GDPR 
is being overapplied--it really hurts the ability of domain 
name providers to release this information.
    Mr. Latta. Well, and just finally, could you briefly 
summarize the types of societal problems that could be better 
confronted by restoring the access to the Whois information?
    Mr. Fried. Everything we are experiencing now, from fraud 
to illicit sale of drugs to cyber-attacks, any illicit activity 
online often either has a website component or it has IP 
addresses that you can find through the Whois data.
    So any illicit activity, the scourge of misinformation or 
fraud or cyber-attacks, all could be aided--combatting that 
could all be aided if we had better access to Whois data like 
we used to have.
    Mr. Latta. Thank you.
    Mr. Farid, there is much discussion about companies being 
transparent about their terms of service and how they enforce 
their policies.
    But recently, social media companies seem to be creating 
new policies ad hoc to fit their political agenda, arguably, 
making these companies arbiters of speech on their platform.
    Both artificial intelligence and human review are important 
elements to ensure that harmful and illegal content gets taken 
down.
    But how do you address the human bias element to make sure 
that Americans are able to exercise their right of free speech?
    Mr. Farid. I think the bias question is important, 
Congressman, and I think it is important for us to talk about 
it.
    Let me say that there is no compelling evidence that we 
have seen to date that shows that there is a consistent bias. 
You can always take individual cases and show that there is a 
problem here or there. But the consistent disproportionate 
affecting of one group or another, politically, we have not 
seen.
    So I think the answer to your question is we need 
transparency. We need transparency in the rules. We need 
transparency in how they are being enforced. We need better 
reporting. We need more consistency, and we need more 
investment.
    The fact is that the tech companies have not invested in 
the technologies and into the services they need to moderate 
their platforms because, frankly, it is bad for business.
    And so we need for them to put more effort into this and 
for it to be transparent and clear and consistent application 
of the rules, and without, as we have been talking about, real 
reform in 230--not removing it, as you said in your opening 
remark, but real reform--I think that is going to be very 
difficult to achieve.
    Mr. Latta. Well, thank you very much, Mr. Chairman. My time 
is just set to expire and I yield back the balance.
    Mr. Doyle. I thank the gentleman.
    The Chair now recognizes Ms. Schakowsky for 5 minutes.
    You need to unmute, Jan.
    Ms. Schakowsky. OK. Mr. Fried, I just want to point out, 
you said that you hope that 230 and the liability shield would 
not be in trade agreements, and as you know, because I was--and 
I know because I was on the working group, it is in the U.S.-
Canada-Mexico Trade Agreement and I think we need to work in a 
bipartisan way to make sure that we are keeping it out of 
future agreements because it will make it harder for us then to 
moderate 230.
    So I hope we can work together on that.
    As Mark Zuckerberg noted so clearly when he testified 
before Congress, Facebook and other social media platforms make 
money by selling ads. In many of our consumer protection 
hearings, someone uses the now kind of cliche line, if you are 
not paying for the product, you are the product.
    Simply put, the longer you stay on an app, the more money 
the company makes, and what gets and keeps people online, as 
Dr. Farid noted in his testimony, content that is, quote, 
``novel and provoking,'' unquote, such as conspiracy theories 
and snake oil, et cetera, and COVID-19 hoaxes and things about 
protestors--those draw viewers.
    So let me ask you, Dr. Farid, can you discuss why many of 
the big platforms allow amplification of conspiracies and 
disinformation to happen, and how the business model seems to 
be benefiting them.
    Mr. Farid. Thank you, Congresswoman. You said it absolutely 
right, that social media is in the engagement and attention 
business.
    So they profit when we spend more time on the platform. 
They collect more data from us and they deliver ads. They 
didn't set out to fuel misinformation and hate and 
divisiveness. But that is what the algorithms learned.
    So when you do A/B testing--if we show you this, do you 
spend more or less time on the platform--algorithms have 
learned that the hateful, the divisive, the conspiratorial, the 
outrageous, and the novel keep us on the platforms longer, and 
since that is the driving factor for profit, that is what the 
algorithms do.
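    [The A/B dynamic Dr. Farid describes can be sketched as a toy 
simulation. The content variants, engagement numbers, and noise 
are invented for illustration; no platform's actual system is 
shown:]

```python
# Toy sketch: an optimizer that keeps whichever content variant
# yields longer time-on-platform will drift toward provocation.
import random

random.seed(0)  # make the illustration repeatable


def observed_minutes(variant):
    # Hypothetical user response: outrageous content holds
    # attention longer on average than neutral content.
    base = {"neutral": 10.0, "outrageous": 14.0}[variant]
    return base + random.uniform(-2, 2)  # per-user noise


def ab_test(trials=500):
    """Show each variant to many users; keep the winner."""
    totals = {"neutral": 0.0, "outrageous": 0.0}
    for _ in range(trials):
        for v in totals:
            totals[v] += observed_minutes(v)
    # The optimizer cares only about engagement, nothing else.
    return max(totals, key=totals.get)


print(ab_test())  # outrageous
```

    [No one coded "promote outrage"; the outcome falls out of 
optimizing a single engagement metric, which is the point being 
made to the subcommittee.]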
    Now, they could change the algorithms. They could just say, 
look, it is not all about engagement. It is not all about 
profit. It is about a healthier ecosystem, democracy, and 
society, and they could just value something other than what 
they are optimizing for.
    But the core poison here, Congresswoman, which is what you 
are getting at is the business model. The business model is 
that when you keep people on the platform you profit more and 
that is fundamentally at odds with our societal and democratic 
goals.
    Ms. Schakowsky. Thank you. You know, we hear over and over 
again from Big Tech that, well, we are going to fix this. Self-
regulation really works. And I don't think that--I personally 
don't think that is true.
    Congress routinely, routinely regulates commercial activity 
to prevent harmful products from being sold, stop fraud, and 
deter illegal discrimination.
    So when a company is profiting from its decisions to push 
counterfeit products or facilitating housing discrimination, 
they should be held accountable.
    So, Mr. Overton and maybe we could also hear from Ms. 
Collins-Dexter--do I have any time left? Yeah, I do.
    You testified that the status--OK. Let me try again. You 
testified that the status quo for Section 230 is not working to 
protect civil rights.
    Can you expand a little bit on the civil rights aspect?
    Mr. Overton. Yes, thank you.
    So it is not working because in part, the algorithms are--
have a discriminatory impact in effect. Even when they take 
explicit racial groups and targeting out, they are still 
profiting from that in terms of employment.
    When we look in other areas, we see voter suppression that 
continues to exist. Something was just uncovered in terms of a 
group in Ghana and Nigeria targeting black Americans with 
disinformation.
    So we see several examples. It is very unlike COVID. 
Certainly, there was misinformation with COVID, but the thought 
was, hey, there is public health at stake here. We really also 
need to be focused on the health of our democracy, and we need 
platforms to be serious about that.
    Ms. Schakowsky. Thank you. Actually, my time has expired.
    Mr. Doyle. The gentle lady's time has expired.
    Ms. Schakowsky. Yes. So maybe we can talk offline. Thank 
you very much.
    Mr. Doyle. OK. Thank you.
    The Chair now recognizes Mr. Shimkus for 5 minutes.
    [No response.]
    Mr. Doyle. Is Mr. Shimkus here?
    [No response.]
    Mr. Doyle. OK. Let us go to Dr. Burgess. You are recognized 
for 5 minutes.
    Oh, I am sorry. Is the Chair here? Is Mr. Walden here?
    [No response.]
    Mr. Doyle. Is the chairman here? Is Mr. Walden here, Bob?
    [No response.]
    Mr. Doyle. OK. Dr. Burgess, you are recognized for 5 
minutes.
    Ms. O'Connor. Mr. Doyle, both Mr. Walden and Mr. Burgess 
are at Rules Committee at the moment.
    Mr. Doyle. OK. Is Mr. Shimkus present?
    [No response.]
    Mr. Doyle. OK. Are we down to what, Mr. Guthrie?
    Mr. Guthrie. Mr. Guthrie.
    [Laughter.]
    Mr. Doyle. Yes, Mr. Guthrie is recognized for 5 minutes.
    Mr. Guthrie. Thank you, Mr. Chair. I appreciate it very 
much.
    Mr. Doyle. I think we have members on the Rules Committee, 
but.
    Mr. Guthrie. Yes. Yes, they were. I am standing by. Thank 
you very much.
    Dr. Farid, my first question is for you. Welcome back to 
the committee. We all--we like hearing from you. Enjoy our 
discussions.
    We often hear that there are difficult judgment calls on 
content moderation. Do you believe these large social media 
companies currently possess the technological means to better 
moderate illicit content on their platforms? And if they do, 
why aren't they using it?
    Mr. Farid. Thank you, Congressman. Good to see you again 
and good to be back here.
    I don't actually think they have very good technology. It 
is not that the technology can't be developed. It is just they 
haven't developed it. They haven't prioritized it. I will give 
you a couple of examples.
    On Facebook and on YouTube, you are not allowed to post 
adult legal pornography. Perfectly protected speech, by the 
way, and nobody gives Facebook and YouTube a hard time for 
eliminating that content, which they do, by the way, very 
effectively because it is bad for business. Advertisers don't 
want to advertise against that content to spread their 
information.
    When the DMCA was passed, we got very good at removing 
copyright infringement because the law insisted on it. So when 
there has been an insistence to remove content or that it was 
important for businesses, the companies have actually gotten 
very good. They simply haven't prioritized misinformation, 
disinformation.
    And I would also point out that it is not always entirely 
about either removing the content or not. It is also about the 
amplification.
    So what they could choose to do, even if they don't have 
the ability to detect fake news, misinformation, 
disinformation, is they could reprioritize the algorithms so 
that trusted information is brought above untrusted 
information.
    So you can think about the problem in two ways. It is not 
necessarily about detecting fake information; it could also be 
about detecting trustworthy information or civil discourse.
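    [The reprioritization Dr. Farid describes--surfacing trusted 
information above untrusted information without removing 
anything--can be sketched as a reranking step. The feed items, 
scores, and weighting are invented for illustration:]

```python
# Toy sketch: rerank a feed by a blend of source trust and
# engagement instead of engagement alone.
def rerank(items, trust_weight=0.7):
    """Score each item as trust_weight * trust plus
    (1 - trust_weight) * engagement, so trusted content rises
    without any content being deleted."""
    return sorted(
        items,
        key=lambda it: trust_weight * it["trust"]
        + (1 - trust_weight) * it["engagement"],
        reverse=True,
    )


# Hypothetical feed: the conspiracy wins on raw engagement,
# but the trusted report wins once trust is weighted in.
feed = [
    {"title": "viral conspiracy", "engagement": 0.9, "trust": 0.1},
    {"title": "local news report", "engagement": 0.4, "trust": 0.9},
]
print([it["title"] for it in rerank(feed)])
# ['local news report', 'viral conspiracy']
```

    [Nothing is censored in this sketch; only the amplification 
order changes, which is the distinction the witness draws from 
the ``arbiter of truth'' framing.]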
    And so it is simply not a priority for them, despite the 
claims--and, by the way, they made the same claims about 
copyright infringement. But as soon as the laws were passed, 
well, they got really good at it.
    And it was the same case with child sexual abuse material: 
when the public pressure escalated enough, they eventually 
started removing the content after years and years of denying 
that it was possible.
    Mr. Guthrie. Well, that is very helpful. Thank you very 
much.
    And also a question to you, again. I appreciate your 
collaboration with Microsoft to develop PhotoDNA over a decade 
ago. When you work with these companies, what are the 
benchmarks you would advise these companies to meet, such as 
investment in personnel?
    Mr. Farid. Yes, it is a great question, Congressman.
    So, you know, one of the hardest things with these 
companies is there is a lack of transparency, and so we don't 
know how much child sexual abuse material, how much terrorism 
material, how much illegal drugs and misinformation goes 
through their networks.
    So the first thing is to really get good at reporting and 
understanding the flow of disinformation through your services 
so that as we deploy technologies we can do better.
     So here is what I can tell you: at all of the major companies, dealing with these things is not a priority. Whether it is human moderators, research, or technology deployment, it is simply not being prioritized.
     Now, the thing you will always hear is, well, we do this, we do this, we do this, and we do this. That is fine, but what they are not telling you is what they are not doing.
     So when I make the call for transparency on the flood of disinformation and harmful content, it is critical that we reprioritize the priorities of these companies so that we start dealing with the harmful content at least as effectively as they deal with making money.
    Mr. Guthrie. OK. Thank you. And I have about a minute left. 
So, Mr. Fried, I would like to ask you this. I appreciate your 
answers, Dr. Farid.
    Mr. Fried, where do you see emerging technologies such as 
artificial intelligence being used to help combat 
disinformation, particularly during situations like the current 
pandemic? Do you believe AI can be used to identify and remove 
illicit content from platforms such as disinformation campaigns 
and counterfeit products?
    I have about a minute for you to respond.
    Mr. Fried. I certainly would not rule out any technological 
tool that can help. I would caution, depending on what kind of 
artificial intelligence you are talking about, like algorithms, 
right, these also can prompt--some of them got us in the mess 
we are in now.
    So, absolutely, we should look at all the options. But the 
(audio interference) the concerns as well as (audio 
interference)
     Mr. Guthrie. OK. And I will just follow up then. So, Mr. Fried, do you believe that Section 230 of the Communications Decency Act creates a disincentive for platforms to implement artificial intelligence and other emerging technologies to address this?
    Mr. Fried. So Professor Farid hit on this already, which is 
that the law could increase the incentive to solve problems, 
and right now, because the liability protection applies even if 
they do nothing, there is less legal incentive for them to 
solve that problem.
    Mr. Guthrie. So there is not a disincentive, just no 
incentive to do so?
    Mr. Fried. There is less of a legal incentive. That is 
correct.
    Mr. Guthrie. Thank you very much. I have about ten seconds 
left. Appreciate your answers. Appreciate you being here, and I 
yield back.
    Mr. Doyle. The gentleman yields back.
    The Chair now recognizes Mr. Rush for 5 minutes.
    Mr. Rush, you are recognized.
    Bobby, you need to unmute.
    [No response.]
    Mr. Doyle. Mr. Rush, if you can hear me, you need to unmute 
your microphone, and your video is off, too. So you may want to 
check that also.
    [No response.]
    Mr. Doyle. OK. I think we are going to go to Mr. 
Butterfield. Mr. Butterfield, we will recognize you for 5 
minutes and we will come back to Mr. Rush when he gets back 
online.
    Mr. Butterfield?
    [Laughter.]
    Mr. Doyle. Isn't technology wonderful?
    Mr. Butterfield?
    OK. Mr. Rush, can you hear me?
    Mr. Rush. I certainly can now. I can hear you and I can see 
you, Mr. Chairman.
    Mr. Doyle. OK. You are now recognized for 5 minutes, my 
friend.
    Mr. Rush. Well, thank you very much, Mr. Chairman, and I am 
delighted to participate in this hearing, and I want to welcome 
all of our witnesses.
     Ms. Collins-Dexter, I have introduced the COVID-19 Testing, Reaching, And Contacting Everyone Act, known as the TRACE Act, which is meant to provide contact tracing and testing in the face of this pandemic.
    We have almost 70 co-sponsors, and this bill has been 
subject to an incredible amount of disinformation and 
distortions and, frankly, downright lies, all on social media 
platforms.
    This occurred primarily due to the bill's number H.R. 6666 
and the bill is facing distortions which focus on the African-
American community.
     When my staff engaged tech companies to prevent the spread of this disinformation, they were told that the posts represent people expressing their opinions on legislation and, therefore, don't violate community guidelines.
    While I totally support a free and spirited discussion, I 
believe it is also important to recognize that disinformation 
can have real and significant consequences.
    I wonder, then, where and how do we draw the line between 
opinion and disinformation.
    Ms. Collins-Dexter. Yes. I think it is extremely important 
that we draw that line. Thank you, Congressman.
     A difference of opinion is ``I think taxes should go here'' versus, you know, ``pay less taxes,'' and there are a number of ways in which we have to be invested in the free marketplace of ideas.
     But it is different when it comes to information that is put forward that directly endangers people's lives on and offline. In the case of your bill, Congressman, from the data we do know, we are seeing right now that, depending on the state, 60 percent of deaths from COVID-19 are black people.
     The stakes are high given the amount of disinformation we've seen floating around, and particularly when Members of Congress are trying to put forward bills that would increase data and awareness around this and other things, it is extraordinarily important that we recognize there is a difference between opinion and the need for facts in our society.
    Mr. Rush. Yes. Next, Ms. Collins-Dexter, AI trained--you 
stated in your testimony that, I quote, ``AI trained to 
identify hate speech may actually amplify racial bias.'' This 
is a big concern to me and many of my colleagues.
    What could and should Congress do to prevent and mitigate 
this outcome?
     Ms. Collins-Dexter. Yes. So, I mean, I think that with the issue of AI we have found time and time again racial bias in healthcare software and in crime software. Google's hate speech AI has a racial bias problem, as does the technology that was used on Facebook.
    And so I think--and also we see that a lot of content 
moderators are contractors or not necessarily in the country 
and not able to really do their job at full scale. And so I 
think that kind of points to some of the ways in which I would 
see Congress moving forward.
     I think something like bringing back the Office of Technology Assessment would be really great to help ensure that Congress is informed and able to make decisions about how to move forward on issues like AI monitoring.
     Mr. Rush. Mr. Chairman, I think that my time has expired and I yield back.
    Mr. Doyle. I want to thank the gentleman for yielding us 
back 43 seconds, setting such a good example.
    I can see my good friend, John Shimkus, on the screen. So 
Mr. Shimkus, you are recognized for 5 minutes.
    Mr. Shimkus. Thank you, Mr. Chairman. It is a great hearing 
and thank you for the panelists for being here.
    I want to direct my questions to Neil Fried. It is good to 
have and see him again, of course, working with the committee 
for many years.
    And I think we all can agree that some content decisions 
are complex. But you suggest companies have not met the bar 
when it comes to clearly illegal content or violation of terms 
of service.
    Do the current incentives under Section 230 encourage 
companies to proactively engage in enforcing their terms of 
service or simply wait for users to flag content?
    Mr. Fried. Thank you, Mr. Shimkus.
     So despite the claim that it actually encourages content moderation, it doesn't. What it does do, right, with the safe harbor for content moderation, is get rid of the disincentive caused by the Prodigy case.
     But what that provision gives, 230(c)(1) essentially takes away by saying you can't be held liable for anything anyway. So there is actually (audio interference) no legal incentive, no incentive to be proactive.
    Mr. Shimkus. OK. And we had some interruption there. So but 
let me just follow up.
    And if companies decide to engage proactively, how do those 
incentives prevent the engagement from being entirely one-sided 
at the whim of the employees making those decisions?
     Mr. Fried. So the discretion is completely theirs. Almost every other business that is not a platform will have some duty of care; it can be held culpable if it acts recklessly.
    That does not apply to the platforms. So it is completely 
within their discretion. There is not a legal incentive for 
them to actually act.
    Mr. Shimkus. Great. My family is fourth generation 
Lithuanian Americans, and I follow a lot of the Baltic issues, 
as you know, and disinformation that comes from Russia 
throughout Eastern Europe but particularly the Baltic 
countries.
    So the Lithuanian government has created an initiative 
called Debunk.eu to combat disinformation. They found that during this COVID crisis there has been a significant increase in online disinformation, with stories that stir up Russophobia in the Baltics, push false narratives about the failures of the Baltic governments, and spread messaging that COVID-19 is destroying the European Union.
    Are there any lessons the U.S. can learn from projects like 
Debunk.eu?
    Mr. Fried. So, Mr. Shimkus, I am not familiar with that 
particular project. But, clearly, if that were to reveal 
information about websites that are engaged in misinformation, 
it would be great to have access to Whois information to try 
and track them down, see the patterns that exist that cyber 
experts often try to do and prevent that information from 
spreading.
    Unfortunately, we don't have Whois access because of an 
over application of GDPR.
     Mr. Shimkus. Yes, and I am trying to look for my clock to check my time, and if I have time available, any one of the other panelists may want to address that.
    Combatting--I have got two minutes left--combatting 
information with information is kind of what the Lithuanians 
are doing, and so it is, you know, fighting in that space.
    Does anybody else maybe--I see Ms. Collins-Dexter shaking 
her head. Would you like to comment on that?
     Ms. Collins-Dexter. Yes. I think it is extremely important to look at models around the globe; there are countries that are taking the issue of disinformation very seriously. We have seen how that has played out in Ukraine and some of the places that you have mentioned.
    This is part of the reason why I have advocated for a data 
protection agency. But in terms of, like, looking around and 
seeing what are the lessons, how can we get this right, we have 
to have a vested interest in getting it right.
    Mr. Shimkus. Thanks. Mr. Farid?
    Mr. Farid. Thank you, Congressman.
     Good information and trustworthy information are necessary, but they are not sufficient conditions. So we need that information, but we also then need the platform algorithms to allow it to surface.
     If it gets buried by the recommendation algorithms, it doesn't do us any good. So I think we need two things. We need that trustworthy information, and then we need for it to be valued and promoted above the untrusted information that we are talking about.
    Mr. Shimkus. Great. I have 43 seconds left, Mr. Chairman. 
Thank you, and I yield back my time.
    Mr. Doyle. I thank the gentleman for yielding back.
    The Chair now recognizes Mr. Butterfield for 5 minutes.
    Mr. Butterfield. Thank you very much, Mr. Chairman. Thank 
you again for convening this very important hearing today and 
thank you to the witnesses for your testimony.
    Mr. Overton, it is good to see you again. I have known you 
and of your work for many years now, and thank you very much 
for your testimony.
    Let me ask you, Mr. Overton, how does the dissemination of 
misinformation disenfranchise marginalized communities? Would 
you break it down at a level so that the average person can 
understand that?
    Mr. Overton. Certainly. Thank you very much and thanks for 
your leadership and your service to our country.
    The unique nature of social media involves micro targeting 
and, as a result, especially with ads, you can target 
particular groups.
    So, for example, this is what the Russians did and some 
other folks do, targeting African Americans with particular 
messages, building trust, et cetera, and then toward the end 
saying, well, hey, let us protest police brutality and let us protest systemic racism by staying home and not voting. That is the real problem with micro targeting: other people in the nation don't really know what is going on, and these messages are targeted right at black folks. That is a part of the problem.
    Now, micro targeting has some good traits if you think 
about sickle cell anemia, other things. We want to help people 
in certain ways. But it can also be used for negative purposes.
    Mr. Butterfield. Well, it seems that we need to correct 
this issue and we need to move pretty fast. So what, in your 
opinion--what are the platforms just plain failing to do and 
what do we need to do immediately?
    Mr. Overton. Well, the platforms were very serious in terms 
of COVID-19. They did things like look to outside experts and 
third-party credible entities to figure out what information 
was good, what information was bad.
     They invested a lot of resources and took it more seriously than they have taken voting issues. So the first thing is, really, the platforms themselves need to step up to the plate.
     The second piece here is they need to disclose information so that the American public understands what is going on, both the schemes that are underway and how the platforms are applying their standards.
     So that disclosure at this point is important. You know, there is a question whether anything can be done in terms of law in the next few months. But, certainly, a hearing pressing these companies is important.
    Mr. Butterfield. Let me next turn to Dr. Farid.
    Dr. Farid, how has the reluctance by some social media and 
tech companies to fact-check content on their platforms 
actually exacerbated the spread of disinformation?
    Mr. Farid. Thank you, Congressman.
     I think you are right. When Mark Zuckerberg takes the stand that we don't want to be the arbiters of truth, this is part of the problem we are facing in the misinformation apocalypse.
    But it is also, as I said in my testimony, completely 
missing the point. The point is not always entirely about 
what's true and what's false, but it is what is being 
amplified.
    The power of Facebook, the power of YouTube, the power of 
Twitter is in the recommendation engine, and as Professor 
Overton was saying, in the micro targeting of information.
    So I think there is a bit of a head fake being done here to 
say, well, we don't want to tell people what is real and what 
is not when, in reality, they are telling us on a daily basis 
what is relevant and what we should view.
     That, I think, is where the discussion should be had: what is being actively promoted and recommended and micro targeted. Then we can move away from some of the difficult conversations about the gray area of content being either true or false.
    Mr. Butterfield. OK. And I think you may have touched on 
this but let me just go ahead and put it out anyway. How would 
you recommend that tech companies take meaningful action to 
prevent the dissemination of disinformation on their platforms, 
meaning--I mean, not just rhetoric but they need to act 
quickly.
    Mr. Farid. I agree, and I have been deeply frustrated for 
over a decade in their response from everything from child 
sexual abuse to terrorism to illegal drugs and misinformation, 
and the reality is they have not acted.
    And so I think there should be a call of action to the 
advertisers. The reality is that the social media companies are 
almost entirely dependent on advertising dollars.
    There are ten CEOs in the world who can wake up tomorrow 
and say, enough is enough. We are done with technology being 
weaponized. We will no longer advertise on these platforms, and 
there is a slowly growing effort on that part. A number of 
companies have started to withhold advertising dollars. Disney 
has done it in the past with YouTube because they failed to 
protect children.
    I think if we want action today that is where the power is, 
is in the advertising dollar.
     Mr. Butterfield. Thank you, Dr. Farid. I have run out of time, and our chairman wields a pretty mean gavel.
     I yield back.
    Mr. Doyle. I want to thank the gentleman for yielding back.
    OK. I see the doctor is back in the house so we will 
recognize Dr. Burgess.
    Mr. Burgess. Thank you. Thank you, Chairman. I appreciate 
the recognition.
    Mr. Fried, let me ask you, and I apologize for the fact 
that there are dueling hearings going on, and if you have
already answered this I will apologize to you in advance.
    But in your written testimony you argued that Congress 
should restore a duty of care by requiring the technology 
company that hosts content to only receive the 230 immunity if 
they make a good-faith effort to remove said illicit content.
    So in your view, what does make up this good-faith effort?
     Mr. Fried. So the good-faith effort is not as much an issue there as the reasonable action, right. Under a common law view, any company can be held culpable if it acts negligently, recklessly, or knowingly fails to prevent one user of its services from harming other users.
     And so that increases the incentives to make sure that they are not (audio interference) take decent steps (audio interference) steps they're fine. But Section 230 says that no matter what happens, a platform cannot be held culpable even if (audio interference) it has no concept of this problem. If we actually required the reasonable steps, I think this would largely solve itself.
    As Professor Farid mentioned, they have a legal incentive 
to act and they (audio interference) responsibility by design 
to anticipate the risk a more reasonable (audio interference) 
and, certainly, when they know of illicit activities, they'll 
be able to stop it. So even (audio interference) there are 
liability protections (audio interference) would encourage more 
responsible behavior.
    Mr. Burgess. Great. So I had an opportunity just this week 
to have a conversation with our former colleague, Chris Cox, 
who was, if I recall correctly, one of the authors of this 
provision originally, and he suggested that the authority by 
the state AGs to enforce unfair deceptive trade practices--mini 
FTCs, I think, he referred to them as--but they could be used 
to enforce content moderation policies as outlined in the terms 
of conditions.
    So, Mr. Fried, if you could speak to that.
    Mr. Fried. So there is (audio interference) the state 
action would be pre-empted. So it is not quite clear what the 
states will be able to do and I think that the state AGs have 
expressed concern about that, and Congress (audio interference) 
might want to look at that.
    Mr. Burgess. So given the cross-border applications on the 
internet and the way the technology works, where actually does 
the jurisdiction then reside?
     Mr. Fried. Well, certainly, if there is criminal activity in the state and the state takes the view its citizens are being harmed, as you point out, there may be jurisdiction over the institutions perpetrating the harm. If there are requirements that apply to these online providers (audio interference) So there are some jurisdictional issues to address. But we have done that before and we can do that again.
    Mr. Burgess. So is this a viable option to get companies to 
comply with their own content moderation policies?
    Mr. Fried. I think so. It works for every other industry 
(audio interference) have an industry many, of whom compete 
with the online platforms. There are non-online media entities 
and platforms (audio interference) with brick and mortar 
companies not in the media space. They have this obligation and 
they deal with it, and so I am not sure why the platforms 
shouldn't.
     Two hundred thirty works for the Good Samaritan, but it is also shielding Bad Samaritans. We need to address the Bad Samaritan problem. Amending 230 to right the incentives would fix it.
    Mr. Burgess. So it seems like anything that changes the 
applicability of Section 230 based on size of the technology 
company could create a barrier to entry for others who are just 
starting out and, obviously, much smaller. Is that a concern?
     Mr. Fried. I actually don't think so, because under a duty of care (audio interference) what is reasonable, and certainly the resources of the entity are a factor in what is reasonable. A smaller entity has fewer resources. A smaller entity has fewer users. A smaller entity has fewer (audio interference).
     But even when they're smaller, they still need content moderation and (audio interference) they don't have the resources. As they grow, the expectation grows with them, and with it the responsibility to build their platform. They should build their platform when they're small to address the problems when they're big.
    Problems (audio interference) have very, very large 
platforms that really weren't focused on (audio interference) 
right. And so now we have a much broader problem (audio 
interference) I think a duty of care can (audio interference)
    Mr. Doyle. The gentleman's time has expired.
    Mr. Burgess. I thank the chairman. I will yield back.
    Mr. Doyle. Thank you. The gentleman yields back.
    Let me remind all Members that in order to be recognized 
your video needs to be on and active. So if you are in line 
waiting to be recognized, please make sure you have your video 
on.
    I see that the chairman of our full committee has returned 
from the Rules Committee. So it gives me great pleasure to 
recognize Mr. Pallone for 5 minutes.
    Mr. Pallone. Thank you, Chairman Doyle, and this is such an 
important hearing on disinformation and how it divides the 
nation.
     I am constantly reminded of a quote by Mark Zuckerberg that he doesn't want to be, and I quote, ``the arbiter of truth.'' But I think that absolutely misses the point, and I know
that Dr. Farid is there, and in his written testimony, he talks 
about how certain disinformation or conspiracy theories are 
amplified on platforms like Facebook, and the problem caused by 
Facebook has nothing to do with truth and falsity, in my 
opinion. It is about what they amplify or don't, and it may go 
to a fundamental problem with Facebook's business model.
    So last month, the Wall Street Journal published an article 
entitled ``Facebook Executives Shut Down Efforts to Make the 
Site Less Divisive,'' and I would like to introduce it for the 
record. It is here. And I think that this article shows that 
Facebook understands the problem but won't address it. I guess 
I should ask unanimous consent of you to enter it into the 
record, Mr. Chairman.
    Mr. Doyle. Yes. Without objection, so ordered.
    [The information appears at the conclusion of the hearing.]
    Mr. Pallone. Thank you.
    So let me ask Dr. Farid. Do you believe the major platforms 
understand that they are promoting disinformation and 
conspiratorial content, and could they address it if they 
wanted to?
    Mr. Farid. Thank you, Congressman.
    So, first of all, I think you are absolutely right about 
the landscape here. The Wall Street Journal did a very good job 
of revealing what we have known for a long time, which is that 
Mark Zuckerberg and the C-suite at Facebook know that they are peddlers of hate, divisiveness, and conspiracies because it is good for business.
     So they have known this for a long time. The algorithms 
have learned this to the tune of $70 billion last year in 
revenue. And they could do something about it. They absolutely 
could. But it would, of course, reduce engagement and, 
therefore, reduce profits.
     And so this is sort of our problem with these business models of social media: they are misaligned with individual, societal, and democratic goals because they have figured out that they can tap into the lowest common denominator of humans.
     So it is like any other addictive substance, right. These things are tested in order to keep us addicted, in order to tap into the things that are not necessarily positive for society.
     So you can blame us, the users, for falling into this trap. But the fact is that we are being manipulated by the algorithms so that they can simply harvest our data and serve advertising to us.
    Mr. Pallone. Well, thank you.
     So, I mean, these platforms may be biased towards conspiracy theories. But I think the problem is made ten times worse when important figures and government officials peddle disinformation or conspiracy and hate that feeds these platforms.
    So let me ask Ms. Collins-Dexter. In your written 
testimony, you discuss how some public figures peddle 
disinformation to the masses with few, if any, repercussions 
and, as we know, President Trump is one of those public 
figures.
    So let me ask you, how has President Trump used social 
media to spread or amplify disinformation that is harmful to 
black Americans and otherwise inflame racial tensions?
    And you also mention that social media companies like 
Facebook are not fact-checking posts consistently such as those 
related to mail-in voting.
    And so what is the effect of that inconsistent fact-
checking? So first with the racial tensions and second with 
the, you know, mail-in voting as examples, if you could, Ms. 
Collins-Dexter.
    Ms. Collins-Dexter. Yes, thank you, Congressman.
     We have pointed out several different examples over the course of this hearing, and the ways in which commentary around protestors and the depiction of Black Lives Matter as a terrorist organization have had deep impacts in terms of harms to people on the ground.
     But another way in which I have seen the President push disinformation is in some of his early commentary on COVID-19 and his push that lupus medication was what people should be using.
     Now, if you look, again, black people are disproportionately impacted by COVID and also disproportionately impacted by lupus and a number of other autoimmune diseases. It runs rampant in my family, for example.
     So that piece of disinformation, allowed to stand, caused a run on lupus medication that actively harmed people. My sister couldn't get her medication. There were a number of different issues. So that is just one example I would raise.
    Mr. Pallone. And what about the mail-in ballots and, you 
know, comments about that?
     Ms. Collins-Dexter. Again, this is dangerous. We have talked about the fact that vote by mail is very safe. There is research that shows that in elections it has something like a .0025 percent chance of fraud. So to promote the idea that it is a fraudulent practice has implications, also for black people going to polling places, which we know are being closed down, standing in long lines, not being able to engage in democracy safely.
    Mr. Doyle. The gentleman's time has expired.
    The Chairman. Thank you.
    Mr. Doyle. The Chair now recognizes the ranking member of 
the full committee. Mr. Walden, you are recognized for 5 
minutes.
    Mr. Walden. Well, thank you very much, Mr. Doyle, and 
Chairman Pallone and I were both at the Rules Committee. So I 
appreciate being worked in here and I am sorry I wasn't able to 
give my opening statement myself. I understand Mrs. Brooks did 
a wonderful job with it.
     Mr. Farid, I think many of the issues related to content moderation by platforms could be solved if companies enforced their terms of service in a transparent, uniform, and equitable manner.
     That is what makes it so disconcerting in this case of Google that we have read about, which reportedly based action off of inaccurate NBC reporting, and now just yesterday another egregious action by Twitter to block the President's tweets.
    These recent actions make clear that the CEOs of these 
companies need to come before this committee to answer our 
questions, and if they will not come voluntarily then, Mr. 
Chairman, perhaps it is time we compel their attendance.
    Mr. Fried, free speech and content decisions are certainly 
tricky, and I am certainly an advocate of the First Amendment. 
I have a degree in journalism.
    But how can we incentivize these companies to step up and 
apply their rules with more transparency and more uniformity?
    Mr. Fried. Thank you, Mr. Walden. Good to see you.
    Certainly, transparency would be helpful. I know there is 
some discussion that, you know, a violation of a terms of 
service could be a contractual problem. There is some 
discussion, I think, of asking the FTC to step in and call it 
an unfair practice.
    So you could certainly try that. One thing I suggest you 
look at is, and this was raised by DOJ, is whether Section 230 
might actually even preempt FTC action. It is an open question.
     If you look at the Roommates.com case, which was mentioned sort of implicitly earlier, if the platform is itself acting in an illegal fashion and perhaps violating a term of service, and were it Congress's decision to make clear that is an illegal action and an unfair practice, then maybe under the Roommates case the FTC could have jurisdiction because it is not a content moderation issue.
     I surmise, however, that the platforms will argue that no, no, this is about our content moderation, not about terms of service, and so Section 230 preempts.
    So, certainly, calling it an unfair practice to violate the 
terms of service could be an issue. But I'm not sure how that 
interplays with Section 230 just yet.
     Mr. Walden. Yes, and I guess for the average consumer or user, it is pretty tough to take on these giants individually and wage this battle. This is bigger than City Hall.
     Mr. Fried. And keep in mind, too, again, I am all for the transparency. But they control their terms of service. So if they know that a violation of their terms of service could cause an FTC violation, they presumably would write their terms of service in a more generic way so as to avoid liability under that sort of theory.
    So that is why I certainly think transparency helps. I fear 
we are going to have to address the 230 immunity to really 
solve this problem.
     Mr. Walden. Well, I think it is long overdue, in many respects. It was a law written decades ago, in an era that didn't contemplate the scope of modern communication technology and the power of these platforms.
    And I know it is not easy to quote, unquote, ``regulate'' 
speech because what one person finds offensive the other may 
say that is my right. And so they are in a tricky position as 
well.
    But I know we have asked Google three pretty simple 
questions about this case involving The Federalist and just how 
were they notified, how did it work, and all that, and we still 
don't have answers back on that, and they were actually pretty 
simple questions.
    Mr. Farid, you mentioned how companies employ algorithmic 
amplification to increase views of certain content that draws 
attention to their platform but need to prioritize algorithms 
to promote trusted information over misinformation.
    Should there be a heightened accountability for these 
platforms when it comes to amplified content in order to gain 
liability protection, and how do you incentivize these actions?
    And I know when I chaired the committee, we actually did a 
hearing on algorithms so members could better understand how 
they are created, because there is implicit bias in some of the 
algorithms that is unfair and can--I will leave it at that. But it 
was certainly demonstrated that there are some unfair issues in 
the algorithms.
    So Dr. Farid, can you explain?
    Mr. Farid. Yes. So there is no question, Congressman, that 
simply turning over everything to algorithms that are not 
explainable or understandable does not solve all of our 
problems.
    As Ms. Collins-Dexter pointed out, we have seen bias in 
algorithms in medical care, in judicial systems. So we do have 
to be careful about that. And again, you know, we have been 
beating this drum. But the transparency and the understanding 
of these algorithms is critical.
    I would say though that, you know, we still have this 
tension here between misalignment of corporate interests and 
societal interests and what is keeping people on the platform, 
and that I think is where that tension is.
    You are absolutely right, Congressman, there is--they have 
a tough business to run.
    Mr. Walden. Yes.
    Mr. Farid. What insults you doesn't insult me, and vice 
versa.
    Mr. Walden. And I know my time is about over. But I would 
just say I have seen it too in the news business, where they 
actually track which story at what time of day, which headline, 
which lead gets you the most.
    And I am not sure that is a good thing either. I realize 
they got to run their business, but I am not sure--that only 
reinforces kind of--it doesn't expand our scope of 
understanding information. It may just drive us deeper into our 
own silos.
    And so I yield back, Mr. Chairman.
    Mr. Doyle. The gentleman's time--the gentleman's time has 
expired. The Chair now recognizes Ms. Matsui for 5 minutes.
    Ms. Matsui. Thank you very much, Mr. Chairman, and thank 
you very much for your patience for some of us going back and 
forth all the time between Rules and the Energy and Commerce.
    Facebook has announced that it will remove COVID-19-related 
misinformation that could contribute to imminent physical harm. 
However, Facebook appears to have adopted a narrow 
interpretation of imminent physical harm that is allowing 
significant amounts of disinformation to remain in circulation 
online.
    Dr. Farid, do you believe imminent physical harm should 
include false information about COVID-19 testing site 
locations, hours of operation, or documentation required?
    Mr. Farid. Thank you, Congresswoman. I agree with you. I 
think Facebook has taken a particularly narrow definition of 
imminent harm, and maybe that was an excusable position three, 
four months ago. But I think it is inexcusable today when we 
know that even a fraction of the population that is getting 
misinformation can lead to health problems for the entire 
society and the world.
    So I think that they need to take a much harder line on 
this both in terms of what we see publicly and also the private 
groups that are much less transparent.
    Ms. Matsui. So do you also believe there is additional 
public health misinformation that should be removed by 
Facebook?
    Mr. Farid. Absolutely. We are seeing a proliferation of 
misinformation and conspiracies on Facebook, on YouTube, on 
Twitter, on Reddit, and on almost all social media platforms 
that are not actively being dealt with.
    Ms. Matsui. Well you know, even in situations where 
Facebook has taken down COVID-19 misinformation, it has failed 
to adequately remove all posts spreading the same false 
information. In one instance, Facebook removed a post 
suggesting that wearing a mask in public can make you sick but 
failed to remove duplicates or clones of this original post.
    Dr. Farid, in your opinion, has Facebook committed 
sufficient resources to identify and remove all duplicate or 
shared posts related to COVID-19 misinformation?
    Mr. Farid. They have not, and part of that is excusable 
because most of their moderators have not been able to do the 
work from home, and I understand that.
    But the fact is that prior to COVID-19 they did not have 
the safeguards in place both in terms of content moderators, in 
terms of technology, and in terms of policy to deal with these 
types of issues.
    Ms. Matsui. OK. Facebook has promoted its groups feature as 
a way for users to communicate privately with other users about 
topics of shared interest.
    While these groups have allowed some family members or 
local clubs to foster more intimate interactions, they have 
also provided malicious actors with a new tool to spread 
misinformation, hate speech, and conspiracy theories.
    Ms. Collins-Dexter, how have white nationalists employed 
Facebook groups to spread hate speech, and have these groups 
translated into real-world activity?
    Ms. Collins-Dexter. Absolutely. Our first entry point into 
negotiating with Facebook actually came about six years ago 
when there were closed white nationalist hate groups that were 
doxing organizers and black people in Sacramento and in Stone 
Mountain, Georgia, and people were showing up to those places 
and to people's homes with guns. And when we went to Facebook 
and asked them to make changes, they said they didn't feel like 
they had a problem and were slow to move.
    Now, cut to now, we see this plethora of issues coming out 
by the day of the ways in which violent hate speech, calls to 
murder, calls to violence run rampant in closed groups and 
including targeting mosques and other places, and Facebook, 
again, has been slow to act.
    Ms. Matsui. Well do you believe that Facebook provides 
sufficient public information about the ownership, management, 
and membership in neo-Confederate or white nationalist groups?
    Ms. Collins-Dexter. I think that is an issue--the same 
question. We are for sure in favor of private data. That is why 
we support laws like the one in California, and I think it is 
extremely important that we maintain data protection 
activities.
    But, again, when that line crosses over to bullets dipped 
in pig's blood, like calls to action that would actively 
compromise people's lives and safety, we have to really be 
strident about the ways in which we are making that information 
public so people can protect themselves.
    Ms. Matsui. OK. I just have a few seconds here.
    YouTube has had some success limiting the prevalence of 
misinformation by adjusting its recommendation algorithms.
    Dr. Farid, do you believe YouTube has the capacity to 
further adjust its algorithms to limit the spread of 
misinformation, and if so, why haven't these adjustments been 
implemented? I have got like 17 seconds so you have 17 seconds.
    Mr. Farid. Yes. So the answer is yes, they have the 
ability.
    In late 2018, we saw upwards of ten percent of the 
recommended videos being conspiratorial, and under public 
pressure they have been able to reduce that to around three 
percent. They have always had the ability to do it. They simply 
have not.
    Ms. Matsui. OK. Well thank you very much, and I thank all 
the witnesses. And I yield back.
    Mr. Doyle. The gentle lady yields back. The Chair now 
recognizes Mr. Kinzinger for 5 minutes.
    Mr. Kinzinger. Well thank you, Mr. Chairman. Thank you to 
all our witnesses for being here. Very much appreciate it.
    Despite all the debate, I actually have yet to really take 
a position on whether to preserve, amend, or repeal Section 
230. Given the potential ramifications, I definitely want to 
just take my time and be thoughtful.
    What makes the most sense is to make other legislative or 
regulatory attempts to bring about the change we want before we 
throw the baby out with the bathwater. If those options fail, 
we still have the ability to go back and amend or repeal it 
later.
    For example, I have introduced solutions to one subset of 
these issues, the fake accounts, which are under discussion 
today. One of them is H.R. 6586, the Social Media 
Accountability and Account Verification Act, which was referred 
to this committee, and also I have the second bill. It is H.R. 
6587, the Social Media Fraud Mitigation Act, which is related 
but was referred to the Judiciary Committee.
    Taken together, the bills seek to protect consumers by 
improving the operations of social media companies and 
punishing those who use fake accounts to cause harm to others. 
My office took on the task of trying to legislate or regulate 
social media companies without amending Section 230 or 
trampling on free speech, and it was certainly not easy.
    We consulted with attorneys, nonprofits, consumer 
advocates, industry and more, and yes, we went to nearly every 
Democrat on the two subcommittees represented here today to ask 
them to work with me on the bill and lead it with me.
    We offered to work with anyone to improve our language, to 
try to garner broad support. To be fair, some did engage and 
provide constructive feedback, and I am proud to say that much 
of the feedback was incorporated into the bill.
    I am not saying this to be mean-spirited. I am not calling 
anyone out by name. But as of today, and not for the lack of 
effort, I still don't have a partner on the other side of the 
aisle to work with me on it. If my friends on the other side of 
the aisle don't like my bill or other ideas offered by my 
Republican colleagues, that is fine. It won't hurt my feelings.
    But we have put something out there, and so let's either 
work on this together or please put forward a reasonable 
solution of your own. Because frankly I am a little frustrated 
that we keep having the Section 230 conversation and we haven't 
found a path forward, or rather it seems we can't even find an 
intersection where we are supposed to meet to chart a path 
forward.
    So I will end my comments by, once again, inviting my 
colleagues on the other side of the aisle or really any of my 
colleagues--I could use some Republican support as well--to 
reach out so we can move this ball forward.
    But more broadly, whether we are talking about romance 
scams, fake profiles, or tools of statecraft involved in 
massive coordinated disinformation efforts, we clearly have a 
problem with the status quo.
    We have already touched on whether you all think that 
social media companies are utilizing all their tools that they 
can to be able to take down scams or fraudulent activity, and 
Mr. Walden touched on the incentives for the companies. So I 
want to follow up on that with Mr. Farid.
    Putting aside the complexities of algorithm development, 
what other barriers are there in preventing the companies from 
implementing these safeguards?
    Mr. Farid. I think the primary barrier is not technology. 
We saw that when the platforms wanted to remove adult 
pornography or copyrighted material, they were able to do that.
    So it is primarily one of resources being put to research 
to deal with these issues. But there is real tension here and 
we have to recognize it, which is that it is fighting up 
against the core business model.
    You are literally asking companies to reduce their profits. 
And look, that is a tough ask, and that is because the core 
business model is one of engagement and not one of the way, 
say, Netflix or Spotify or Amazon Prime is where I pay a 
monthly fee and they get the money no matter what, and they 
just have to grow the user base.
    So there is tension here, and I think that tension is at 
the core of why it is difficult to get the companies to act on 
these issues.
    Mr. Kinzinger. So is the only real answer then government 
intervention, in your mind?
    Mr. Farid. I think there are two interventions. One is 
healthy competition. Maybe it is time for a better business 
model. Maybe we don't have to have a purely data-driven ad-
driven technology sector.
    Maybe there is a better business model. Maybe advertisers 
can say, you know what, we don't want our ads running against 
hateful divisive conspiratorial content.
    And then, of course, there is us, the user. I mean so you 
could blame the people creating the fake content. You can blame 
the platform for amplifying it. But we are part of the problem, 
too. We are the ones sharing it, liking it, and retweeting it.
    So as an educator, I also have to say this is our failure 
that we the people are part of the problem as well, and of 
course there has to be sensible regulation.
    And I agree with you, Congressman, that you don't want to 
throw out 230, and you don't want to move too fast in this 
space and have unintended consequences.
    There have been many, many wonderful things from technology, 
and we have to think carefully and thoughtfully how modest 
change can help us get out of the mess that we are in today.
    Mr. Kinzinger. Well thank you. And I have another question 
that I will submit for the record. And I yield back.
    Mr. Doyle. I see the gentleman's time has expired. Thank 
you. Thank you, Mr. Kinzinger. The Chair now recognizes the 
gentle lady from Florida, Ms. Castor, recognized for 5 minutes.
    Ms. Castor. Well thank you, Mr. Chairman. Hey, our 
witnesses have been terrific today. So thank you very much, all 
of you.
    I would like to ask, if the liability shield under Section 
230 were eliminated today, what practical changes would the 
tech platforms like Facebook make? What would we see happen? I 
will start with Ms. Collins-Dexter.
    Ms. Collins-Dexter. Sorry. Could you say that one more 
time?
    Ms. Castor. If the liability shield under Section 230 were 
eliminated today, what practical changes would we see tech 
platforms like Facebook make?
    Ms. Collins-Dexter. Yes. You know what? I honestly don't 
know, and this is part of the reason why I think we need to 
have this conversation in a broader context, one about general 
corporate responsibility and corporate concentration, and two, 
around like understanding what the research is telling us about 
230. I know that it is time to have a serious conversation 
around 230 for sure.
    But in terms of privacy protections and a number of other 
factors, you know, not taking a blowtorch to 230 before we have 
actually had a chance to talk about that is important, and I 
know that Mr. Overton----
    Mr. Overton. Yes, let me just comment.
    Ms. Castor. Mr. Overton? Yes.
    Mr. Overton. Sure. Let me just chime in for a moment. We 
really do need 230, and we need some form of 230 for a couple 
reasons.
    One, we want Black Lives Matter. We want the Tea Party. We 
want a variety of grassroots organizations to be able to 
participate and post their material without fear that the 
platforms feel like they are going to be sued, right?
    So 230 is important in terms of facilitating the speech of 
grassroots folks. It is an important provision, right? But, 
like we said, these are problems and there need to be some 
tweaks here.
    It is also important in terms of the original purpose was 
to facilitate content moderation, this notion that you are not 
responsible for every single thing that goes up. And again, as 
a result, you can have a lot of different voices up.
    But again, if you are discriminating and using your 
algorithms to make money, to, you know, target employment ads 
toward whites and away from Latinos, that is a problem.
    Ms. Castor. But it is worse than that. It is worse than 
that though, with the proliferation of child pornography and 
other illicit behavior. It goes beyond just the debate, you 
know, under free speech and the First Amendment. There is a lot 
of illicit activity that these tech platforms have used that 
liability shield to shield themselves.
    Mr. Overton. That is absolutely right, and that is why I 
say hey, we should think about reform rather than just--I think 
your initial point was let's just kind of blow it up or, you 
know, what would happen if we repealed it completely, right?
    And I do think that there is a problem with just absolute 
complete repeal, not just to the tech companies but to average 
citizens in our democracy.
    Ms. Castor. Mr. Fried?
    Mr. Fried. And I agree. Yes, I agree. But there is some 
good news here, which is the benefits of 230 come from (c)(2). 
It comes from the safe harbor for content moderation, right?
    We wanted--we didn't want Prodigy to be punished for 
stopping child predators. We didn't want Prodigy punished when 
it tried and missed something.
    So we can keep (c)(2). The problem is (c)(1), which says 
you don't have to do any of that. So my proposal is let's put 
that duty of care back in place. Let's keep (c)(2), which keeps 
the internet and the platforms as an avenue of free 
expression. But let us tweak (c)(1) so that they actually have 
to own that. They actually have to exercise a duty of care to 
stop the illicit stuff in exchange for that. That gets us the 
best of both worlds. We stop the illegal activity, but we keep 
the platform for free expression.
    Ms. Castor. And Mr. Farid?
    Mr. Farid. Thank you. We have an interesting experiment 
that has played out in Germany over the last few years 
with the NetzDG law which was addressing hate speech, 
terrorism, and extremism.
    And what happened is when the Germans passed very strict 
laws on takedown, what happened is that the companies ramped up 
their human moderators and they ended up doing a good job.
    They started--they just said, look, now the law mandates 
this with penalties up to 50 million euros for each failure, 
and guess what? They ramped up. They started getting better 
moderators, better technology, and the law actually worked.
    So we have a good existence proof that we can actually do 
better.
    Ms. Castor. Thanks to all of you. I will yield back. 
Thanks.
    Mr. Doyle. Thank you. The gentle lady yields back. It now 
gives me pleasure to introduce my friend and fellow suffering 
Pittsburgh Pirate fan, Mr. Bilirakis for 5 minutes.
    Mr. Bilirakis. We haven't lost a game yet, Mr. Chairman.
    [Laughter.]
    Mr. Doyle. That is because we haven't played yet.
    Mr. Bilirakis. You are right. We have a good shot with this 
short season though. OK. We will get back to business. I want 
to thank the witnesses.
    Mr. Fried, as both Representative McKinley and I have 
passionately called on in past hearings, we have seen 
advertisements for the sale of illegal drugs on social media 
websites.
    As we know, there are two types of ad groups online: one, a 
private negotiation between the platform and the advisor, the 
other where the advisor is the winner of a bid or for immediate 
available ad space where the platform is less connected to the 
transaction.
    And this is the question. How, if at all, can Section 230 
be retrofitted to fairly provide platform accountability for 
advertisements of illegal products in both these circumstances, 
or is there another method to address this very serious 
problem?
    Mr. Fried. And I think----
    Mr. Bilirakis. For Mr. Fried.
    Mr. Fried. Sure. Thank you, Congressman. I think if we fix 
230 (c)(1) and recreate the duty of care, we will make a lot of 
progress, right, because then there actually is an incentive to 
solve the problem and there is a legal consequence for failing 
to.
    If a nightclub doesn't do enough to stop the peddling of 
drugs in its nightclub, it can be culpable. But in the same 
scenario, a platform cannot because of (c)(1).
    So I think we need to recreate that duty of care, and this 
applies to all the horribles that we see on the Internet, 
whether it is illicit drugs, peddling of child pornography, 
cyber-crime, fraud.
    There is a lack of a legal incentive that applies to 
everybody else who is not an online platform, whether a 
traditional media or even just brick-and-mortar retail. And we 
need to write those incentives. We can save (c)(2) so we get 
the free expression.
    But if we recreate that duty of care, it would just require 
reasonable action and let there be scrutiny. Right now, we 
don't have to take the word of everybody else.
    But we have to take the word of the platforms. They may be 
doing a good job. In fact, often they are. But there is no 
scrutiny of that, and that is why we need to fix the incentive.
    Mr. Bilirakis. Thank you. My next question, again for Mr. 
Fried--Section 230 (c)(2), as you mentioned in your written 
testimony and then also here, states that: ``A provider is 
protected from liability for any action voluntarily taken in 
good faith to restrict access to or availability of material 
that the provider or user considers to be obscene or otherwise 
objectionable,'' and that is a quote here.
    That is an exceptionally wide protection. In your opinion, 
what would be required for a provider to fail that standard 
under the current language, and are you aware of any real-world 
situations where the standard was not met? Again, for Mr. 
Fried.
    Mr. Fried. Sorry, I am having a little mic trouble. Can you 
hear me?
    Mr. Bilirakis. Yes, I can.
    Mr. Fried. So when we are talking about speech, we do have 
to be a little more concerned, as Mr. Walden pointed out, about 
the First Amendment. But there may be some value in the good-
faith provision, right?
    So if it is clear--Section 230 is meant to protect 
consumers, right? So an effort to protect consumers done in 
good faith is fine. But if there is evidence, and it would take 
evidence--but if there is evidence that there is a pretextual 
use of content moderation, it is not really to help consumers 
but that it is being used as a pretext for some other motive, 
then you might have a court say well OK, 230 doesn't apply.
    There is starting to be some discussion of that. It is not 
always a clear discussion of good faith. But I would look for a 
discussion of that; what is pretextual rather than protecting 
consumers? That may be a place to explore. That way you don't 
worry about regulating speech.
    Like if it is clear--if there is evidence that what they 
are doing is not protecting consumers, then maybe they don't 
get the defense of 230.
    Now that doesn't mean that they are necessarily culpable 
for anything. It just means that they have lost their liability 
shield. They still would have had to engage in something 
illicit. But at least you can have that conversation--have they 
done something, have they violated a contract, have they 
violated a law without using 230 as a shield when they are not 
really protecting the consumer.
    Mr. Bilirakis. Thank you very much. Mr. Chairman, I will 
yield back my 35 seconds. Appreciate it very much. I thank the 
witnesses as well.
    Mr. Doyle. I thank the gentleman. It gives me pleasure to 
introduce the gentleman from the great state of California, Mr. 
McNerney, for 5 minutes.
    Mr. McNerney. I thank the Chairs and the ranking members 
and the panelists. This is a great hearing and great 
engagement on both sides of the aisle. So thank you all.
    Last year I sent a letter to Mr. Zuckerberg expressing 
concerns about the potential conflict of interest that Facebook 
faces between their bottom line and addressing the spread of 
political disinformation on their platform. I asked specific 
questions focusing on Facebook's handling of disinformation.
    But they did not answer my questions. I also asked Facebook 
some of these questions again when they testified before the 
committee earlier this year. Still they refused to answer my 
questions.
    Professor Farid, you spent a lot of time looking into these 
issues and working with the companies to understand their 
practices. What is it that they are hiding?
    Mr. Farid. Mark Zuckerberg is hiding the fact that he knows 
that hate, lies, and divisiveness are good for business. He is 
hiding the fact that content moderation is bad for business, 
and so he props up these phony arguments to hide behind.
    And I think Mark Zuckerberg is hiding the fact that his 
entire business model of maximizing engagement and maximizing 
advertising dollars just stinks. It is bad for us as 
individuals. It is bad for society, and it is bad for 
democracy, but it is awfully good for his bottom line to the 
tune of $70 billion last year.
    I continue to argue that the core business model is the 
poison here. When you are in the attention-grabbing ad-driven 
business, your job is to keep people on the platform for as 
long as possible, and we know that hate, divisiveness, 
outrage, and conspiratorial content drive business, and he knows 
this and he is profiting off the back of us as individuals, 
societies, and democracies, and I think we should hold him 
accountable for that.
    Mr. McNerney. Thank you. That was a pretty strong 
statement, Professor. I appreciate it.
    Spencer Overton, thank you for presenting this morning. 
Some Republicans in Washington have made demonstrably false 
allegations of anti-conservative bias on social media. But as 
the representative from Stockton, California, the most racially 
diverse city in the country, my concern is really about 
protecting the rights of all citizens to vote.
    Professor, can you talk about some of the tactics that have 
been used to suppress votes of black people and people of color 
on social media platforms? Also, with just 131 days to go 
before the general election, what tactics are you concerned 
about that will be used leading up to the November election?
    Mr. Overton. Yes, thank you very much. And just to be 
clear, I don't think that we can equate content moderation, 
which is debatable, with voter suppression. I think that they 
are just very different things. So to kind of have some false 
neutrality like they are the same thing is wrong.
    This targeting, targeting of messages at particular 
communities is a primary device. So we see that. We saw that in 
2016. We are seeing it in 2020.
    Certainly messages about, you know, extensive fraud can 
certainly discourage people from participating and engaging, or 
messages that, hey, we are going to have law enforcement at 
every polling place. A variety of messages like that can 
certainly discourage participation, especially when they are 
targeted at particular communities.
    Mr. McNerney. Well thank you. Professor Farid has already 
addressed the question of what action the platforms could be 
taking and talked about the need for advertisers to act. I 
would like to hear from the other witnesses, starting with Ms. 
Collins-Dexter.
    What are some of the steps that social media platforms 
could be taking right now that they are currently--that they 
aren't taking to combat the spread of disinformation on their 
platforms?
    Ms. Collins-Dexter. Thank you. So I think there is a lot to 
be said around the recommendations that folks have mentioned. 
Often when you are recommending other sources, it can take you 
down a dark rabbit hole, and we have seen that a couple of 
times with increased recommendations of white supremacists.
    I think there needs to be stuff fixed around the content 
moderation. I think we need permanent civil rights 
infrastructure that exists in the executive level in the C-
Suite working with Mark Zuckerberg. I think it is critically 
important that we see civil rights not as a partisan issue, but 
one that has implications across the board and that there is 
someone there that represents those interests.
    Mr. McNerney. Well, thank you. Professor Overton, in 17 
seconds?
    Mr. Overton. Yes. Civil rights--it is not partisan. It is a 
bipartisan issue. Facebook has kind of created this false 
dichotomy of like conservatives versus civil rights. That is 
completely wrong. People in both parties are committed to civil 
rights, and you know, we need to stay firm with that.
    Mr. McNerney. Thank you. I ran out of time, Mr. Chairman.
    Mr. Doyle. The gentleman's time has expired. I thank the 
gentleman. I see my good friend, Mr. Johnson, appears to be in 
an automobile, hopefully in the passenger seat.
    So Bill, you are recognized for 5 minutes and keep your 
eyes on the road if you are not in that passenger seat.
    Mr. Johnson. Yes, I am an IT guy, Mr. Chairman, and I can 
multitask. So I am good.
    Mr. Doyle. OK.
    Mr. Johnson. But I am in the passenger seat. Thank you very 
much. Hey, you know, it frustrates me when I hear these tech 
companies like Facebook and Google and Twitter and others hide 
behind the excuse that it is their algorithms that are making 
these decisions about the content that they serve up to the 
American people.
    Look, I have got two degrees in computer science. We have 
talked about this before in other hearings. Algorithms are 
logic constructs that are built by humans, and the computers 
are told what to do. They don't dream this stuff up on their 
own.
    And I also get frustrated because, you know, one of the 
main reasons that these technology platforms have been able to 
be as prolific and as powerful as they are is because they 
haven't been regulated, and in the absence of regulation it 
takes the notion of social responsibility even that much higher 
to self-police.
    So Mr. Fried, in your testimony, you talk about how the 
tech industry is one of the only sectors that not only is free 
from regulation before the fact but is also free from judicial 
scrutiny after the fact. Instead, Congress has delegated the 
oversight authority to the actual tech companies themselves to 
self-regulate.
    How has this balance been struck in other related 
industries like the newspaper or broadcast industries, and what 
has been the effect of that balance?
    Mr. Fried. You know, ordinarily, it is one or the other. If 
you are regulated, ordinarily, you might have some limited 
immunity for the regulated activity because your business model 
has been restricted.
    If you are not regulated, then ordinarily, you are held 
culpable if you make a bad decision in designing your business 
model. The platforms have the best of both worlds.
    Now, the traditional media, right, still have a duty of 
care. In the New York Times v. Sullivan case, for example, the 
Supreme Court very importantly said there are First Amendment 
protections in holding a media defendant liable when there is a 
public official who is bringing a libel suit.
    But even in that case, they are subject to a knowledge or 
reckless disregard standard. So if they don't do their due 
diligence and are reckless or have knowledge of falsity, they 
can still be held culpable under a standard of care. They put a 
lot of effort into their fact checking to avoid that sort of 
culpability, and of course, if it is not a public figure there 
is even more potential culpability because it is not that high 
of a standard.
    But in a case where a platform knowingly or is recklessly 
disregarding falsity, they still can't be held culpable and 
that gives them an advantage because they can avoid the 
ordinary costs of business in avoiding harm.
    [Pause.]
    Mr. Johnson. Well I think I have lost my sound. Mr. 
Chairman, I yield back the remainder of my time. I apologize. I 
don't know why but I can't--I can't hear anything.
    Mr. Doyle. You know what? We can hear you, Mr.--we can hear 
you, Mr. Johnson.
    Mr. Johnson. Can you hear me?
    Mr. Doyle. You still have a minute and 25 seconds.
    Mr. Johnson. OK. Well here goes. My last question then, you 
know, and I appreciate what Mr. Fried just said.
    We have seen instances where platforms move beyond serving 
as conduits for third-party information distribution to instead 
acting as content providers. And I agree with Mr. Fried, you 
can't be both.
    So Mr. Fried, as edge platforms make this move from neutral 
bulletin boards of the 1990s to playing an active role in 
moderating content, has Section 230 given these platforms a leg 
up, an advantage among their media industry competitors?
    Mr. Fried. I think so. They don't have to be as sensitive 
to misinformation, to defamation. They avoid costs that every 
other responsible media organization has to be very concerned 
about, right.
    They are pushing data. They are not worried as much about 
curating content, and that gives them an advantage.
    Mr. Johnson. Yes. I agree with you totally. Mr. Chairman, 
in spite of my technical malfunction, I will yield back a total 
of 15 seconds. Thank you.
    Mr. Doyle. OK. I thank the gentleman. Now, it gives me 
pleasure to introduce Vermont's most popular Congressman, Peter 
Welch. Mr. Welch, you are recognized for 5 minutes.
    Mr. Welch. Thank you, Mr. Chairman. I am going to start 
with a reference to two of my colleagues, Mr. Kinzinger and Mr. 
Johnson and what he just said.
    Mr. Kinzinger asked the question as to whether it is time 
to pull the plug and to say that Congress has to act on Section 
230, and that would be Congress making decisions about what a 
duty of care is and making a decision whether to provide 
regulations or authority to oversee that.
    But in my view (audio interference), ask the witnesses for 
namely to establish (audio interference), or can we continue to 
leave that self-policing of the various platforms.
    Ms. Collins-Dexter, just very briefly?
    Ms. Collins-Dexter. Yes. If I understand you, I think you 
are saying more--whether or not we should do regulations. Are 
you----
    Mr. Welch. Law and regulation.
    Ms. Collins-Dexter. Yes, we absolutely need it. I think it 
is important that----
    Mr. Welch. I just want to go through this really quickly. 
So and Mr. Farid?
    Mr. Farid. Absolutely. We have been waiting for years, 
Congressman, for the tech industry to self-regulate, and they 
haven't. So we have to make some changes.
    Mr. Welch. OK. And Mr. Fried?
    Mr. Fried. Don't eliminate 230 but fix it.
    Mr. Welch. OK. And Professor Overton?
    Mr. Overton. Yes, we need some changes.
    Mr. Welch. I am going to characterize something that I 
think is what I am seeing in the situation. The typical defense 
from the tech companies against, quote, ``interfering,'' as 
they put it, is Section 230. It is 230 they cite.
    And this committee years ago was the author of Section 230. 
It has made us the biggest internet success in the world. Now 
Mr. Zuckerberg's argument is essentially, look how rich I am. 
That is how successful Section 230 is. But the casualty more 
and more is democratic discussion and democratic debate, and 
this is where I want to go to Mr. Johnson.
    The Zuckerberg defense is that he doesn't want to monitor 
speech. But as a number of you have said, and Mr. Johnson 
pointed out, the algorithm is something they control, number 
one, and number two, as Professor Overton pointed out, it is 
not about speech. It is about peddling the conflicting content 
that will most--get the most hits and produce the most money.
    Can each of you comment as to whether you see that as an 
ongoing threat to our democratic debate and dialogue? And I 
will start--I will start with you, Mr. Fried.
    Mr. Fried. Was that Fried or Farid? If it was Fried, I 
would say fix the incentives and the rest will----
    Mr. Welch. Pardon me.
    Mr. Fried. I would say fix the incentives and the rest will 
fix itself. We don't need to regulate them. We just need to 
give them the same incentive everybody else has to moderate 
their content, protect consumers, and know that if they are 
reckless, they are going to be culpable.
    Mr. Welch. Right. Mr. Farid?
    Mr. Farid. You are absolutely right, Congressman. They 
control the algorithms. They have designed the algorithms to 
promote the hate and the divisive and the conspiratorial, and 
they can optimize it differently. They just need the right 
incentives, whether that is regulatory, advertising, or 
conversation-based.
    Mr. Welch. And let me go on to Professor Overton. The 
question I think many of us have, and my colleague John 
Sarbanes has raised this before, is: can the Section 230 
freedom for these platforms to do anything they want, with 
algorithms that intensify division, coexist with decent 
democratic debate, or does one have to become the casualty of 
the other?
    Mr. Overton. Well, so far one has become the casualty of 
the other, and part of the problem here is that the platforms have 
not taken the steps they need to protect civil rights.
    They have the authority. They have the power. They haven't 
used it effectively. They haven't responded here. So that is 
the problem. And in light of that, you know, I do believe that 
the status quo is untenable.
    Mr. Welch. All right. I thank you very much for this 
excellent hearing. I yield back, Mr. Chairman.
    Mr. Doyle. OK. I thank the gentleman for yielding back. Now 
let's see now who is next. The Chair recognizes Mr. Flores for 
5 minutes.
    Mr. Flores. Thank you, Mr. Chairman, and I thank you for 
hosting this hearing.
    Mr. Fried, thank you for appearing here today and for your 
thoughtful testimony, and particularly--in particular, I 
appreciate your sensitivity to preserving First Amendment 
rights in the context of suggesting practical and effective 
solutions to the spread of disinformation.
    As you succinctly observed earlier, Section 230 was created 
to, quote, ``one, help a nascent online industry to develop 
into a forum for user-generated content, and two, to stem the 
growing state of harmful behavior on the Internet,'' unquote.
    You know, I think we can all agree it has succeeded beyond 
all expectations on the first goal, to grow the industry, but 
it has fallen woefully short of stemming harmful behavior, for 
the second part.
    You have recommended recalibrating Section 230 to restore 
duty of care by requiring Internet platforms to take reasonable 
good-faith steps to prevent illicit use of their services as a 
condition of receiving Section 230's protection. In other 
words, holding the platforms accountable when they act with 
negligent, reckless, or willful disregard.
    Can you elaborate on what restoring a duty of care looks 
like in practice, and how that becomes operational? In other 
words, what is the dynamic that makes this approach successful?
    Mr. Fried. So it is all about incentives. I think a number 
of the witnesses have talked about it. Every other nonplatform 
business has the duty, right? If they let their facilities be used to 
harm another and don't take reasonable steps, they can be held 
accountable. That is really what we want.
    No one can question them right now when they say, we have 
done enough. We are being reasonable. And again, in many cases 
they may be, and they will be vindicated when they are.
    But when they are not, and you can't question it, victims 
are left with no remedy. Often they can't even get discovery 
because as soon as the court says, oh, sorry, it is, you know, 
a content moderation issue, you can't even question whether the 
platforms are being responsible, are they being reckless, are 
they in willful disregard of a lot of awful stuff happening on 
their platform?
    If you create that duty of care you solve that problem, but 
you keep the great stuff that has made the platform so big. It 
is the content moderation safe harbor to promote free 
expression. That is the good part. That is what I think 
everyone wanted. That is what Congress wanted to do to address 
the Prodigy case, to help the good actor, the Good Samaritan.
    Unfortunately, (c)(1) protects the Bad Samaritan.
    Mr. Flores. Let's dig a little bit further into this. How 
do we determine what constitutes a reasonable good-faith effort 
to prevent illicit use, and can you provide an example of a 
model to illustrate? That would be very helpful.
    Mr. Fried. Sure. The reasonableness is not hard. That is 
the standard everybody lives on if you are not under--if you 
are not a platform, right.
    So if you are acting negligently or recklessly, you can be 
held accountable. You don't really have to define that. There 
is plenty of precedent on that.
    The good faith is a little more of a challenge. But, again, 
I think the key there--and, again, we have to be careful 
because that--the good-faith provision tends to come up more in 
a speech context than in an illicit activity context. But are 
they really trying to moderate to protect consumers, or is it a 
pretext? Are they using the claim of content moderation not to 
protect consumers but to accomplish some other objective? If 
there is evidence of that, then you can actually say OK, let us 
strip away the protection of 230 because this is not about 
protecting consumers.
    This is not being done in good faith. Then you have to ask 
the second question, which is: have they really done something 
illicit? They may not have. If it is a pure speech issue, even 
if they don't get protection of 230, the First Amendment will 
protect them. It is: have they violated some other duty?
    Once you get rid of the 230 protection, then you can ask 
that question. Many times they won't have violated a duty. 
Unfortunately, there are cases where I think they are violating 
that duty. That is the incentive we need to fix.
    Mr. Flores. OK. By the way, it is great to have you join us 
with the committee again today, and I yield back the balance of 
my time.
    Mr. Fried. Thank you.
    Mr. Doyle. OK. Thank you, Mr. Flores. He yields back. The 
Chair now recognizes the gentleman from California, Mr. 
Cardenas, for 5 minutes. You need to unmute, Tony.
    Mr. Cardenas. OK. Got it. Thank you very much, Mr. 
Chairman. I appreciate this opportunity, and I thank the 
chairmen and chairwomen and ranking members for holding this 
important hearing.
    The issue that comes to mind for me is that Americans have 
a lot of major problems going on today, and Americans don't 
know who to believe, where to get their information, or what to 
believe.
    So people are confused about what is news and what is 
commentary. The leaders of this nation have had an inherent 
level of credibility in the hearts and minds of Americans for 
generations, and that is a good thing.
    We have a big problem when we have a President who is 
misusing his access to the loudest megaphones in the land and 
who, by virtue of that megaphone and his affinity for making 
false claims, is the biggest transmitter of misinformation and 
disinformation in the country.
    According to one database, as of April 3rd, 2020, President 
Trump has made 18,000 false or misleading claims during his 
time in office. Even if they only have that half right, you are 
still approaching 10,000 instances of the most powerful person 
in the world giving misleading information or misinformation.
    We all know a few of the most recent claims: the use of 
hydroxychloroquine to cure the coronavirus, which the FDA has 
now said not to use because it--because the medicine has failed 
in several clinical trials.
    Or the constant rhetoric of a Hispanic invasion. He has 
used this word at least two dozen times when referring to 
Latino and Latina immigrants or asylum seekers.
    Even after a shooter killed 20 people in El Paso this fall 
and referenced a Hispanic invasion, Trump warned yet again of a 
looming invasion and claimed without any evidence whatsoever 
that a caravan of migrants headed to the border had been 
infiltrated by gang members.
    He even sent U.S. troops to the border, insisting that the 
operation was necessary to keep our country safe. But after the 
election was over, there wasn't another peep about an invasion 
and our troops were quietly called back. When the public reacts 
in horror to what he tweets, sometimes he and his staff walk 
things back and say he was just kidding.
    Well, of course the American people are confused. More than 
just confusing, disinformation and misinformation are harmful--
are things that are harmful to the people of this great nation 
when they come from a supposed leader and can have even more 
dangerous consequences for marginalized communities.
    Ms. Collins-Dexter, in your testimony you mentioned 
disinformation campaigns that have used social media to inflame 
racial divisions and hostilities in America, and we all know 
the president loves to use inflammatory rhetoric, however 
inaccurate, to sow hate and discord between Americans.
    Can you explain the long-term and short-term consequences 
and impacts of disinformation and misinformation that come from 
the President on disenfranchised and underrepresented 
populations?
    Ms. Collins-Dexter. Thank you. Thank you, Congressman. 
There are different issues here. There is one of the safety of 
people offline. We have seen an increase in white nationalist 
hate crimes carried out against different--Latinx, Asian, black 
communities, Muslim communities--and part of that stems from 
what we are seeing in closed groups.
    Also, voter suppression, as Dr. Overton has talked about. 
We have seen a lot of that. We talked extensively about attacks 
on Black voters. It has actually been woefully underreported 
how much disinformation around voting has been targeted to 
Latinx communities, and it actually was targeted through 
Russian troll farms.
    And so there are a number of ways in which fractures are 
left in our democracy and in our ability to live, fractures 
that come from disinformation online going unregulated, 
particularly disinformation from figures that are validated 
with Blue Checks by their name.
    Mr. Cardenas. Thank you. And to some of our panelists, if 
you could explain Section 230, being that some of these 
platforms are now the largest corporations in America 
apparently having revenues to the tune of tens of billions of 
dollars a year, are they adhering to 230 correctly when it 
comes to the possibility of interfering with their revenue?
    In other words, do they have enough resources to be more 
technology adherent and also hire more individuals so that they 
can actually do their due diligence and adhere to the spirit of 
230 while still having--giving themselves the opportunity to 
continue their business model?
    [Pause.]
    Mr. Cardenas. OK. No panelists have an opinion about 
whether or not it is a shortage of resources for companies like 
Facebook and others.
    OK. All right. Well----
    Mr. Farid. Congressman, I don't think it is a shortage of 
resources. When Facebook makes $70 billion a year, this is not 
a resource problem. This is a priority problem.
    Mr. Cardenas. Thank you. Thank you, Mr. Chairman.
    Mr. Doyle. The gentleman's time has expired.
    Mr. Cardenas. OK. Thank you so much. I yield back.
    Mr. Doyle. Thank you.
    Chair now recognizes Mrs. Brooks for 5 minutes.
    Mrs. Brooks. Thank you, Mr. Chairman, and thank you so much 
for this incredibly important hearing.
    I want to talk about the practical aspects. When we had 
Mark Zuckerberg testify before our committee now a couple of 
years ago, I actually asked him the question about promulgation 
of terrorist messaging.
    I had a constituent who actually had been beheaded by ISIS. 
I asked--I am a former U.S. Attorney and prosecuted many child 
exploitation cases and talked about still the proliferation of 
child exploitation over the internet, and he talked about the 
number of content moderators that the company had hired and 
they were hiring more.
    Dr. Farid, I would like you to talk a little bit about the 
practical aspects of how content moderators work, and I 
believe, Dr. Collins-Dexter, you might have mentioned that 
content moderators are often not in this country. I was not 
aware of this.
    Could both of you, quickly, talk about how the platforms 
actually use content moderators? And then also, I have recently 
heard that content moderators have sued these platforms because 
of the really horrific type of work that they do.
    So I am very concerned about this.
    Mr. Farid. I think you are right to be concerned, 
Congresswoman. So first of all, most of the moderators, the 
vast majority, are not employees of Facebook. They are third-
party. They go through vendors.
    There have been horrific stories of their misuse. They are 
underpaid. They are overworked. They have PTSD within weeks of 
working because they are looking at the absolute worst and 
horrific content that you can't even imagine online and they 
are not given the mental health services or the resources to deal 
with it.
    Facebook has outsourced some of the ugliest work that they 
have to do and I think they should be ashamed of themselves for 
that. The fact is they don't have enough moderators.
    They are very happy to trot out the number of moderators, 
but the reality is these moderators are spending fractions of a 
second looking at a piece of content and having to look at that 
for eight, ten hours straight, day after day after day. These 
are horrific working conditions, and that is why some of the 
really good investigative journalists have called out these 
companies for horrific treatment of these workers.
    I can tell you, having worked in the child sexual 
exploitation space that when the National Center for Missing 
and Exploited Children or the Canadian Center for Child 
Protection does content moderation, they greatly limit what 
moderators will see to only a few hours a day.
    They have mental health services. They have breaks. They take 
care of the people who are doing the dirty ugly work, and 
Facebook is simply not doing that.
    Mrs. Brooks. Thank you, and I must say that because I was a 
prosecutor in these types of cases, I did witness some of this 
and it is horrific, and so thank you for your time.
    But what--we have to have content moderators, and is your 
suggestion that through either more reporting, more 
transparency, that the country, the world, understand what is 
being allowed on these platforms?
    Mr. Farid. What we should be moving to, first of all, is 
not doubling but quadrupling the number of moderators so they 
can spend less and less time looking at this material to minimize 
the harm, and at the same time we should be deploying 
technology to mitigate the content that human moderators have 
to look at.
    The goal should be you're always going to need human 
moderators to make the difficult calls, but technology can do 
better and better at this. We have seen that successful in the 
child sexual exploitation space. But it is simply, again, an 
issue of investment.
    So we have to--we need way more moderators than we have. We 
have to treat them better. They have to have mental health 
services, and we need to start deploying technology in a much more 
effective way to minimize the harm to the human moderators.
    Mrs. Brooks. Thank you.
    I am going to shift very briefly. Dr. Collins-Dexter, you 
mentioned in your opening statement about the Office of--
Congress's OTA. Can you--I am on the bipartisan Committee on 
Modernization of Congress and we have talked about renewing 
Congress's, in the House, the Office of Technology Assessment, 
I believe.
    Or can you talk about what role you believe that would play 
or you think in order to help us move forward in these very 
difficult--understanding of this type of problem? What was your 
recommendation there?
    Ms. Collins-Dexter. Yes. So, I mean, two more things on 
content moderation.
    A, when we first went to Facebook five years ago and told 
them about the hate speech and violent threats online, they 
told us that their content moderators didn't understand how 
racism looked in the U.S. and that is why there are false 
positives. So that is an issue. Also, they have cut back a lot, 
which in the time of coronavirus is a big deal.
    Shout out to my mother-in-law, who worked at the Library of 
Congress and was a part of this, and she did a lot of, like, 
bipartisan research around technology and one of the things 
they said is that when they took the partisanship out of the 
research they were actually able to do a lot and invest in, 
like, innovation in communities.
    Mrs. Brooks. Thank you.
    Mr. Doyle. The gentlelady's time has expired.
    The Chair now recognizes Congresswoman Kelly for 5 minutes.
    Ms. Kelly. Thank you, Mr. Chair, and thank you to all that 
are testifying today.
    As we have established, many people, especially young 
people, use social media as their primary source of obtaining 
news, and also as we have established, unfortunately, through 
COVID and recent protests, there has been a lot of 
misinformation and disinformation.
    Ms. Collins-Dexter and Mr. Overton, social media platforms 
have used a variety of approaches to reduce disinformation on 
their platform: removing or down-ranking disinformation that 
doesn't pass fact-checking by independent organizations, up-
ranking and featuring authoritative content from recognized 
health authorities, changing the user experience to introduce 
friction, and being more transparent about the use of 
machine learning to moderate content.
    Should they be using these approaches on other topics that 
create similar harm such as the Census, political protests, or 
voting? And what other approaches should they be using now?
    Either one of you can start.
    Ms. Collins-Dexter. Professor Overton, do you want to 
start?
    Mr. Overton. Sure. Thanks so much.
    Certainly, when we look at--I know that they have fallen 
short in some ways with COVID here and, Dr. Farid, I defer to 
him. But I would also just say they have been doing a better 
job with COVID-19 than they have with regard to voting rights.
    So I would agree with you that they need to--you know, they 
do a good job on obscenity. They do a good job on some others. 
They need to adopt some of these practices. Some of it is 
technology and investing in technology.
    Some of it is changing definitions about what needs to be 
moderated in terms of voter suppression, in terms of 
misrepresentations about long lines, et cetera. That stuff 
needs to be included as well from a policy standpoint.
    Ms. Collins-Dexter. Yes, I agree. Sorry, Professor.
    Ms. Kelly. No, go on.
    Ms. Collins-Dexter. Yes. I absolutely agree. I think we 
have been asking them for years to make certain changes and it 
is mind-boggling how quickly they were able to scale up in a 
couple of weeks once disinformation was exploding online.
    Unfortunately, it was a little bit late because there was a 
lot of black disinformation that we have been tracking. I will 
be releasing a report later this week around how much had 
travelled before they cracked down.
    But I think what we see time and time again is that the 
urgency that they feel around other issues does not apply when 
we are talking about white nationalism or being anti-black 
disinformation.
    Ms. Kelly. Thank you.
    Professor Farid, in your testimony, you mentioned how video 
recommendation algorithms can accelerate disinformation and 
create a feedback loop.
    What incentive is there for social media companies to stop 
their current practices that are generating more eyeballs and 
more ad revenue, and how should companies intervene and is 
there a specific approach you believe a platform could take for 
viable content review?
    Mr. Farid. Good. So the first thing to understand is that 
on YouTube, for example, 70 percent--seven zero--of watched 
videos are those that are recommended by YouTube, not just 
organically you clicking on a video. So they are controlling 
what we see to a significant extent, number one.
    Number two, when, for example, YouTube was called out over 
and over again for not protecting children online, Disney 
withheld advertising dollars and then YouTube made changes.
    Similarly, when they started getting called out for 
horrific, dangerous, and deadly conspiracies, they eventually 
made changes. So there are real hard technological problems 
here. There are difficult content moderation problems. But we 
have not even come close to that line yet.
    The issue, as we have been talking about, is just a 
misalignment of incentives. So they profit with eyeballs. 
Conspiratorial, hateful, and divisiveness maximizes eyeballs. 
So unless there is a regulatory oversight, an advertising 
boycott, or better platforms emerge, their incentives are not 
there and so we have to sort of give them those incentives.
    Ms. Kelly. OK. And then how should companies intervene and 
is there a specific approach you believe platforms should take 
for viral content?
    Mr. Farid. Absolutely. So when content, and you are 
starting to see this, goes viral, there needs to be human or 
algorithmic moderation. If something has two views, I am not 
worried about it right now.
    But when those things spike, and they know when they spike 
because the recommendation algorithms find them and start 
promoting them, they have to have extra scrutiny and that just 
means putting the resources into that.
    Ms. Kelly. Thank you so----
    Mr. Farid. I think--let me just emphasize one more thing, 
too. You have to understand that on social media, the half-life 
of a post is measured in hours. So this is not something you 
can come to a week later or a month later. You have, literally, 
minutes to deal with these things as they go viral because they 
happen very, very fast.
    Ms. Kelly. OK. Thank you so much. Thank you to all the 
witnesses, and I yield back, Mr. Chair.
    Mr. Doyle. Thank you. The gentlelady yields back.
    The Chair now recognizes Mr. Hudson for 5 minutes.
    Mr. Hudson. Thank you, Chairman Doyle, Ranking Member 
Latta, Chairwoman Schakowsky, and Ranking Member McMorris 
Rodgers. Thank you for holding this joint hearing today, and 
thank you to all our witnesses for what has been an excellent 
discussion.
    The United States is a country founded on the principle of 
free speech and the free exchange of ideas. This is one of the 
principles that truly makes our nation great.
    However, I am disturbed by a recent trend of political 
censorship and liberal bias that has consumed social media 
platforms. Just yesterday, as has been mentioned earlier, 
Twitter took it upon themselves to censor another one of 
President Trump's tweets that opposed the establishment of an 
autonomous zone in Washington, DC, similar to the one we see 
in Seattle.
    This divisiveness we are discussing here today is real. 
Companies are openly suggesting they support the free 
expression of ideas as long as they are the same as their own. 
This does nothing but undermine free speech and divide our 
nation.
    As we examine how online disinformation further exacerbates 
these issues and further divides our nation, we must realize 
the far-reaching consequences of our actions and policy 
proposals.
    To be clear, deliberately misleading anyone about medical 
treatments or sharing false information about the COVID-19 
virus is dangerous and wrong. Spreading hate speech or 
disparaging others based on their race is also dangerous and it 
is wrong.
    On the other hand, when we discuss reforms to the internet 
we must be deliberate. We cannot stifle the innovation which 
has given us the greatest tool the world has ever seen.
    Without the internet and social media, our spread of 
critical information related to COVID-19 would have been slowed 
and could have cost thousands of lives.
    Additionally, it was a social media post that first told 
the world about the death of George Floyd, and the protests and 
demonstrations that have followed take advantage of things like 
hashtags, Facebook groups, and live streaming to share with the 
world their message in a way that was not possible just a few 
years ago.
    Mr. Fried, a lot of testimony we have heard has focused on 
reforms to Section 230 of the Communications Decency Act. In 
your testimony, you lay out several reforms that you would like 
to see in order to restore the original intent of the law while 
protecting free expression.
    If Congress does go too far, in your opinion, by regulating 
the internet, what are some of the risks we are taking? Would 
we be able to have the same sort of dialogues and civic 
engagement that we have come to enjoy on our social media 
platforms?
    Mr. Fried. So the good news is I am not proposing 
regulation, right. Regulation is usually limiting a business 
model in advance, saying you can't use your discretion.
    That is not the proposal on the table, right. What we are 
talking about is applying a duty of care. You have all the 
discretion you want in the front end. All we are saying is if 
you make a bad decision then you can be held culpable like 
every other business in America. I think that avoids the harm 
of regulation.
    We don't want to stifle innovation. We don't have to. Let 
them innovate. But if they innovate wrong, if they are careless 
and reckless, like every other business in America they should 
be able to be held to account. If they are doing nothing wrong, 
and they often aren't, everything is fine. But when they are, 
we can't even question it and that's it. Let us take the 
regulation off the table. I don't think we are going to chill 
innovation by holding them to the same duty of care as 
everybody else.
    We do have to be careful on speech. And so I think there we 
have got to be very careful. It is a lot easier when you are 
focusing on illicit conduct. Maybe there is some room in the 
good-faith requirement.
    If the courts give some meaning to that in a careful way 
about pretext, I think that can go a long way in solving some of 
our speech concerns.
    Mr. Hudson. That makes sense to me and, you know, Mr. 
Flores asked you about some of your ideas under your, quote, 
reasonable good-faith steps.
    But my question, I guess, would be how would that be 
enforced? What would that look like?
    Mr. Fried. I am sorry. Was that Fried or Farid?
    Was that for me?
    Mr. Hudson. That is for you. Yes, sir.
    Mr. Fried. Again, so every other company in America, right, 
is subject to a duty of care. They deal with it every day and 
so they take reasonable steps. This would, largely, be self-
enforcing. We would be lining up incentives.
    If they are going to be reckless, right, if they are going 
to allow--they are not going to combat the distribution of 
drugs over their platforms, they are not going to do what every 
other business does to make sure their facilities aren't 
misused, they will be held accountable. I don't think there is 
a lot to do other than to recreate the duty of care that 
applies to everybody else.
    Mr. Hudson. Thank you for that, and I am running low on 
time.
    But, Ms. Collins-Dexter, I wanted to ask you quickly, 
throughout your career you have worked on many successful 
campaigns and initiatives. Could you share with us how social 
media has been utilized as a tool for your efforts as well as 
for some current projects?
    How do you think various movements that seek to change 
society have benefited from an individual's ability to 
participate through social media?
    Ms. Collins-Dexter. Groups have always benefited from new 
technologies. It is extremely important as an opportunity for 
the voices of the unheard to be heard, whether it is through 
books, whether it is through media, whether it is through 
technology, and we have seen that when it is left in the hands 
of corporations and becomes unregulated, those voices actually 
end up getting drowned out more and more, and so that is part 
of what we are seeing now in social media.
    Like, as we have not been regulating these companies, more 
and more this idea of what free speech looks like is operating 
on a sliding scale where it is free for some and costly for 
others.
    Mr. Doyle. The gentleman's time has expired.
    Mr. Hudson. Thank you, Mr. Chairman.
    Mr. Doyle. I thank the gentleman. The Chair now recognizes 
my good friend, Mrs. Dingell, for 5 minutes.
    Mrs. Dingell. Thank you, Mr. Chairman and Chairwoman 
Schakowsky, for holding this important hearing, and the good 
news for all the witnesses is that by the time you get to me, 
it means you're getting close to the end.
    But this subject--this is something that is very important 
to me and I want to say to my colleague, Mr. Hudson, it is not 
just the conservatives. The unions have, in the last couple of 
weeks, sent a letter.
    Facebook has taken down uses of the word ``unionization,'' 
which has been very disturbing to the UAW because it was being 
taken down from sites where some really specific issues were 
being discussed. So I want my colleagues to know it is really 
an issue on both sides: how do we talk about it and how do we 
define it.
    And I am worried about how disinformation spreads like 
wildfire and I am trying to figure out how do we address it; 
how do we protect that free speech.
    But, so, for instance, I have been to many protests over 
the last two weeks. I don't look like someone who has been at 
17 of them but I have.
    And I have had several people come up to me. I had a 
constituent who was suspended from Facebook for a week for 
saying that people were going to die because they weren't 
wearing masks. And yet, somebody else who had been at that 
vigil had threatened her with guns. She was suspended for a 
week, and Facebook did not take down that ``I have guns, I can 
protect'' post.
    There is an inconsistency here that just makes--there is 
no--you have no metrics by which to judge how they are making 
decisions and I would really beg to differ with that.
    But let me start with Ms. Collins-Dexter. Have you seen 
other examples of platforms not applying their terms of 
services in uniform fashion and have you seen an uptick in 
these kinds of disparities in recent months?
    Ms. Collins-Dexter. Absolutely. I think, to be clear, all 
of the platforms have demonstrated some level of this issue, 
applying the rules unevenly, particularly when it comes to 
more prominent figures.
    Mrs. Dingell. Mr. Farid, I can see you nodding your head, 
too. So----
    Mr. Farid. Yes. I mean, look, it is easy to pick on 
Facebook. They are the biggest. But all the services, from 
Reddit to TikTok to YouTube to Google to Twitter, they are all 
struggling under the weight.
    But it is their weight. They built these things at scale 
and at a speed without putting the proper safeguards in place. 
So they don't then get to turn around and say, well, the 
internet is really big. The problems are really hard.
    You built this mess and now you have to fix it.
    Mr. Overton. And Congresswoman--I am sorry, Congresswoman. 
Spencer Overton.
    Just one thing here. You know, what content should be 
regulated by the platforms is definitely debatable.
    The issue, though, is that these are private entities and 
so, for example, when the President talks about free speech, 
you know, free speech is really about government, right.
    And so if we were to, basically, say Facebook had to have 
everything--they couldn't remove threats, harassment, altered 
video, misinformation, sexual privacy invasions, et cetera--so 
we definitely want transparency and more consistent content 
moderation.
    But I do think that us simply saying, hey, all of this is 
free speech opens the door to a lot of negative hate speech and 
a lot of, you know, violations of basic civil rights here that, 
you know, we don't want to occur.
    Mrs. Dingell. I agree with you and I really want--I have 
one--I have another question I want to ask and I am running out 
of time. But I really fear that the internet has become a tool 
of fear and hatred, and whenever I talk about the Second 
Amendment the death threats that I get that aren't taken down 
are sort of stunning.
    But, Mr. Farid, I am running out of time. Arizona has seen 
a spike in daily COVID cases. At the beginning of this month, 
there were, roughly, 200 new cases per day. Today, that number 
is over 3,500 daily new cases.
    Yesterday, President Trump held an event in Phoenix, which 
is a new COVID hotspot. Thousands of people attended the event 
without wearing masks and without socially distancing.
    When asked why they weren't taking the precautions, they 
told reporters that they didn't believe the number of reported 
deaths, that they were overstated and they didn't believe in 
the severity of the disease.
    Given your research, what do you make of these statements, 
and do you believe that online platforms are doing enough to 
curtail this deadly misinformation?
    Mr. Farid. So, first of all, I don't think they are doing 
enough, and we have seen this. We have seen the misinformation 
apocalypse and we have seen it propagate down to where people 
are making decisions that are affecting their health, their 
neighbors' health, and the health of others in this country, 
and I think that is a deadly consequence of allowing this type 
of misinformation to propagate through the services.
    Mr. Doyle. The gentlelady's time has expired.
    The Chair now recognizes Mr. Gianforte for 5 minutes.
    Mr. Gianforte. Thank you, Mr. Chairman, and thank you to 
all the panelists. This has been a very good discussion.
    I created my business on the back of the internet in the 
early 2000s. We eventually grew that from an extra bedroom in 
our home to one of the largest employers in Montana. We have 
1,100 employees globally and our website had about 8 million 
unique visitors every single day.
    We are a good example of how the internet has removed 
geographic barriers that previously prevented global 
businesses from operating in rural Montana and rural America.
    But the internet can also have negative effects. Platforms 
can amplify similar voices and stifle others without much 
clarity. In a time when many were forced indoors, 
misinformation had an even more disastrous effect.
    I understand how important Section 230 can be, especially 
for a small business, which doesn't have the resources of a 
large one. It is an important shield that also comes with a 
sword.
    There has been concern that certain companies are using 
their size to stifle certain voices. I believe it might be a 
lack of understanding by companies based in Silicon Valley.
    Back in March of last year, Missoula-based Rocky Mountain 
Elk Foundation reached out to my office because one of their 
advertisements had been denied by Google over concerns of 
animal cruelty.
    The ad featured a woman talking about growing up hunting 
with her dad. There were no dead animals. There was no animal 
cruelty. It promoted our hunting heritage. As an avid hunter 
and an outdoorsman myself, I know how many Montanans rely on 
hunting to provide for food for their families and as a way to 
enjoy our great outdoors.
    Many businesses in Montana promote hunting and fishing as 
it is their means to sell their outdoor sporting goods 
products. Will their businesses be denied the opportunity to 
advertise on a platform that owns a large portion of the 
market?
    Will they have to reach out to their member of Congress 
every time there is a, quote, misunderstanding? While there 
have been some troubling examples, I have appreciated the quick 
response and willingness to engage from these platforms. We got 
the problem fixed. It just took a lot of work.
    It is difficult to regulate a dynamic industry, and 
hastily drafted legislation could have more unintended 
consequences than solutions.
    Mr. Fried, in your testimony you pointed to ways we can 
work together for a solution. I am interested in what effect 
you think overly prescriptive legislation would have--what 
sort of negative impacts would overly prescriptive legislation 
have on this sector?
    Mr. Fried. I think that was me so----
    Mr. Gianforte. Yes.
    Mr. Fried [continuing]. If that was for me, I will continue.
    Mr. Gianforte. Yes.
    Mr. Fried. I really am not advocating regulation. We want 
all the experimentation. We want business models like the ones 
you talked about for your business. That is the innovation, 
right. And so to get the experimentation by not regulating, you 
get protection for free expression from the safe harbor of 
(c)(2).
    But what we need to do is just say innovate, experiment, 
but know that you are going to be held accountable for your own 
decisions. Every other business does that. It is just--it is 
personal responsibility. It is business responsibility. That 
will solve a lot of this. We don't have to be prescriptive.
    And the other beauty of this is you don't have to come up 
with different legislative solutions for every single ill on 
the internet. If you line up the incentives the platforms will 
solve their own problems because they don't want to be sued, 
right. That is what every other business does.
    We don't have to decide there is a solution for this and a 
solution for that. Make them accountable for their own actions 
like every other business.
    Mr. Gianforte. OK. Thank you.
    And just as a follow-on to that, Montana is a small 
business state. Innovation often happens in these small 
businesses.
    I am concerned as we look at public policy here that as 
small businesses compete with large businesses that--and I 
understand your concept of duty of care--how should that apply 
differently for small businesses versus large businesses so we 
don't stifle--the duty of care doesn't create a duty of burden 
so big that small companies can't actually innovate?
    Mr. Fried. So the great news is it already is sort of--it 
solves its own problem. Reasonableness is a flexible standard, 
and certainly a large company with lots of resources, what is 
reasonable for that company is different than what is 
reasonable for a small company, right.
    So the reasonableness standard will adjust to the size of 
the platform. Again, the small startup doesn't have as many 
users to moderate. It isn't in 12 lines of business. It has 
fewer users.
    So if it starts knowing, I am accountable for what I do, it 
will build responsibility by design. It will start small and 
responsible, and as it grows, it will have added resources to 
deal with other issues as they pop up. I think it solves its 
own problem.
    Mr. Gianforte. Thank you, Mr. Chairman.
    Mr. Doyle. The gentleman's time has expired. I thank the 
gentleman.
    The Chair now recognizes Ms. Blunt Rochester for 5 minutes.
    Ms. Blunt Rochester. Thank you, Mr. Chairman, and to the 
other chair and ranking members, to the panelists, especially 
Professor Overton, who I have had an opportunity to work with 
on future of work issues. I say thank you.
    I am struggling a little with this hearing because of the 
significance and the timeliness of it. I think back to when we 
had this conversation about Section 230 months ago and the fact 
that at that time we talked about the lack of diversity and 
inclusion in some of these platforms and some of these 
companies, and how even when we talk about algorithms or we 
talk about humans who are--have biases that it impacts what we 
get out of this--you know, this system.
    And now we are facing COVID-19, a pandemic on top of a 
pandemic, while we address racial and social issues that our 
country has long been plagued with, and what these platforms 
do is magnify the current situation.
    And, as we have seen, they are also exploiting it and 
actually just making things worse, and while there are good 
portions of the internet and good things about these 
platforms, one of the challenges now is that we have a sense 
of urgency.
    The questions now are life and death. People can die if 
there is misinformation out there about COVID-19. People can 
die if violence is incited and people go out because of what 
they are reading on these platforms that are artificially 
targeting them. And our democracy can die.
    So the sense of urgency I have, while today what is 
beautiful is that I am hearing Democrats and Republicans all 
say we have to face this. But what I really want to say is to 
those platforms and to those tech companies, we are putting you 
on notice. This is our country and this is really important.
    And so for this moment, a lot of Americans, millions are 
asking themselves individually and collectively what can I do. 
And so I am hoping that Mark Zuckerberg and Reddit and every--
YouTube, everybody, you are holding up a mirror to yourselves.
    I am going to ask a question. First of all, I am going to 
share--Mr. Chairman, I have submitted a letter into the record 
to Mr. Zuckerberg, supported by 42 members of Congress and by 
leading civil rights organizations, including the Leadership 
Conference, Color of Change, and the Joint Center, and I hope 
that this committee will consider having Mr. Zuckerberg appear 
and that he will see this moment as a wake-up call.
    And in that line, I wanted to mention that----
    Mr. Doyle. Without objection, so ordered.
    [The information appears at the conclusion of the hearing.]
    Ms. Blunt Rochester. Thank you, sir.
    In 2018, Facebook hired an independent third party to 
conduct a civil rights audit, and we now have the results of 
the first two reports.
    And, Ms. Collins-Dexter, the second audit report 
underscores the changes in policy Facebook was to make to 
address voter suppression, and the independent auditor stated 
that Facebook would implement new policies to further address 
these issues, such as an explicit ban on ``don't vote'' ads.
    Has Facebook followed through on these new policies?
    Ms. Collins-Dexter. Thank you, Congresswoman.
    Facebook has not been consistent on following through on 
what they need to do, and I want to be clear. Facebook brought 
in Laura Murphy, formerly of the ACLU, to conduct the third-
party audit. She did an Airbnb audit which resulted in clear 
changes that still exist to this day including infrastructure--
permanent civil rights infrastructure.
    At Facebook, she appears to have been--I feel like--
blocked every time in terms of what she recommends and how we 
see that play out in terms of policy implementation, and 
though we have seen them move baby steps forward, we have no 
faith that they are actually going to go nearly as far as they 
need to.
    And to your point, the stakes are so incredibly high right 
now we have to move with urgency.
    Ms. Blunt Rochester. Thank you. And then, on the question 
of violent speech, the chairman has already pointed out the 
statement of the President--when the looting starts, the 
shooting starts--and that Mr. Zuckerberg refused to take that 
comment down.
    Ms. Collins-Dexter, please explain how this irresponsible 
policy by Facebook has the real potential to turn into 
violence.
    Ms. Collins-Dexter. We have seen people directly impacted 
by violence in real ways. We have seen hate crimes go up 
significantly. We have had a lot of threats directed at us. I 
have had threats directed at me.
    The stakes are super real, and one thing I do want to say, 
too: we talk about regulation as stifling innovation. I think 
that it is best when the government steps in and actually lays 
out a new lay of the land and makes something possible, like 
the New Deal and other moments in time. We see that innovation 
actually runs free, and I think that is what we need to be 
doing right now.
    Ms. Blunt Rochester. Thank you, Mr. Chairman, for the 
opportunity.
    Mr. Doyle. The gentlelady's time has expired.
    Ms. Blunt Rochester. Thank you. I yield.
    Mr. Doyle. I thank the gentlelady.
    The Chair now recognizes Mr. Carter for 5 minutes.
    Mr. Carter. Well, thank you, Mr. Chairman, and thank all 
of our panelists for being here today. We appreciate it. A 
great discussion.
    I want to start with you, Mr. Fried--Neil. When we talk 
about online disinformation, I think it is important for us to 
note where we are at and how this applies to the COVID-19 
pandemic right now.
    I think that is very important because a lot of the 
disinformation, particularly in regard to false health 
benefits, is very troublesome, and I wanted to ask you.
    I have got a bill. It is called the Combating Pandemic 
Scams Act, and it really requires the Federal Government to 
push out best practices and awareness and, really, requires 
them to assimilate information about some of these scams and 
put them all in a database and put them online so people can 
learn about it.
    How do you see this fitting into the larger online picture 
of disinformation?
    Mr. Fried. I think it will be great to have that database. 
There is an impediment right now, which is WHOIS data would be 
very valuable to feed into that database.
    When we see patterns of misinformation coming from 
particular websites or particular names, often--I mean, they 
are often bogus names. But there is pattern recognition here.
    If you see a certain name associated with every single 
misinformation site about COVID in a certain time span, 
cybersecurity experts can say, a-ha, here is a pattern. Next 
time we see information coming from this website, let us be 
suspect.
    That helps law enforcement. That helps cyber experts. It 
might even help algorithms, right. If we want to prioritize or 
deprioritize, it helps to know is this a site that is likely 
purveying misinformation.
    But we have lost access to a lot of WHOIS data because of 
GDPR. I would love to get that data back so we can build 
exactly those sorts of databases.
    Mr. Carter. Do you think it is important for the public to 
be brought up to date and to be kept up to date with this kind 
of information on these kinds of ongoing scams?
    Mr. Fried. Sure, and actually, there used to be--there 
still are some databases but they are growing stale because we 
are losing access to that WHOIS data.
    Mr. Carter. Let me ask you another question. On June 22nd, 
the Wall Street Journal editorial board put out a piece about 
how the recent social justice movement has begun to move 
organizations to punish people for exercising their First 
Amendment rights on social media platforms, and we have talked 
about that all throughout this discussion today and we all 
understand what a difficult place we are in and how this can be 
done but it has got to be done carefully. We all understand 
that.
    But despite the fact that some of the opinions were not 
negative in any way, they were still removed. Do you see a 
danger here? Do you see a danger of online platforms 
potentially creating a mentality or a pathway that could 
compromise people's First Amendment rights?
    Mr. Fried. As you point out, we have to be very sensitive 
when we are talking about speech. I think the best solution 
there is what a lot of both the witnesses and the members have 
talked about, which is, first, transparency. What are the 
policies, right.
    Second, what is the--what are the terms of service, right. 
What are the standards going to be. And process, right. Who has 
been taken down for what reason; how can they appeal that.
    If we can track that information--something like the 
transparency reports we often get from the platforms--this has 
probably already been gathered.
    That kind of transparency will make sure we know why 
someone is being taken down, if there has been a mistake, how 
they can fix it, and we can see patterns over time. Then maybe 
we know, you know, has there been good faith, has there not 
been good faith.
    Mr. Carter. You know, we have been talking about this 
subject for quite a while, for a long time. Even when you were 
still on the committee, we were talking about it and, you know, 
the message that I think we are all giving to the platform 
owners and to those running the platforms is you need to do it 
to yourself before we have to do it to you. And I don't know 
how we get that message across to them.
    I don't know--I don't want to have to do that. I don't want 
the Federal Government--because I am really fearful that we are 
going to suppress innovation and I don't want to see us do 
that. So that is my concern here.
    Let me ask you one final question. Combating 
disinformation is certainly important, and I believe it is 
also important to note the suppression of real information 
that doesn't fit a political narrative.
    For instance, there are some media platforms that like to 
emphasize the good things that the governor of New York has 
been doing, and he has been doing some good things.
    But they fail to mention some of the things like putting 
patients in nursing--putting COVID-19 patients infected in 
nursing homes, which is the absolute worst thing you could do.
    How do you balance between that? How do we balance on both 
sides of the aisle between that?
    Mr. Fried. Again, let us track what is happening and why, 
because that is how we analyze data and figure out what really 
is happening, and then it is back to the marketplace of ideas, 
right.
    The more platforms there are, the more avenues for 
expression, you make sure that the good data comes out and 
those will attract consumers. And, again, that is the benefit 
of keeping (c)(2), right, keeping the content moderation safe 
harbor of 230.
    So we have all those platforms, all those opinions 
expressed. Just make sure that if there is something really 
nefarious happening there is accountability for the platform.
    Mr. Doyle. Gentleman's time has expired.
    Mr. Carter. Thank you, Mr. Chairman. I yield back.
    Mr. Doyle. I thank the gentleman.
    The Chair now recognizes Mr. Walberg for 5 minutes.
    Mr. Walberg. Hold on. There we go. There we go.
    Thanks for this hearing. It is something whose time has 
come.
    Let me follow up, briefly, on Leader Walden's interest in 
having Jack Dorsey back before the committee.
    I would like to enter into the record a New York Times 
article from yesterday on how Mr. Dorsey's financial 
transactions company, Square, is withholding payments, with 
little warning, from thousands of small enterprises that 
desperately need these funds to stay afloat during the 
pandemic, and when these folks attempted to use Mr. Dorsey's 
other company, Twitter, to complain, they were blocked.
    I would like to have that entered.
    Mr. Doyle. Without objection, so ordered.
    [The information appears at the conclusion of the hearing.]
    Mr. Walberg. Thank you. Thank you.
    Mr. Fried, it almost goes without saying in relation to 
your testimony that the courts have strayed away from the 
original congressional intent behind Section 230.
    The advocacy courts that we have today strayed away from a 
lot. So this shouldn't surprise us. Can you please explain how 
judicial interpretation of Section 230 over the last two 
decades has not squared with its purpose?
    Mr. Fried. Sure, Congressman.
    So this really, as we, I think, all know now, started as a 
libel case, right, and it was about Prodigy, who was doing the 
right thing, being a Good Samaritan, missed something and was 
punished for it, right. They were, essentially, told--they 
weren't punished, but they could have been held culpable 
because of their good-faith efforts.
    That is what I think raised Congress's concerns. That is 
what led, you know, now-Senator Wyden and Congressman Cox to 
rightly point out we had a misincentive. We were actually 
discouraging people from moderating. That is what this law was 
supposed to be about.
    Unfortunately, it was written in a way that has been 
interpreted to do much more than that. It doesn't just--so the 
first problem is it doesn't just protect the Good Samaritan 
that is doing content moderation. It protects those that are 
doing no content moderation, inadequate content moderation, and 
the Bad Samaritans. That is the first problem.
    It also is going way beyond defamation, right. So now we 
know it applies to almost everything--the gig economy; sex 
trafficking, until SESTA-FOSTA; we still have a problem with 
child pornography, sexually explicit materials, the sale of 
drugs--and this (c)(1) provision is being used to apply to all 
of that. That was never anybody's intention.
    Unfortunately, the language is amenable to that 
interpretation, which is why if we fix (c)(1) I think a lot of 
our problems get a lot better. It doesn't make everything go 
away.
    But I think even just that change, and it can be modest, 
can make a lot of good and help a lot of these problems that 
every witness here is saying is a problem. It is every witness 
here. It is most witnesses at most of the panels in hearings we 
have been having, and it has all been victims, right. All of us 
are saying, the status quo isn't working; something has to 
change.
    Mr. Walberg. Yes. And seeing the nodding of heads, I think 
we are in agreement on that and need to move forward.
    Let me--Mr. Fried, you put forward in your proposal, in 
your testimony, that Congress should consider amending, and I 
think we have all talked about amending 230 to add reasonable 
duty of care in order to earn the liability protections under 
the law, and we have discussed that quite a bit today.
    But could you expand on any key guiding principles that 
ought to be there for Congress to consider in retaining and 
modifying the liability protections in Section 230? The guiding 
principles, those overriding guiding principles.
    Mr. Fried. Save (c)(2). Content moderation and safe harbor 
are important. That was the goal. Let us save that. Let us not 
try and regulate everything, right. We don't want a patchwork, 
right. That would be harmful.
    But if you right the incentives and recognize the 
difference between speech and illicit conduct, I think those 
are the guiding principles, right. So add to that transparency 
and I think you can make a modest change to 230 that fixes the 
problem, that saves all the benefit that has led to what is a 
wonderful internet.
    I mean, I don't want to be seen as a Luddite. The internet 
is great. But we can keep the great parts of the internet and 
fix the problems from incentives in Section 230.
    Mr. Walberg. Thank you. I will yield my time.
    Mr. Fried. Thank you.
    Mr. Doyle. I thank the gentleman.
    I note that Mr. Sarbanes has been waived onto the committee 
and it gives me great pleasure now to recognize him for 5 
minutes.
    Mr. Sarbanes. Thanks very much, Mr. Chairman. Can you hear 
me?
    Mr. Doyle. I can hear you fine.
    Mr. Sarbanes. Excellent. Well, thank you for the 
opportunity to participate in this hearing. Really outstanding 
panel. I appreciate all the testimony. I have been listening 
for the last two or three hours because it is a very important 
topic.
    So we know that hostile actors both foreign and domestic, 
sadly, have grown quite sophisticated in exploiting these 
platforms--these social media platforms--to sow discord, to 
widen political division, and far too often, as we have heard, 
to suppress people's vote.
    Yet, as this hearing has shown, these platforms have been 
reluctant to deploy the full suite of their proven tools to 
combat the known threat, and it just doesn't have to be this 
way, from what I understand.
    While not perfect, the platforms' response to the COVID-19 
outbreak has at least given us a rough roadmap for how they 
can proactively--and I emphasize that word--proactively 
provide users with accurate information about our democracy, 
about our elections, while keeping harmful misinformation 
designed to suppress the vote from spreading on their apps and 
on their platforms and so forth.
    For example, Facebook's efforts at addressing COVID-19 have 
included sending correct information to--(Telephonic 
interference.)--users and notifying them when they have 
interacted with false information. So there are steps that they 
can take.
    Professor Farid, simply as a matter of technological 
capacity for the moment, can Facebook and the other platforms 
direct users to verified sources of information for those users 
who are known to interact with false information about voting?
    Mr. Farid. Absolutely. I mean, these are the ultimate data-
collecting and intelligence-collecting corporations. They have 
a phenomenal amount of information of who we are, what we 
watch, what we see, and they absolutely have the technological 
and the data ability to inform us when we have interacted with 
harmful content.
    Now, it remains to be seen if that is, in fact, helpful. 
Does correcting the record actually deal with the harms that 
happened earlier on? There is some contradictory evidence in 
the literature on whether simply trying to correct the record 
will undo everything. That is just not the way human nature 
works. There is a boomerang effect.
    So my preference is to avoid the contact in the beginning. 
But if it does happen, this is absolutely a necessary step, 
but it may not be a sufficient one.
    Mr. Sarbanes. I agree with that. I think we need both. I 
think you all have given powerful testimony as to why we need 
that kind of front-end response to disinformation to try to 
protect the users from these things that can sow division and 
otherwise, in effect, distort our democracy.
    But it is clear that they have the tools to do this both on 
the front end if, as you testified earlier, they are willing to 
put the resources and attention behind it in a meaningful way, 
but also as evidenced by the way they have handled the COVID-19 
disinformation, provide good positive corrected information on 
the back end when that is necessary and called for.
    Professor Overton, this discussion, I guess, begs the 
question: if the capacity exists for Facebook and these other 
platforms to have that kind of a response in their toolkit, 
what, from your perspective, can explain their reluctance to 
do it?
    Mr. Overton. Well, thanks for your leadership in terms of 
empowering small voices in terms of small donors, number one, 
in terms of public financing. So I just wanted to note that.
    I think, again, as Dr. Farid talked about, there are these 
financial incentives that companies have to look the other way, 
to basically say, hey, we'll sell this ad that is targeted--
employment ad that is targeted to whites and away from blacks.
    We will sell this ad that is targeted at black communities 
in terms of voter suppression without a lot of scrutiny. So I 
think these financial incentives are there and that we need 
some other incentives like regulatory incentives possibly to 
address it.
    Mr. Sarbanes. Thank you.
    Ms. Collins-Dexter, in the time I have left, in addition to 
proactively notifying users when they have interacted with 
false information, do you think Facebook and other platforms 
should take additional and affirmative steps of labeling or 
removing posts when that platform is being used for voter 
suppression and disinformation?
    Ms. Collins-Dexter. Absolutely. We find in our organizing 
work that when disinformation gets out, even if you correct it 
or put a label over it, people retain the lie more than the 
truth.
    So the content, I think, should come down. I think we also 
need to look at who are the verified users and how they may be 
pushers of disinformation and what are the consequences for 
that.
    Mr. Sarbanes. Thank you very much.
    Mr. Doyle. The gentleman's time has expired. I thank the 
gentleman.
    I want to thank my co-chair, Jan Schakowsky, for her good 
work, and our ranking member, Bob Latta, and all the members.
    And I especially want to thank this outstanding panel. We 
have really enjoyed your testimony and the way you have 
responded to our questions.
    I want to remind all Members that pursuant to our committee 
rules that they have ten business days to submit additional 
questions for the record to be answered by the witnesses who 
have appeared, and I ask each witness to respond promptly to 
any such questions that you may receive.
    Before we adjourn, I would like to request unanimous 
consent to enter the following documents into the record:
    [The information appears at the conclusion of the hearing.]
    First, a letter from the National Hispanic Media Coalition; 
a letter from the Coalition for a Safer Web; a letter from CCIA 
and NetChoice; a letter from Zeve Sanderson, executive director 
of NYU Center for Social Media and Politics; a letter from 
Public Knowledge; a statement from the Leadership Conference on 
Civil and Human Rights; an essay by Mr. Spencer Overton; a Wall 
Street Journal article, Facebook Executives Shut Down Efforts 
to Make the Site Less Divisive; a letter to Facebook on civil rights 
issues from Representative Lisa Blunt Rochester and others; a 
letter from the Lithuanian American Community; a letter from 
the Central and Eastern European Coalition; research from 
Debunk EU on disinformation; a letter from the Open Technology 
Institute; a letter from Consumer Reports; and last but not 
least, an article from the New York Times entitled, Square, 
Jack Dorsey's Pay Service, Is Withholding Money Merchants Say 
They Need.
    So, without objection, so ordered.
    Mr. Doyle. And at this time, the committee is adjourned.
    [Whereupon, at 2:52 p.m., the committee was adjourned.]

           Prepared Statement of Hon. Cathy McMorris Rodgers

    Good morning, and welcome to our witnesses at our virtual 
joint subcommittee hearing focused on disinformation online.
    We are living in unprecedented times. Due to the ongoing 
COVID-19 pandemic, millions of Americans have been forced to 
stay home and as a result we are spending more time online.
    There are few values more central to the foundation of 
American democracy than the freedom of speech.
    It's what sets America apart from nearly every other nation 
on Earth.
    And yet, this bedrock principle is increasingly under 
attack.
    Whether online, on campus, or in the media, ideas are now 
more often ``canceled'' than debated.
    Those who find ideas or concepts objectionable would rather 
shut down debate and speech rather than offer a compelling 
counter argument.
    Whether or not the First Amendment applies directly, our 
institutions should be striving to uphold its ideals.
    Censorship over political differences, whether done by a 
government, a university, a newspaper, a tech company, or other 
medium is un-American.
    Ironically, this movement against speech largely began in 
our universities.
    By shouting down or forcing administrators to cancel 
conservative speakers, many students have learned it's OK to 
shut down dissent to protect themselves from ``wrong'' or 
``triggering thought.''
    Unsurprisingly, this method of speech suppression has found 
its way into the real world.
    Just recently, employees of the New York Times forced out 
an editor for running an Op-Ed by a United States Senator that 
they disagreed with.
    Where was this outrage when the Times ran an Op-Ed from the 
Taliban? Or Putin? Or one from 2017 praising Chairman Mao?
    This is just one example in a long, disturbing trend 
against free speech, and specifically against conservatives.
    I have long said the answer to bad speech--or any speech 
you disagree with--should be more speech, not less.
    If you disagree with an idea, don't try to suppress the 
speaker. Engage with ideas of your own.
    Our country was built on great debates over ideas and 
ideologies and we are better for it.
    While there's been too much censorship of political 
discourse, more needs to be done to combat actual harmful 
misinformation online.
    From the very outset of this pandemic, the world's 
collective response was hamstrung by the lack of transparency 
and outright disinformation spread by the Chinese Communist 
Party.
    They withheld crucial information about the virus's spread 
and used propaganda and disinformation about its origins.
    Because of this, the Chinese Communist Party has done 
extreme harm to the citizens of the world.
    It has resulted in hundreds of thousands of deaths and 
trillions in economic losses.
    China must be held accountable.
    That starts by calling China out on its attempts to shift 
blame and hide facts about the start of the outbreak, 
especially when they use our social media platforms to do so.
    While the First Amendment does not apply to private 
companies, including social media companies, our Internet platforms 
should still strive to protect free speech, human and civil 
rights, and be transparent in how they operate.
    Just last month Leader Walden and I pressed TikTok about 
their ties to the CCP and about their compliance with the 
Children's Online Privacy Protection Act.
    They must protect our children's information and ensure no 
American's information is shared with our adversaries.
    We also pressed Zoom after learning they had suppressed the 
free speech of U.S.-based Chinese activists even though they 
were on U.S. soil.
    Although we expect answers to our questions later this week 
. . . we are pleased to see Zoom respond publicly to our letter 
stating they will no longer allow requests from the Chinese 
government to impact anyone outside of mainland China.
    While some social media companies do a better job than 
others in protecting free expression, this should be a reminder 
to all: we are watching.
    That includes Twitter, which appears to be targeting 
President Trump, while turning a blind eye to many regimes 
pushing disinformation and propaganda.
    These companies have a lot of power and an important role 
to play in our country's political discourse. To build trust, 
platforms must enforce terms of services and apply content 
policies uniformly.
    The path of bias towards one ideology over another will 
further sow distrust and division in our already divided 
country.
    We expect more from you and the American people deserve 
better.
    Society benefits from an open and free dialogue; we must 
cherish that principle, not discard it.
    Thank you and I yield back.

             Prepared Statement of Hon. Frank Pallone, Jr.

    This year has been a test of our country's resilience and 
promise. We are facing a devastating health pandemic that has 
resulted in a severe economic downturn. At the same time, we 
must as a nation confront the staggering racial inequality in 
this country, sown by centuries of racism. This should be a 
time of national mourning, unity, healing, and action. Instead, 
online disinformation, among other things, is being used to 
further divide us.
    In March, the President declared COVID-19 a national 
emergency. States like New York and New Jersey were among the 
hardest hit and early on took strict mitigation measures.
    Yet, if you were on social media, you may have thought the 
severity of the virus was simply being made up, leading to 
confusion over the need for social distancing, masks, or other 
mitigation measures. Or that the virus could be cured by bleach 
or hydroxychloroquine. Some of this disinformation was being 
promoted by President Trump himself. Now some states, such as 
Texas, Florida, and Arizona, are seeing a surge in cases while 
disinformation and misinformation flourishes online.
    And then last month, our country sat horrified as it 
watched the murder of George Floyd over a painful eight minutes 
and forty-six seconds. This needless killing, following so many 
others over the years, has forced our country to reckon with a 
long history of racial inequity that our fellow Black 
Americans deal with every day.
    People of all races have taken to the streets to show 
solidarity and raise awareness of racial inequity and to call 
for action. But some people have taken to their social media 
accounts to spread disinformation with outlandish claims such 
as George Floyd's murder was staged or that the anti-racism 
protests are using paid protesters. Even the President has 
amplified false claims about protesters on social media. Rather 
than using social media for social good, disinformation is 
being used to sow social unrest.
    President Trump is politicizing our country's health and 
social crises to cause division among us and inflame racial 
tensions. The Administration's woefully inadequate response to 
the COVID-19 pandemic has been made worse by President Trump 
spreading disinformation. And instead of healing and uniting 
our country during these troubled times, the President is 
fueling culture wars.
    Over the past few years, social media platforms have simply 
not done enough to eliminate disinformation. As a result, the 
situation has gotten worse. Facebook refuses to take action 
against Trump's spread of misinformation. Twitter has taken 
baby steps, yet equal enforcement of its policies has drawn 
acts of political retribution from the Administration.
    We can and should expect more from social media platforms 
because, unfortunately, we all know the President is not going 
to change. In fact, his actions are likely to become even more 
egregious in the upcoming months and that's why the social 
media platforms must do more. I will work with my colleagues to 
make platforms more accountable to the people.
    With that, I yield one minute to Rep. Butterfield and the 
remainder of my time to Rep. Blunt Rochester.

                 Prepared Statement of Hon. Greg Walden

    Thank you, Mr. Chairman. I welcome and thank all our 
witnesses for joining us today to discuss online 
misinformation.
    The internet is both a tool for good and evil. It allows 
Americans to work and learn from home; gives us unlimited 
access to information; helps connect us to our loved ones; and 
strengthens our economy. The United States is a global leader 
in innovation and home to the most advanced technology 
companies in the world.
    The internet has also empowered bad actors to promote 
online scams, post harmful and offensive content, and globally 
disseminate disinformation for free. Often, social media posts 
have become a cancer on civility, literally destroying 
reputations and lives with one click. It's revolting to see 
what some people post online--something I can tell you from 
personal experience in this public position.
    But we all know, it's hard to regulate speech, especially 
in a democracy and with the protections we're afforded under 
the First Amendment. We also know there are boundaries and 
limits. But over the course of our history, we've never had so 
much power to regulate speech concentrated in so few in the 
private sector, and with the broad immunity protection they 
have under Section 230.
    As we battle COVID-19, access to factual information is 
more important now than ever. However, we still see 
misinformation spread on platforms. I know the Trump 
Administration has aggressively gone after bad actors, but as 
soon as you take down one site or profile, another pops up. 
It's a global battle.
    We are in the midst of a national fight for equality and 
justice. At the same time, we see bigots post unacceptable, 
racist, and offensive comments online. These comments have no 
place in our society.
    Congress expects internet companies to monitor their 
platforms and take down false, misleading, and harmful content. 
That's why Congress enacted section 230 of the Communications 
Decency Act, which provides liability protection to companies 
that take down content on their platforms.
    Last fall, this committee held a hearing to re-examine section 
230. I said then and will say again: many concerns can be 
addressed if these companies simply do what they say they will 
do: enforce their terms of service. However, recent actions 
taken by these companies trouble me.
    Twitter recently enacted new policies that seemingly target 
President Trump; meanwhile, tweets that actually advocate 
violence are not flagged. Questions remain about who makes 
these decisions.
    Google took action against the Federalist for allegedly 
violating Google's ad policy on comment sections, not for the 
content of its articles as NBC initially claimed. Significant 
questions persist as to whether Google followed their 
procedures and notified the Federalist directly. Moreover, why 
was this publication targeted and not others?
    I think I can speak for everyone on this committee when I 
say that we do not support harmful or racist rhetoric or 
disinformation online. We expect these companies to do their 
best to flag or remove offensive and misleading content. But we 
also expect these immensely powerful platforms to follow their 
own processes for notifying users when they have potentially 
violated those policies, and to enforce policies equitably--but 
that does not appear to have happened of late.
    That is why I have prepared legislation that will mandate 
more transparency from online platforms about their content 
practices. This would require these companies to file reports 
with the FTC so it is clear whether they are complying with 
their own terms of service, and to bring transparency to their 
appeal process.
    I hope this can be bipartisan legislation. This is a 
straightforward bill that only impacts companies with revenues 
over $1 billion--so I hardly think it will crash the Internet.
    I realize that given a mix of human review and artificial 
intelligence, these platforms are not always going to get it 
right--but they absolutely must be more transparent.
    Mr. Chairman, we politely asked Google to testify today. 
The response we got in return said it all. Their presence 
before this committee is LONG overdue. If they won't come 
voluntarily, perhaps it is time, Mr. Chairman, we compel their 
attendance.
    The power to regulate speech in America is cloaked more and 
more in secret algorithms and centralized in the hands of a 
powerful few in the private sector. We've never needed 
transparency and accountability more. Freedom-loving Americans 
have far too much at stake for us to let internet companies go 
unchecked.
    Thank you and I yield back.

                 Prepared Statement of Hon. Anna Eshoo

    I thank Chairman Doyle and Chairwoman Schakowsky for 
holding a joint hearing on this highly important topic.
    It's important we start with common definitions. 
Misinformation is a falsehood while disinformation is a 
falsehood deliberately spread for deceit. Disinformation is 
indeed dividing our nation as the title of this hearing 
suggests, however, I respectfully suggest that our country is 
not in a crisis but is experiencing several crises that online 
disinformation is exacerbating. Similarly, there isn't a 
singular `silver bullet' to solve these problems.
    The murder of George Floyd has shaken the conscience of our 
entire country, and it has laid bare the racial disparities in 
policing that Black Americans face every day but for too long 
have been ignored. Disinformation painted protests as extremely 
violent, even when they were peaceful, leading to a further 
polarization of Americans' views on police reform. At the same 
time, over 100,000 Americans have lost their lives to COVID-19, 
and the spread of lies about ingesting bleach and similar 
disinformation threaten even more lives.
    When it comes to the Census, the Republican National 
Committee has repeatedly attempted to deceive Americans through 
mailers, text messages, and ads purchased on social media 
during the decennial count. These communications undermine our 
constitutionally mandated data collection effort, which 
determines Congressional representation and the distribution of 
critical resources to communities nationwide. This is why I 
introduced H.R. 6215, the Honest Census Communications Act, 
which outlaws communicating false or intentionally deceptive 
information about the census.
    In our elections, disinformation has extreme consequences. 
Spreading lies about a candidate, designing messages to 
suppress the vote of specific communities, or communicating 
falsehoods about voter registration or electoral procedures is, 
in my view, an act of assault on our democracy. Because 
political speech rightly has the highest level of First 
Amendment protections, I believe we need to prohibit the 
marketing tactic of political ad microtargeting, which 
fractures our open democratic debate into millions of private, 
unchecked silos allowing for the spread of disinformation, fake 
news, false promises, lies, and polarizing exaggerations, 
without real-time public scrutiny. Microtargeting is 
fundamentally a pernicious abuse of the vast data companies 
collect about users, enabling voter suppression and election 
disinformation from foreign governments.
    For this reason, I introduced H.R. 7014, the Banning 
Microtargeted Political Ads Act, to prohibit online platforms 
(e.g., social media platforms, ad networks, and streaming 
services) from targeting ads based on demographic and 
behavioral data of users. The bill does allow broad location 
targeting and targeting ads to individuals who expressly and 
specifically opt in to receive them. My bill is supported by 20 
of our country's leading elections experts, privacy scholars, 
and civil society organizations.
    Finally, the elephant in the room is whether Congress 
should amend Section 230 of the Communications Decency Act. I 
believe we should carefully consider amendments that work with 
a surgical scalpel, rather than a jackhammer. We must proceed 
with extreme caution because of the critical role Section 230 
plays in enabling so much that is positive about the internet. 
At the same time, it does seem to me that financial liability 
is one of the few ways remaining to get platforms to take their 
responsibilities for removing illegal and harmful content 
seriously.
    I look forward to a productive hearing.
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]    

                                 [all]