[House Hearing, 115 Congress]
[From the U.S. Government Publishing Office]




 
                 GAME CHANGERS: ARTIFICIAL INTELLIGENCE


                 PART III, ARTIFICIAL INTELLIGENCE AND


                             PUBLIC POLICY

=======================================================================

                                HEARING

                               BEFORE THE

                            SUBCOMMITTEE ON
                         INFORMATION TECHNOLOGY

                                 OF THE

                         COMMITTEE ON OVERSIGHT
                         AND GOVERNMENT REFORM
                        HOUSE OF REPRESENTATIVES

                     ONE HUNDRED FIFTEENTH CONGRESS

                             SECOND SESSION

                               __________

                             APRIL 18, 2018

                               __________

                           Serial No. 115-79

                               __________

Printed for the use of the Committee on Oversight and Government Reform





[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]





         Available via the World Wide Web: http://www.fdsys.gov
                       http://oversight.house.gov
                       
                       
                       
                        _________ 

              U.S. GOVERNMENT PUBLISHING OFFICE

 31-118 PDF          WASHINGTON : 2018

              Committee on Oversight and Government Reform

                  Trey Gowdy, South Carolina, Chairman
John J. Duncan, Jr., Tennessee       Elijah E. Cummings, Maryland, 
Darrell E. Issa, California              Ranking Minority Member
Jim Jordan, Ohio                     Carolyn B. Maloney, New York
Mark Sanford, South Carolina         Eleanor Holmes Norton, District of 
Justin Amash, Michigan                   Columbia
Paul A. Gosar, Arizona               Wm. Lacy Clay, Missouri
Scott DesJarlais, Tennessee          Stephen F. Lynch, Massachusetts
Virginia Foxx, North Carolina        Jim Cooper, Tennessee
Thomas Massie, Kentucky              Gerald E. Connolly, Virginia
Mark Meadows, North Carolina         Robin L. Kelly, Illinois
Ron DeSantis, Florida                Brenda L. Lawrence, Michigan
Dennis A. Ross, Florida              Bonnie Watson Coleman, New Jersey
Mark Walker, North Carolina          Raja Krishnamoorthi, Illinois
Rod Blum, Iowa                       Jamie Raskin, Maryland
Jody B. Hice, Georgia                Jimmy Gomez, California
Steve Russell, Oklahoma              Peter Welch, Vermont
Glenn Grothman, Wisconsin            Matt Cartwright, Pennsylvania
Will Hurd, Texas                     Mark DeSaulnier, California
Gary J. Palmer, Alabama              Stacey E. Plaskett, Virgin Islands
James Comer, Kentucky                John P. Sarbanes, Maryland
Paul Mitchell, Michigan
Greg Gianforte, Montana

                     Sheria Clarke, Staff Director
                    William McKenna, General Counsel
     Troy Stock, Information Technology Subcommittee Staff Director
             Sarah Moxley, Senior Professional Staff Member
                    Sharon Casey, Deputy Chief Clerk
                 David Rapallo, Minority Staff Director
                                 ------                                

                 Subcommittee on Information Technology

                       Will Hurd, Texas, Chairman
Paul Mitchell, Michigan, Vice Chair  Robin L. Kelly, Illinois, Ranking 
Darrell E. Issa, California              Minority Member
Justin Amash, Michigan               Jamie Raskin, Maryland
Steve Russell, Oklahoma              Stephen F. Lynch, Massachusetts
Greg Gianforte, Montana              Gerald E. Connolly, Virginia
                                     Raja Krishnamoorthi, Illinois
                                     
                                     
                            C O N T E N T S

                              ----------                              
                                                                   Page
Hearing held on April 18, 2018...................................     1

                               WITNESSES

Mr. Gary Shapiro, President, Consumer Technology Association
    Oral Statement...............................................     4
    Written Statement............................................     6
Mr. Jack Clark, Director, OpenAI
    Oral Statement...............................................    12
    Written Statement............................................    14
Ms. Terah Lyons, Executive Director, Partnership on AI
    Oral Statement...............................................    23
    Written Statement............................................    25
Dr. Ben Buchanan, Postdoctoral Fellow, Cyber Security Project, 
  Science, Technology, and Public Policy Program, Belfer Center 
  for Science and International Affairs, Harvard Kennedy School
    Oral Statement...............................................    37
    Written Statement............................................    39


      GAME CHANGERS: ARTIFICIAL INTELLIGENCE PART III, ARTIFICIAL 
                     INTELLIGENCE AND PUBLIC POLICY

                              ----------                              


                       Wednesday, April 18, 2018

                  House of Representatives,
            Subcommittee on Information Technology,
              Committee on Oversight and Government Reform,
                                                   Washington, D.C.
    The subcommittee met, pursuant to call, at 2:02 p.m., in 
Room 2154, Rayburn House Office Building, Hon. Will Hurd 
[chairman of the subcommittee] presiding.
    Present: Representatives Hurd, Issa, Amash, Kelly, 
Connolly, and Krishnamoorthi.
    Mr. Hurd. The Subcommittee on Information Technology will 
come to order. And without objection, the chair is authorized 
to declare a recess at any time.
    Good afternoon. I welcome y'all to our final hearing in our 
series on artificial intelligence. I've learned quite a bit 
from our previous two hearings, and I expect today's hearing is 
going to be equally informative.
    This afternoon, we are going to discuss the appropriate 
roles for the public and private sectors as AI, artificial 
intelligence, matures.
    AI presents a wealth of opportunities to impact our world 
in a positive way. For those who are vision impaired, there is 
AI that describes the physical world around them to help them 
navigate, making them more independent. AI helps oncologists 
target cancer treatment more quickly. AI has the potential to 
improve government systems so that people spend less time 
trying to fix problems, like Social Security cards or in line 
at Customs.
    As with anything that brings tremendous potential for 
rewards, there are great challenges ahead as well. AI can 
create video clips of people saying things they did not say and 
would never support. AI tools used in cyber attacks can increase 
the magnitude and reach of these attacks to disastrous levels.
    In addition, both our allies and potential adversaries are 
pursuing AI dominance. It is not a foregone conclusion that the 
U.S. will lead in this technology. We need to take active steps 
to ensure America continues to be the world leader in AI.
    On the home front, bias, privacy, ethics, and the future of 
work are all challenges that are a part of AI. So given the 
great possibilities and equally great potential hardships, what 
do we do? What is the role of government in stewarding this 
great challenge to benefit all? What should the private sector 
be doing to enhance the opportunities and minimize the risks?
    While I do not expect anyone to have all these answers 
today, I think our panel of witnesses will have suggestions for 
the way forward when it comes to AI.
    While this is the final hearing in our AI series, this work 
does not end today. And our subcommittee will be releasing a 
summary of what we have learned from the series in the coming 
weeks outlining steps we believe should be taken in order to 
help drive AI forward in a way that benefits consumers, the 
government, industry, and most importantly, our citizens.
    I thank the witnesses for being here today and look forward 
to learning from y'all so that we can all benefit from the 
revolutionary opportunities AI offers. And as always, I'm 
honored to be exploring these issues in a bipartisan fashion 
with my friend, the ranking member, the woman, the myth, the 
legend, Robin Kelly from the great State of Illinois.
    Ms. Kelly. Thank you so much.
    Thank you, Chairman Hurd, and welcome to all of our 
witnesses here today. This is the third hearing, as you've 
heard, that our subcommittee has held on the important topic of 
artificial intelligence, or AI. Our two prior hearings have 
shown how critical the collection of data is to the development 
and expansion of AI. However, AI's reliance on the use of 
personal information raises legitimate concerns about personal 
privacy.
    Smart devices of all kinds are collecting your data. Many 
of us have to look no further than the smart watch on our 
wrists to see this evidence in motion. The arms race to produce 
individual predictive results is only increasing with smart 
assistants like Alexa and Siri in your pocket and listening at 
home for your next command. Sophisticated algorithms help these 
machines refine their suggestions and place the most relevant 
information in front of consumers.
    These systems, however, rely upon vast amounts of data to 
produce precise results. Privacy concerns for tens of millions 
of Facebook users were triggered when the public learned that 
Cambridge Analytica improperly obtained and potentially used 
their personal data to promote the candidacy of Donald Trump.
    Whether Congress passes new laws or industry adopts new 
practices, clearly, consumers need and deserve new protections. 
To help us understand what some of these protections may look 
like, Dr. Ben Buchanan from Harvard University's Belfer Center 
for Science and International Affairs, is here with us today. 
Dr. Buchanan has written extensively on the different types of 
safeguards that may be deployed on AI systems to protect the 
personal data of consumers.
    Advancements in AI also pose new challenges to cybersecurity 
due to increased risk of data breaches by sophisticated 
hackers. Since 2013, we have witnessed a steady increase in the 
number of devastating cyber attacks against both the private 
and the public sectors. This past September, Equifax announced 
that hackers were able to exploit a vulnerability in their 
systems, and as a result, gained access to the personal data of 
over 140 million Americans.
    A recent report coauthored by OpenAI, represented by Mr. 
Clark today, expressly warns about the increased cyber risks 
the country faces due to AI's advancements. According to the 
report, continuing AI advancements are likely to result in 
cyber attacks that are, quote, ``more effective, more finely 
targeted, more difficult to attribute, and more likely to 
exploit vulnerabilities in AI systems.''
    As AI advances, another critical concern is its potential 
impact on employment. Last year, the McKinsey Global Institute 
released the findings from a study on the potential impact of 
AI-driven automation on jobs. According to the report, and I 
quote, ``Up to one-third of the workforce in the United States 
and Germany may need to find work in new occupations.''
    Other studies indicate that the impact on U.S. workers may 
even be higher. In 2013, Oxford University reported on a study 
that found that due to AI automation, I quote, ``about 47 
percent of total U.S. employment is at risk.''
    To ensure that AI's economic benefits are more broadly 
shared by U.S. workers, Congress should begin to examine and 
develop policies and legislation that would assist workers 
whose jobs may be adversely affected by AI-driven automation.
    As AI continues to advance, I'll be focused on how 
the private sector, Congress, and regulators can work to ensure 
that consumers' personal privacy is adequately protected and 
that more is being done to account for the technology's impact 
on cybersecurity and our economy.
    I want to thank our witnesses again for testifying today, 
and I look forward to hearing your thoughts on how we can 
achieve this goal.
    And again, thank you, Mr. Chairman.
    Mr. Hurd. I appreciate the ranking member.
    And now, it's a pleasure to introduce our witnesses. Our 
first guest is known to everyone who knows anything about 
technology, Mr. Gary Shapiro, president of the Consumer 
Technology Association. Thanks for being here.
    Mr. Jack Clark is here as well, director at OpenAI.
    We have Ms. Terah Lyons, the executive director at 
Partnership on AI.
    And last but not least, Dr. Ben Buchanan, postdoctoral 
fellow at Harvard Kennedy School's Belfer Center for Science 
and International Affairs. Say that three times fast.
    I appreciate all y'all's written statements. It really was 
helpful in understanding this issue.
    And pursuant to committee rules, all witnesses will be 
sworn in before you testify, so please stand and raise your 
right hand.
    Do you solemnly swear or affirm that you're about to tell 
the truth, the whole truth, and nothing but the truth so help 
you God?
    Thank you. Please be seated.
    Please let the record reflect that all witnesses answered 
in the affirmative.
    And now, in order to allow for time for discussion, please 
limit your testimony to 5 minutes. Your entire written 
statement will be made part of the record.
    As a reminder, the clock in front of you shows your 
remaining time; the light turns yellow when you have 30 seconds 
left; and when it's flashing red, that means your time is up. 
Also, please remember to push the talk button to turn your 
microphone on and off.
    And now, it's a pleasure to recognize Mr. Shapiro for your 
opening remarks.

                       WITNESS STATEMENTS

                   STATEMENT OF GARY SHAPIRO

    Mr. Shapiro. I'm Gary Shapiro, president and CEO of the 
Consumer Technology Association, and I want to thank you, 
Chairman Hurd and Ranking Member Kelly, for inviting me to 
testify on this very important issue, artificial intelligence.
    Our association represents 2,200 American companies in the 
consumer technology industry. We also own and produce the 
coolest, greatest, funnest, most important, and largest 
business and innovation event in the world, the CES, held each 
January in Las Vegas.
    Our members develop products and services that create jobs. 
They grow the economy and they improve lives. And many of the 
most exciting products coming to market today are AI products.
    CTA and our member companies want to work with you to 
figure out how we can ensure that the U.S. retains its position 
as the global leader in AI, while also proactively addressing 
the pressing challenges that you've already raised today.
    Last month, we released a report on the current and future 
prospects of AI, and we found that AI will change the future of 
everything, from healthcare and transportation to entertainment 
and security. But it will also raise questions about jobs, bias, 
and cybersecurity. We hope our research, along with the efforts 
of our member-driven artificial intelligence working group, 
will lay the groundwork for policies that will foster AI 
development and address the challenges AI may create.
    First, consider how AI is creating efficiency and improving 
lives. The U.S. will spend $3.5 trillion on healthcare this 
year. The Federal Government shoulders over 28 percent of that 
cost. By 2047, the CBO estimates Federal spending for people 
age 65 and older who receive Social Security, Medicare, and 
Medicaid benefits could account for almost half of all Federal 
spending.
    AI can be part of the solution. Each patient generates 
millions of data points every day, but most doctors' offices 
and hospitals are not now maximizing the value of that data. AI 
can quickly sift through and identify aspects of that data that 
can save lives. For example, Qualcomm's AlertWatch AI system, 
which provides real-time analysis of patient data during 
surgery, significantly lowers patients' rates of heart attacks 
and kidney failure, and it reduces average hospital stays by a 
full day.
    Cybersecurity is another area where AI can make a big 
impact, according to our study. AI technologies can interpret 
vast quantities of data to prepare better for and protect 
against cybersecurity threats. In fact, our report found that 
detecting and deterring security intrusions was a top area 
where companies are today using AI. AI should contribute over 
$15 trillion to the global economy by 2030, according to PwC.
    Both the present and prior administrations have recognized 
the importance of prioritizing AI. But AI is also capturing the 
attention of other countries. Last year, China laid out a plan 
to create a $150 billion world-leading AI industry by 2030. 
Earlier this year, China announced a $2 billion AI research 
park in Beijing. France just unveiled a high-profile plan to 
foster AI development in France and across the European Union. 
I was there last week, and it was the talk of France.
    Today, the U.S. is the leader in AI, both in terms of 
research and commercialization. But as you said, Mr. Chairman, 
our position is not guaranteed. We need to stay several steps 
ahead. Leadership from the private sector, supported by a 
qualified talent pool and light touch regulation, is a winning 
formula for innovation in America. We need government to think 
strategically about creating a regulatory environment that 
encourages innovation in AI to thrive, while also addressing 
the disruptions we've been talking about.
    Above all, as we noted in our AI report, government 
policies around AI need to be both flexible and adaptive. 
Industry and government also need to collaborate to address the 
impact AI is having and will have on our workforce. The truth 
is most jobs will be improved by AI, but many new jobs will be 
created and, of course, some will be lost.
    We need to ensure that our workforce is prepared for these 
jobs of the future, and that means helping people whose jobs 
are displaced gain the skills that they need to succeed in new 
ones.
    CTA's AI working group is helping to address these 
workforce challenges. We just hired our first vice president of 
U.S. jobs; and on Monday, we launched CTA's 21st Century 
Workforce Council to bring together leaders in our industry to 
address the significant skills gap in our workforce we face 
today.
    In addition to closing the skills gap, we need to use the 
skills of every American to succeed. CTA is committed to 
strengthening the diversity of the tech workforce. Full 
representation in the workforce will go a long way toward making sure 
that tech products and services consider the needs and 
viewpoints of diverse users.
    We as an industry also need to address data security, and 
we welcome the opportunity to continue to work with you on 
that and other areas. We believe that the trade 
agenda, the IP agenda, and immigration all tie into our success 
as well in AI.
    There's no one policy decision or government action that 
will guarantee our leadership in AI, but we are confident we 
can work together on policies that will put us in the best 
possible position to lead the world in AI and deliver the 
innovative technologies that will change our lives for the 
better.
    [Prepared statement of Mr. Shapiro follows:]
    
    
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]    
    
        
    Mr. Hurd. Thank you, Mr. Shapiro.
    Mr. Clark, you're now recognized for 5 minutes.

                    STATEMENT OF JACK CLARK

    Mr. Clark. Chairman Hurd, Ranking Member Kelly, and other 
members of the subcommittee, thank you for having this hearing.
    I'm Jack Clark, the strategy and communications director 
for OpenAI. We're an organization dedicated to ensuring that 
powerful artificial intelligence systems benefit all of 
humanity. We're based in San Francisco, California, and we 
conduct fundamental technical research at the frontiers of AI, as 
well as participating in the global policy discussion. I also 
help maintain the AI Index, an AI measurement and forecasting 
initiative which is linked to Stanford University.
    I'm here to talk about how government can support AI in 
America, and I'll focus on some key areas. My key areas are 
ethics, workforce issues, and measurement. I believe these are 
all areas where investment and action by government will help 
to increase this country's chances of benefiting from this 
transformative technology.
    First, ethics. We must develop a broad set of ethical norms 
governing the use of this technology, as I believe existing 
regulatory tools are and will be insufficient. The technology 
is simply developing far too quickly. As I and my colleagues 
recently wrote in a report, The Malicious Use of Artificial 
Intelligence: Forecasting, Prevention, and Mitigation, this 
unprecedentedly rampant proliferation of powerful technological 
capabilities brings about unique threats or worsens existing 
ones. And because of the nature of the technology, traditional 
arms control regimes or other policy tools are insufficient, so 
we need to think creatively here.
    So how can we control this technology without stifling 
innovation? I think we need to work on norms. And what I mean 
by norms is developing a global sense of what is right and 
wrong to do with this technology. So it's not just about 
working here in America; it's about taking a leadership 
position on norm creation so that we can also influence how AI 
is developed worldwide. And that's something that I think the 
United States is almost uniquely placed to do well.
    This could include new norms around publication, as well as 
norms around safety research, or having researchers evaluate 
technologies for their downsides as well as upsides and having 
that be a part of the public discussion.
    I'm confident this will work. We've already seen similar 
work being undertaken by the AI community to deal with our own 
issues of diversity and bias. Here, norms have become a product 
of everyone, and by having an inclusive conversation that's 
involved a wide set of stakeholders, we've been able to come to 
solutions that don't require specific regulations but can 
create norms that condition the way that the innovation occurs.
    So a question I have for you is, you know, what do you want 
to know about, what are you concerned about, and what 
conversations can we have to make sure that we are responsive 
to those concerns as a community?
    Second, workforce. The U.S. is currently the leader in AI 
technology, but as my colleague Gary said, that's not exactly 
guaranteed. There's a lot of work that we need to do to ensure 
that that leadership remains in place, and that ranges from 
investment in basic research to also supporting the community 
of global individuals that develop AI. I mean, part of the 
reason we're sitting here today is because of innovations that 
occurred maybe 10 to 15 years ago as a consequence of people I 
could count on these two hands. So even losing a single 
individual is a deep and real problem, and we should do our 
best to avoid it.
    Third, measurement. Now, measurement may not sound hugely 
flashy or exciting, but I think it actually has a lot of value 
and is an area where government can have an almost unique 
enabling role in helping innovation. You know, the reason why 
we're here is we want to understand AI and its impact on 
society, and while hearings like this are very, very useful, we 
need something larger. We need a kind of measurement moonshot 
so that we can understand where the technology is developing, 
you know, where it's going in the future, where new threats and 
opportunities are going to come from so that we can have, not 
only informed policymakers, but also a more informed citizenry. 
And I think that having citizens feel that the government knows 
what's going on with AI and is taking a leadership role in 
measuring AI's progress and articulating that back to them can 
make it feel like a collective across-America effort to develop 
this technology responsibly and benefit from it.
    Some specific examples already abound for ways this works. 
You know, DARPA wanted to measure how good self-driving cars 
were and held a number of competitions, which enabled the self-
driving car industry. Two years ago, it held similar 
competitions for cyber defense and offense, which has given us 
a better sense of what this technology means there. And even 
more recently, DIUx released their xView satellite dataset and 
competition, which is driving innovation in AI research in that 
area critical to national security and doing it in a way that's 
inclusive of as many smart people as possible.
    So thank you very much. I look forward to your questions.
    [Prepared statement of Mr. Clark follows:]
    
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]    
  
    
    Mr. Hurd. Thank you, Mr. Clark. Well, you're in the right 
place. Measurement may not be flashy, but we talk about IT 
procurement as well, which isn't sexy either. So you're in 
good company.
    Ms. Lyons, you're now recognized for 5 minutes.

                    STATEMENT OF TERAH LYONS

    Ms. Lyons. Good afternoon. Chairman Hurd, Ranking Member 
Kelly, thank you for the opportunity to discuss a very 
important set of issues.
    I am the executive director of the Partnership on 
Artificial Intelligence to Benefit People and Society, a 
501(c)(3) nonprofit organization established to study and 
formulate best practices on AI technologies, to advance the 
public's understanding on AI, and to serve as an open platform 
for discussion and engagement about AI and its influences on 
people and society.
    The Partnership is an unprecedented multistakeholder 
organization founded by some of the largest technology 
companies, in conjunction with a diverse set of cross-sector 
organizations spanning civil society and the not-for-profit 
community and academia. Since its establishment, the 
Partnership has grown to more than 50 partner organizations 
spanning three continents.
    We believe that the formation of the Partnership could 
not have come at a more crucial time. As governments everywhere 
grapple with the implications of technology on citizens' rights 
and governance and as the research community increasingly 
emphasizes the need for multidisciplinary work focused on, not 
just the question of how we build technologies, but in some 
cases, whether to and also in what ways, the Partnership seeks 
to be a platform for collective reflection, and importantly, 
collective action.
    My remarks this afternoon will focus, first, on some of the 
potential opportunities and challenges presented by artificial 
intelligence, and second, on how the Partnership hopes to 
engage with policymakers with industry, the research community, 
and other stakeholders. Artificial intelligence technologies 
present a significant opportunity for the United States and for 
the world to address some of humanity's most pressing and 
large-scale challenges, to generate economic growth and 
prosperity, and to raise the quality of human life everywhere.
    While the promise of AI applied to some domains is still 
distant, AI is already being used to solve important 
challenges. In healthcare, already mentioned, AI systems are 
increasingly able to recognize patterns in the medical field, 
helping human experts interpret scans and detect cancers. 
These methods will only become more effective as large datasets 
become more widely available. And beyond healthcare, AI has 
important applications in environmental conservation, 
education, economic inclusion, accessibility, and mobility, 
among other areas.
    As AI continues to develop, researchers and practitioners 
must ensure that AI-enabled systems are safe, that they can 
work effectively with people and benefit all parts of society, 
and that their operation will remain consistent and aligned 
with human values and aspirations. World-changing technologies 
need to be applied and ushered in with corresponding social 
responsibility, including attention paid to the impacts they 
have on people's lives.
    For example, as technologies are applied in areas like 
criminal justice, it is critical for the Partnership to raise 
and address concerns related to the inevitable bias in 
datasets used to train algorithms. It's also critical for us to 
engage with those using such algorithms in the justice system 
so that they understand the limits of these technologies and 
how they work.
    Good intentions too are not enough to ensure positive 
outcomes. We need to ensure that ethics are put into practice 
when AI technologies are applied in the real world and that 
they reflect the priorities and needs of the communities that 
they serve. This won't happen by accident. It requires a 
commitment from developers and other stakeholders who create 
and influence technology to engage with broader society, 
working together to predict and direct AI's benefits and to 
mitigate potential harms. Identifying and taking action on 
high-priority questions for AI research, development, and 
governance will require the diverse perspectives and resources 
of a range of different stakeholders, both inside and outside 
of the Partnership on AI community.
    There are several ways in which we are delivering this. A 
key aspect of this work of the Partnership has so far taken the 
form of a series of working groups which we have established to 
approach three of our six thematic pillars, with the other 
three to follow soon. These first working groups are on safety-
critical artificial intelligence; fairness, transparency, and 
accountability in AI; and AI, labor, and the economy.
    The Partnership will also tackle questions that we think 
need to be addressed urgently in the field and are ripe for 
collective action by a group of interests and expertise as 
widespread and diverse as ours. Our work will take different 
forms and could include research, standards development, policy 
recommendations, best practice guidelines, or codes of conduct. 
Most of all, we hope to provide policymakers and the general 
public with the information they need to be agile, adaptive, 
and aware of technology developments so that they can hold 
technologists accountable for upholding ethical standards in 
research and development and better understand how these 
technologies affect them.
    We are encouraged by these hearings and the interest of 
policymakers in the U.S. and worldwide in understanding both 
the current state of AI and the future impacts it 
may have.
    I thank you for your time, and I look forward to questions.
    [Prepared statement of Ms. Lyons follows:]
    
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]    

    
    Mr. Hurd. I appreciate you, Ms. Lyons.
    Dr. Buchanan, you're now recognized for 5 minutes for your 
opening remarks.

                   STATEMENT OF BEN BUCHANAN

    Mr. Buchanan. Thank you, Chairman Hurd and Ranking Member 
Kelly, for holding this important hearing and for inviting me 
to testify. As you mentioned, I'm a fellow at Harvard 
University's Belfer Center for Science and International 
Affairs, and my research focus is on how nations deploy 
technology, in particular, cybersecurity, including offensive 
cyber capabilities and artificial intelligence.
    Recently, with my friend and colleague, Taylor Miller, of 
the Icahn School of Medicine at Mount Sinai, we published a 
report entitled, ``Machine Learning for Policymakers.'' And to 
help open today's hearing, I would like to make three points: 
one on privacy, one on cybersecurity, and one on economic 
impact. And I'll try to tailor this to not be repetitive. I 
think we're in agreement on a lot of these areas.
    To simplify a little bit, we can think about modern 
artificial intelligence as relying on a triad of parts: some 
data, some computing power, and some machine learning 
algorithms. And while we've seen remarkable advances on the 
computing and learning algorithm side, I think for policymakers 
such as yourselves, it's data that's most important to 
understand. And data is the fuel of machine learning systems. 
Without this data, the systems sometimes produce results that 
are embarrassingly wrong.
    Gathering relevant and representative data for training, 
development, and testing purposes is a key part of building 
modern artificial intelligence technology. On balance, the more 
data that is fed into a machine learning system, the more 
effective it will be. It is no exaggeration to say that there 
are probably many economic, scientific, and technological 
breakthroughs that have not yet occurred because we have not 
assembled the right data sources and right datasets.
    However, there is a catch and a substantial one. Much of 
that data that might, and I emphasize might, be useful for 
future machine learning systems is intensely personal, 
revealing, and appropriately private. Too frequently, the 
allure of gathering more data to feed a machine learning system 
distracts from the harms that collecting that data brings. 
There is a risk of breaches by hackers, of misuse by those who 
collect or store the data, and of secondary use in which data 
is collected for one purpose and later reappropriated for 
another.
    Frequently, attempts at anonymization do not work nearly as 
well as promised. It suffices to say that, in my view, any 
company or government agency collecting large amounts of data 
is assuming an enormous responsibility. Too often, these 
collectors fall far short of meeting that responsibility. And 
yet, in an era of increased artificial intelligence, the 
incentive to collect ever more data is only going to grow.
    And technology cannot replace policy, but some important 
technological innovations can offer mitigation to this problem. 
Technologies such as differential privacy can 
ensure that large datasets retain a great deal of their value 
while protecting the privacy of any one individual member. On-
device processing can reduce the aggregation of data in the 
first place. This is an area in which much remains to be done.
    Second, AI is poised to make a significant impact in 
cybersecurity, potentially redefining key parts of the entire 
industry. Automation on offense and on defense is an area of 
enormous significance. We already heard about the DARPA Cyber 
Grand Challenge, which I agree was a significant, seminal 
event, and we've certainly seen what I would describe as the 
beginnings of significant automation of cyber attacks in the 
wild.
    In the long run, it's uncertain whether increased 
automation will give a decisive cybersecurity advantage to 
hackers or to network defenders, but there is no doubt of its 
immediate and growing relevance.
    AI systems also pose new kinds of cybersecurity challenges. 
Most significant among these is the field of adversarial 
learning in which the learning systems themselves can be 
manipulated oftentimes by what we call poisoned datasets to 
produce results that are inaccurate and sometimes very 
dangerous. And that's another area which is very nascent and 
not nearly as developed as mainstream cybersecurity literature. 
Again, much more remains to be done.
    A more general concern is AI safety. And this conjures up 
notions of Terminator and AI systems that will take over the 
world. In practice, it is often far more nuanced and far more 
subtle than that, though the risk is still quite severe. I 
think it is fair to say that we have barely scratched the 
surface of important safety and basic security research that 
can be done in AI, and this is an area, as my fellow witnesses 
suggest, in which the United States should be a leader.
    Third, AI will have significant economic effects. My 
colleagues here have discussed many of them already. The 
ranking member mentioned two notable studies. I would point you 
to two other studies, both I believe by MIT economists, which 
show that while theory often predicts that jobs lost will be 
quickly replaced, in practice, at least in that one instance, 
that did not immediately occur.
    With that, I will leave it there, and I look forward to 
your questions. Thank you.
    [Prepared statement of Mr. Buchanan follows:]
    
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]    
 
    
    Mr. Hurd. Thank you, Dr. Buchanan.
    And I'll recognize the ranking member for 5 minutes or so 
for your first round of questions.
    Ms. Kelly. Thank you, Mr. Chairman, and thank you to the 
witnesses again.
    The recent news that Cambridge Analytica had improperly 
obtained the personal data of up to 87 million Facebook users 
highlights the challenges to privacy when companies collect 
large amounts of personal information for use in AI systems.
    Dr. Buchanan, in your written testimony, you state, and I 
quote, that ``much of the data that might--and I emphasize 
might--be useful for future machine learning systems is 
intensely personal, revealing, and appropriately private.''
    Is that right? You just said that.
    Mr. Buchanan. That's correct, Congresswoman.
    Ms. Kelly. And can you explain for us what types of risks 
and threats consumers are exposed to when their personal 
information is collected and used in AI systems?
    Mr. Buchanan. Sure. As you'd expect, Congresswoman, it 
would depend on the data. Certainly, some financial data, if it 
were to be part of a breach, would lead to potential identity 
theft. There's also data revealed in terms of preferences and 
interests that many members of society might want to keep 
appropriately private. We've heard a lot about AI in medical 
systems. Many people want to keep their medical data private.
    So I think it depends on the data, but there's no doubt 
that, in my view, if a company or government organization 
cannot protect the data, it should not collect the data.
    Ms. Kelly. Okay. In light of these risks, in your 
assessment, are the majority of companies that do collect and use 
personal data for their AI systems taking adequate 
steps to protect the privacy of citizens?
    Mr. Buchanan. Speaking as a generalization, I think we have 
a long way to go. Certainly, the number of breaches that we've 
seen in recent years, including a very large dataset such as 
Equifax, suggests to me that there's a lot more work that needs 
to be done in general for cybersecurity and data protection.
    Ms. Kelly. And also, in your written testimony, you also 
outlined different types of safeguards that could improve the 
level of protection of consumers' privacy when their data is 
collected and stored in AI systems. One of those safeguards is 
the use of a technical approach you referred to as differential 
privacy. Can you explain that in layman's terms?
    Mr. Buchanan. Sure. Simplifying a fair amount here, 
differential privacy is the notion that before we put data into 
a big database from an individual person, we add a little bit 
of statistical noise to that data, and that obscures what data 
comes from which person, and, in fact, it obscures the records 
of any individual person, but it preserves the validity of the 
data in the aggregate.
    So you can imagine, if we asked every Member of Congress, 
have you committed a crime, most Congress people and most 
people don't want to answer that question. But if we said to 
them, flip a coin before you answer; if it's heads, answer 
truthfully; if it's tails, don't answer truthfully; flip 
another coin and use a second coin flip to determine your made-
up answer, we're adding a little bit of noise when we collect 
the answers at the end. And using a little bit of math at the 
back end, we can subtract that noise and get a very good 
aggregate picture without knowing which Members of Congress 
committed crimes.
    So the broader principle certainly holds, again, with a 
fair amount more math involved, that we can get big-picture views 
without sacrificing the privacy or revealing the criminal 
records of individual members of the dataset.
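    [The coin-flip scheme Dr. Buchanan describes is the classic 
randomized-response technique underlying this form of 
differential privacy. The following is a minimal illustrative 
sketch in Python; the 30 percent rate, population size, and 
function names are hypothetical and not drawn from the 
testimony.]

    import random

    def randomized_response(truth):
        # First coin flip: heads (probability 1/2) -> answer truthfully.
        if random.random() < 0.5:
            return truth
        # Tails -> a second coin flip determines the made-up answer.
        return random.random() < 0.5

    def estimate_true_rate(reports):
        # Expected "yes" fraction = 0.5 * true_rate + 0.25,
        # so true_rate = 2 * (observed - 0.25).
        observed = sum(reports) / len(reports)
        return 2 * (observed - 0.25)

    # Hypothetical survey: 10,000 respondents, 30 percent of whom
    # would truthfully answer "yes" to the sensitive question.
    population = [random.random() < 0.30 for _ in range(10_000)]
    reports = [randomized_response(answer) for answer in population]
    print(round(estimate_true_rate(reports), 3))  # close to 0.30

    [Half of the reports are truthful and a quarter of all reports 
are coin-driven ``yes'' answers, so subtracting that known noise 
recovers the aggregate rate while no single report reveals any 
individual's true answer.]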
    Ms. Kelly. I have not committed a crime, by the way.
    Mr. Hurd. Neither have I.
    Ms. Kelly. Do you feel like if more businesses adopted this 
differential privacy, this type of security measure would be 
more effective in mitigating the risk to personal privacy?
    Mr. Buchanan. With something like differential privacy, 
the devil's in the details; it has to be implemented well. But 
as a general principle, yes, I think it's a very positive 
technical development and one that is fairly recent. So we have 
a lot of work to do, but it shows enormous promise, in my view.
    Ms. Kelly. Thank you. And in addition to this, you also 
identify in your written testimony another type of security 
control known as on-device processing. Can you, again in 
laymen's terms, explain on-device processing and how it 
operates to protect sensitive and personal data?
    Mr. Buchanan. Sure. This one's much more straightforward. 
Essentially, the notion that if we're going to have a user 
interact with an AI system, it is better to bring the AI system 
to them, rather than bring their data to some central 
repository. So if an AI system is going to be on your 
telephone--your cell phone, rather, you can interact with the 
system and do the processing on your own device rather than at 
a central server where the data is aggregated. Again, as a 
general principle, I think that increases privacy.
    Ms. Kelly. And in your assessment, what are the reasons why 
more companies in general are not deploying these types of 
security controls?
    Mr. Buchanan. Certainly, as a matter of practice, they 
require enormous technical skill to implement. Frankly, I think 
some companies want to have the data, want to aggregate the 
data and see the data, and that's part of their business model. 
And that's the incentive for those companies not to pursue 
these approaches.
    Ms. Kelly. What recommendations would you have for ways in 
which Congress can encourage AI companies to adopt more 
stringent safeguards for protecting personal data from 
consumers?
    Mr. Buchanan. I think Mr. Clark has made excellent points 
about the importance of measurement, and I think this is an 
area that I would like to know more about and measure better: 
how are American companies storing, securing, and processing 
the data of Americans? Chairman Hurd mentioned that measurement 
is a topic of interest to this committee, and I think that 
would be one place to start.
    Ms. Kelly. And just lastly, is part of the struggle that 
companies have that they don't have enough of the 
expertise because it is not in the workforce?
    Mr. Buchanan. Yes, Congresswoman, I think that's right. 
There's enormous demand that has not yet been met for folks 
with the skills required to build and secure these systems. 
That's true in AI, and that's true in cybersecurity generally.
    Ms. Kelly. And would the rest of the witnesses agree with 
that also?
    Mr. Shapiro. Yes, the last comment.
    Mr. Clark. Yes. We need 10, 100 times more people with 
these skills.
    Ms. Lyons. I would agree with Dr. Buchanan.
    Ms. Kelly. Thank you. And thank you, Mr. Chair.
    Mr. Hurd. Thank you, ranking member.
    Mr. Shapiro, you know, I had the fortune of attending CES, 
the Consumer Electronics Show, this recent January. Thanks for 
putting on such a good show. I learned a lot about artificial 
intelligence and how important data is in training, the actual 
training of the algorithm.
    And one of the issues we have heard about is the importance 
of data, and we've learned about bias and preventing it. We 
learned about algorithms being auditable. We know we have to 
invest more money in AI. We also know we need to train people 
better.
    Who should be taking the lead? Like, who is the person, who 
should be driving kind of this conversation? Or maybe let me 
narrow the question. Who in government should be driving kind 
of the investment dollars in this? And I know you have peer 
research at universities. You have the national labs. You know, 
who should be coming up with our investment plan in AI?
    Mr. Shapiro. Well, I think, first of all, we have to agree 
on the goals. I liked the idea of measurement as well, and I 
think the goals are, number one, we would like the U.S. to be 
the leader; two, we want to solve some fundamental human 
problems involving healthcare, safety, cybersecurity. Now, 
there's some--we can define goals with those. And third, we 
want to respect people's privacy.
    And I think there has to be national discussion on some of 
these issues because let's take the issue of privacy, for 
example, and we've heard a lot about that today. The reality 
is, culturally, we're different on privacy than other parts of 
the world. In China, the concept of privacy, especially in 
this area, is that the citizens really don't have any. They're 
getting social scores. Their government is monitoring what they 
do socially, and certainly there doesn't seem to be much legal 
restriction on accessing whatever people do.
    Europe has taken a very different--they're really focused 
on privacy. They have the right to be forgotten. They have the 
right to erase history, something that seems an anathema to us. 
How could you change the facts and take them off the internet? 
And they've really clamped down, and they're going forward in a 
very arguably bold and unfortunate way on this GDPR, which is 
really you could argue is for privacy or you could argue is to 
shut Europe out in a competitive fashion.
    When I look at the U.S. and our focus on innovation and our 
success and I compare it to Europe, I see they have maybe a 
handful of unicorns, you know, billion dollar valuation 
companies, and the U.S. has most of them in the world, over 
150. And why is that? There's many answers. It's our First 
Amendment, it's our diversity, it's our innovation. It's the 
culture we have of questioning. There are many things that go into it, 
but part of it, we're a little more willing to take some risks 
in areas like exchange of information.
    Europe is going forward with GDPR, and frankly, it's going 
to hurt American companies. I was just in France last week. 
It's going to hurt European companies. They're terrified of it. 
They're talking about trying to delay it. But it's also going 
to kill people, because if you can't transfer, for example, 
medical information from one hospital to another in the same 
region, that has life consequences.
    So when we talk about the issue of privacy and who should 
lead on it, I think we should do it in a commonsense way, and 
we shouldn't let HIPAA, for example, be our model. The model 
should be what is going to be--what kind of information is 
warranted in this situation. We've done a lot of research and 
we have found, for example, that Americans are willing to give 
up personal information for a greater good, as they have done 
with health information on their Apple Watches. They're willing 
to do it for safety. They're willing to do it for the security 
of their children. They're willing to do it for their own 
safety involving, for example, where your car is if it hits a 
car in front of them.
    So in the privacy area, I think we have a pretty good 
cultural sense. I think the Federal Trade Commission has a 
broad mandate to do a pretty good job in that area.
    And I don't want to take all the time, but those other two 
areas I talked about in terms of the measurements and 
artificial intelligence and what they should do, it goes into 
how you get the skilled people, what you do, how you change 
your educational system, how you retrain for jobs. There's a 
lot of things that government can do and Congress can do. And I 
applaud you and this committee for taking the first big step in 
having hearings to raise the issues, but what I would expect 
Congress to do in the future is rather than come up with 
immediate solutions, is instead to focus on what the goals are 
and how we could do that.
    And I would look at two examples that I was both personally 
involved with, which was government setting big goals, but 
working with industry who came up with private things. 
Actually, I'll give three quickly.
    One is, and Congressman Issa is very aware of this because 
he was part of it, the transition to high definition 
television. That was we set the goal. We wanted to have the 
best system in the world, private industry, no spending of 
government money, we did it.
    Second is commercialization of the internet, doing business 
over it. We have done it in Virginia in a bipartisan way, and 
the goals were there and it worked.
    And the third is you talked about privacy for wearable 
devices, healthcare devices which came up earlier. At CTA, we 
got everyone in the room that made those devices, and we 
agreed on a regimen of saying this is what we should 
voluntarily do. This is what we should follow. It should be 
transparent, clear language, opt out, and you can't sell 
information or use it without permission from your 
customers. And the Obama administration seemed pretty happy 
with that, and even they didn't act because that was industry 
self-regulation.
    Mr. Hurd. Got you. Thank you, Mr. Shapiro.
    I'm going to come back for another round of questions, but 
now I'd like to recognize my friend from the Commonwealth of 
Virginia for his first round of questions.
    Mr. Connolly. Thank you, Mr. Chairman.
    And, Gary, you were doing speed dating there. And welcome. 
Good to see you again.
    I want to give you a little bit more opportunity on maybe 
those three things you were just talking about if you want to 
elaborate a little bit more. Because this idea--to let you 
catch your breath for a second--this idea of the zone of privacy, 
and some of it is culturally bound, I think is absolutely true, 
but I can remember going to Silicon Valley about 9 years ago, 
meeting with Facebook people. And their view about privacy was 
we, Americans, need to get used to shrinking boundaries for 
privacy and that the younger generations were already there. 
Older generations needed to just learn to suck it up and accept 
it.
    And I think watching what happened to Mr. Zuckerberg here 
in the last couple weeks, one needs to not be so facile. You're 
not being facile, but I mean, I think you're raising, though, 
those questions. Some of it's culturally bound, some of it the 
rules of engagement aren't quite there yet. We debate, do we 
get involved? If so, what do we do?
    And so I think your thoughts are very helpful, given your 
experience and your position in providing some guidance. So I 
want to give you an opportunity to elaborate just a little bit.
    Mr. Shapiro. Well, thank you very much, Congressman. I 
appreciate it. I guess my view is that as a Nation, we're not 
China, where privacy is totally devalued, and we're not 
Europe where we use privacy as a competitive tool against other 
countries, frankly, but it also tamps down innovation in our 
own country.
    Our competitive strength is innovation. That's what we're 
really good at. It's the nature of who we are. So the question 
is, how do we foster innovation in the future in AI and other 
areas and also maintain our--respect our citizens' view that 
they are entitled to certain things?
    Now, to a certain extent, it's educating. Everyone has an 
obligation. The obligation of business is to tell our customers 
what it is we're doing with their data in a clear and 
transparent way, and frankly, we haven't done a great job at 
it. I mean, if I had my way, I wouldn't want to have to click 
on those ``I agree'' boxes just to get to the website I want. I'd 
like to click on platinum, gold, or silver standard. If there's 
some standardization, it would probably help, and government 
can play a role in that.
    But we also want to make sure that we can innovate. And 
consumers should understand that they're giving away something 
in return for free services. You give a tailor your information 
on your body size to get clothes that fit. You give your doctor 
information about your health, and you're always giving away 
something. And, you know, the truth is if you're going to get a 
free service, like Facebook or Google, and you want to keep it 
free, they are using that data to get to know you.
    But it's like I shop at Kroger's in Michigan actually, 
because that's where I commute to, and Kroger's knows a lot 
about me. They know everything I buy, and they give me coupons 
all the time. And I value those coupons. But they know what I 
buy, and I am willing to do that. It's the same thing with 
other frequent user programs. We're doing that all the time. 
We're giving up information about ourselves. We get discounts, 
we get deals, and we get better service.
    If we do it with our eyes open and we're educated about it, 
that's fine. Now, the role of citizens is to understand----
    Mr. Connolly. By the way, we know you shopped at Kroger's 
last Thursday, and that fondness you've got for frozen rolls 
has, frankly, surprised us.
    Mr. Shapiro. So in terms of the role of government, I think 
the role of government is to start out by having hearings like 
this one, define the goals and the measurements culturally for 
the future. And the role, frankly, of the administration, in my 
view, is to set the big goals and to make sure that we buy into 
them on a bipartisan way. And I love the idea of some big 
goals, as Mr. Clark suggested, because we need big goals in 
this area.
    You know, for example, having self-driving cars by 2025 or 
nothing--dropping the death rate from automobiles down by half 
by a certain date would be a very admirable goal that everyone 
in this country can rally around.
    Mr. Connolly. Thank you so much, Gary.
    Mr. Clark, in the time I've got left, you said in the 
report from OpenAI that, as artificial intelligence continues to 
grow, cyber attacks will utilize AI and will be, and you said, 
quote, ``more effective, finely targeted, difficult to 
attribute, and likely to exploit vulnerabilities in AI 
systems.''
    I want to give you an opportunity to expand a little bit on 
that. So how worried should we be?
    Mr. Clark. So you can think of AI as something that we're 
going to add to pretty much every aspect of technology, and it's 
going to make it more powerful and more capable. So this means 
that our defenses are also going to get substantially better. 
And as Dr. Buchanan said earlier--you weren't in the room--it's 
not clear yet whether this favors the defender or the attacker. 
And this is why I think that hosting competitions, having 
government measure these capabilities as they develop, will 
give us a kind of early warning system.
    You know, if there's something really bad that's about to 
happen as a consequence of an AI capability, I'd like to know 
about it, and I'd like an organization or an agency to be 
telling us about that. So you can think about that and take 
that and view it as an opportunity, because it's an opportunity 
for us to learn in an unprecedented way about the future before 
it happens and make the appropriate regulations before harm 
occurs.
    Mr. Connolly. If the chair will allow, I don't know if Dr. 
Buchanan or Ms. Lyons want to add to that, and my time is up.
    Ms. Lyons. I have nothing more to add. Thank you.
    Mr. Buchanan. I think we probably can return to the subject 
later, but I would suggest we have seen some indications 
already of increased autonomy in cyber attack capabilities. 
There's no doubt in my mind we will see more of that in the 
future.
    Mr. Hurd. The distinguished gentleman from California is 
now recognized for his round of questions.
    Mr. Issa. You know, this is what happens when you announce 
your retirement, you become distinguished.
    You know, I know in these hearings that there's sort of an 
exhaustive repeat of a lot of things, but let me skip to 
something I think hasn't happened, and I'll share it with each 
of you, but I'll start with Mr. Clark.
    The weaponization of artificial intelligence, there's been 
some discussion about how far it's gone, but it's inevitable. 
The tools of artificial intelligence disproportionately favor 
U.S. companies.
    Now, when that happened in satellites, nuclear capability, 
and a myriad of data processing, we put stringent export 
control procedures on those things which may have a dual use. 
We've done no such thing in artificial intelligence. Would you 
say today that that is an area in which the Commerce 
Department's export assistant secretary doesn't have specific 
authority but needs it?
    Mr. Clark. Thank you. I think this is a question of 
existential importance to, basically, the world. The issue with 
AI is that it runs on consumer hardware, it's embodied in 
software, it's based on math that you can learn in high school. 
You can't really regulate a lot of aspects of fundamental AI 
development because it comes from technology which 17-year-olds 
are taught in every country of the world, and every country is 
developing this. So while the U.S. economy favors the 
development of AI here and we have certain advantages, other 
countries are working on this.
    So I think export controls, arms controls, do not
really apply here. We're in a new kind of regime, because you 
can't control a specific thing with this AI technology. 
Instead, you need to develop norms around what is acceptable. 
You need to develop shared norms around what we think of as AI
safety, which is about being able to offer guarantees about
how the systems work and how they behave, and we need to track 
those capabilities.
    So I think that your question's a really important one, and 
I think it touches an area where much more work needs to be 
done because we don't have the right tool today to let us 
approach the problem.
    Mr. Issa. And let me follow up quickly. When we look at 
artificial intelligence, we look at those producing advanced 
algorithms. And I went to a different high school apparently 
than you did. Mine wasn't Caltech. So let's assume for a moment 
that it's slightly above high school level. The creators of 
those, and let's assume for a moment, hypothetically, they're 
all in the first world, and the first world defined as those 
who want to play nice in the sandbox: you, us, Europe, and a 
number of other countries.
    Do you believe, if that's the case, the government has a 
role, though, in ensuring that when you make a tool that is
that powerful, the tools that, if you will, allow it to be
safely controlled are also part of the algorithm? In other
words, the person who can make a powerful tool for artificial 
intelligence also can, in fact, design the safety mechanism to 
ensure that it wouldn't--couldn't be used clandestinely. Do you 
think that's a social responsibility of, let's say, the 
Facebooks and the Googles?
    Mr. Clark. I think we have a social responsibility to 
ensure that our tools are safe and that we're developing 
technologies relating to safety and reliability in lockstep 
with capabilities. You know, that's something that the 
organization I work for, OpenAI, does. We have a dedicated 
safety research team, as do Google and Google's DeepMind as
well. So you need to develop that.
    But I think your question is, how do you, if you have
those tools, make sure everyone uses them? I think there you're
going to deal with kind of two stages.
    Mr. Issa. As we've discovered today, we've sent our CIA
director to meet with Kim Jong-un because he can't be trusted
with the tools he's created that got to him, so I might have a
different view on the export controls.
    But, Mr. Shapiro, since you've given me every possible look 
on your very creative face as these answers came through, let 
me allow you to answer that, but I want to shift to one other 
question. You mentioned HIPAA a little bit. Now, the history of
HIPAA is precomputer data. It is, in fact, from a time in which,
basically, pieces of paper were locked up at night and not left 
out on desks so that one patient didn't see another patient's 
records and that you didn't arbitrarily just answer anyone over 
the phone.
    The reality today, though, is that your industry, the
industry you represent so well, has tens of thousands of tools
available that can gather information, and often, they're
limited by these requirements
from really interacting with the healthcare community in an 
efficient way. Do we need to set up those tools to allow 
healthcare to prosper in an interactive cloud-based computer 
generation?
    And I'll just mention, for example, the problem of 
interoperability between Department of Defense, the Veterans 
Administration, and private doctors; that has been one of the
issues--it's confounded our veterans, often leading to death by
overdose for lack of that capability. Do you have the tools,
and what do we need to give you to use those tools?
    Mr. Shapiro. Well, it's probably fair to say that the 
promise of ObamaCare, which was very positive--allowing easy
transfer of electronic medical records--has not been realized.
I think even the American Medical Association, which urged that 
it be passed, has now acknowledged that, and it's been a great 
frustration to doctors, as I think you know.
    In terms of the tools that we have today to allow easy 
transfer, you know, the administration hasn't endorsed this 
Blue Button initiative, which allows medical records, especially
in emergency cases, to be transferred easily. I think we have a 
long way to go as a country to make it easy to transfer your 
own health information. The old way they did it in the
communist countries was that you walked around with your own
records.
Your paper records were actually a simpler transaction than 
what we have today where everyone goes in and has to start from 
zero.
    Mr. Issa. Well, you know, the chairman is too young to 
know, but I walked around in the Army with that brown folder 
with all my medical records, and it was very efficient.
    Mr. Hurd. What is a folder?
    Mr. Issa. What's a folder?
    Mr. Shapiro. But as an organization, we're concerned about
the growing deficit and the impact that will have existentially
on our country, frankly, and we see the opportunity there in
providing healthcare and using technology in a multitude of
ways to lower costs, to be more efficient, to cut down on
doctor visits, and to just allow easy transfer of information.
    In terms of what the government can do, we're actively 
advocating for a number of things. We're working with the FDA. 
We're moving things along. And we found with this 
administration and the prior administration a great willingness
to adopt the technology; it is just a matter of how fast.
    Mr. Issa. Thank you. Mr. Chairman, are we going to have 
another round?
    Mr. Hurd. Yes.
    Mr. Issa. Okay. I'll wait. Thank you.
    Mr. Hurd. Ranking member, round two.
    Ms. Kelly. Thank you.
    Given Facebook CEO Mark Zuckerberg's comments last week to 
Congress, how would you evaluate AI's ability to thwart crime 
online, from stopping rogue pharmacies, sex trafficking, IP 
theft, identity theft to cyber attacks? And whoever wants to 
answer that question I'm listening.
    Mr. Buchanan. I think, speaking generally here, there's 
enormous promise from AI in a lot of those areas, but as I said 
in my opening remarks, we should recognize that technology will 
not replace policy. And I think it's almost become a cliche in 
certain circles to suggest that, well, we had this very thorny, 
complex interdisciplinary problem so let's just throw machine 
learning at it and the problem will go away. And I think that's 
a little bit too reductive as a matter of policymaking.
    Ms. Kelly. Anybody else?
    Ms. Lyons. I would echo Dr. Buchanan's remarks, insofar as 
I think part of the solution really needs to be in bringing 
multiple solutions together. So I think policy is certainly 
part of the answer. I think technology and further research in 
certain areas related to security, as you mention, in the 
specific case is the answer. And I think also, you know, that 
is sort of the project of the organization that I represent, 
insofar as the interest of bringing different sectors together 
to discuss the means by which we do these things in the right 
ways.
    Ms. Kelly. Thank you.
    Dr. Buchanan, what types of jobs do you see that will be 
threatened in the short term by AI automation, and what about 
in the long term as well?
    Mr. Buchanan. Certainly, in the near term, I think the jobs 
that are most at risk are jobs that involve repetitive tasks, 
and certainly this has always been the case with automation. 
But I think, as you can imagine, as artificial intelligence 
systems become more capable, what they can do, what they 
consider repetitive certainly would increase. And I think jobs 
that involve, again, repetition of particular tasks that are 
somewhat by rote, even if they're jobs that still involve 
intellectual firepower, are on balance more likely to be under
threat first.
    Ms. Kelly. And in the long term what do you see?
    Mr. Buchanan. As we move towards an era of things like
self-driving cars, one could imagine that services like Uber
and Lyft--and taxi companies, rather--might not see a need for
drivers. There's some suggestion that if we had such a
world, we would need fewer cars in general. Certainly, Members 
of Congress are acutely aware of how important the auto 
industry is in the United States.
    So when you look at a longer term horizon, I think there's 
more uncertainty, but there's also a lot more potential for 
disruption, particularly with knock-on effects of if the auto 
industry is smaller, for example, what would the knock-on 
effects be on suppliers to companies even beyond just the 
narrow car companies themselves.
    Ms. Kelly. And to whoever wants to answer, what type of 
investments do you feel that we should be making now for people 
that are going to probably lose their job? What do you--how do 
you see them transitioning to these types of jobs?
    Of course.
    Mr. Shapiro. So I would go back to the prior question about
the jobs. The great news is that the really unpleasant, unsafe
jobs, most of them, will go away. So, for example, I'll use a
robotics company: they have something specialized which is very
good at picking up and identifying and moving things around
using AI and robotics. One of the potential buyers was a major
pizza company that delivers to homes. The way they do it is
they make dough, and the dough today is made by people in a
very cold, sterile environment. They wear all this equipment to
be warm and also to be sterile, and they can only work for so
long--it's very ineffective. No one wants to do the job at all.
And this solves that problem.
    There's also, you know, thousands of other conditions where 
jobs are really difficult. It could be picking agriculturally 
where now there's--increasingly there's devices which do that, 
and they do have to be fairly smart to identify the good versus 
the bad and what to pick and what not to pick.
    In terms of what investment has to be made in retraining, I
think we have to look at the best practices in retraining and
figure out what you could do. I mean, we do have millions of
unemployed people today, but we have millions more jobs that
are open and not filled. Some of it is geographic, and we
should be honest about that. And maybe we need to offer, you
know, incentives for people to move elsewhere.
    But some of it is skills. And the question is what has
worked before, what skills you can train someone for, whether
it's a customer service center, something like basic
programming, or helping out in other ways.
    I think we have to look at individual situations, ferret 
out what's out there that's already worked and try some new 
things, because a lot of what has worked in the past will not 
work in the future. And the longer term investment is obviously 
with our kids. We just have to start training them differently.
    And we also have to bring back respectability to work which 
is technical work, as Germany has done, and focus on apprentice 
programs and things like that, and not just assume that a
4-year degree is for every American, because it's not a good
investment for society. And there's a lot of unemployed people
who went to college who don't have marketable skills.
    Ms. Kelly. Mr. Clark.
    Mr. Clark. So I think that this touches on a pretty
important question, which is that where the jobs get created--
because new jobs will be created--will be uneven. And where the
jobs get taken away will also be uneven.
    So I want to refer to a couple of things I think I've 
already mentioned. One is measurement. It's very difficult for 
me to tell you today what happens if I drop a new industrial 
robot into a manufacturing region. I don't have a good economic 
model to tell you how many jobs get lost, though I have an 
intuition some do. That's because we haven't done the work to 
make those predictions.
    And if you can't make those predictions, then you can't 
invest appropriately in retraining in areas where it's actually 
going to make a huge difference. So I want to again stress the 
importance of that measurement and forecasting role so the
government can be effective here.
    Ms. Kelly. Thank you very much. I yield back.
    Mr. Hurd. Mr. Clark, you talked about a competition, you 
know, akin to the robotics ones, akin to the self-driving car
challenge. What is the
competition for AI? What is the question that we send out and 
say, hey, do this thing, show up on Wednesday, August 19, and 
bring your best neural network and machine learning algorithm?
    Mr. Clark. So I have a suggestion. The suggestion is a 
multitude of competitions. This being the Oversight Committee, 
I'd like a competition on removing waste in government, you 
know, bureaucracy, which is something that I'm sure that 
everyone here has a feeling about. But I think that that 
actually applies to every committee and every agency.
    You know, the veterans agency can do work on healthcare. 
They can do a healthcare moonshot within that system that they 
have to provide healthcare to a large number of our veterans. 
The EPA can do important competitions on predicting things like 
the environmental declines in certain areas affected adversely 
by extreme weather.
    Every single agency has data. It has intuitions of problems 
it's going to encounter and has competitions that it can create 
to spur innovation. So it's not one single moonshot; it's a 
whole bunch of them. And I think every part of government can 
contribute here, because the great thing about government is 
you have lots of experience with things that typical people 
don't. You have lots of awareness of things that are threats or 
opportunities that may not be obvious. And if you can galvanize 
kind of investment and galvanize competition there, it can be 
kind of fun, and we can do good work.
    Mr. Hurd. So along those lines, how would you declare a 
winner?
    Mr. Clark. In which aspect?
    Mr. Hurd. Let's say we were able to get--well, we can 
take--let's take HHS. Let's take Medicare. Medicare 
overpayments. Perfect example. And let's say we were able to 
get that data protected in a way that the contestants would be 
able to have access to it. And you got 50 teams that come in 
and solve this problem. How would you grade them?
    Mr. Clark. So in AI, we have a term called the objective
function. What it really means is just the goal. And whatever
you optimize the goal of a thing for is what you'll get out. So
goal selection is important, because you don't want to pick the
wrong goal, because then you'll kind of mindlessly work towards
it.
    But a suggestion I'd have for you is the time it takes a 
person to flow through that system. And you can evaluate how 
the application of new technologies can reduce the time it 
takes for that person to be processed by the system, and then 
you can implement systems which dramatically reduce that amount 
of time. I think that's the sort of thing which people 
naturally approve of.
    And just thinking through it on the spot here, I can't 
think of anything too bad that would happen if you did that, 
but I would encourage you to measure and analyze it before you 
set the goal.
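    A minimal sketch, assuming Python, of the kind of goal
metric Mr. Clark describes: an objective function scoring
hypothetical competition entries by the time it takes a person
to flow through the system. The team names, case durations, and
the worst-case figure are purely illustrative.
```python
# Illustrative only: score hypothetical competition entries by the goal
# Mr. Clark suggests -- the time it takes a person to flow through a system.

def average_processing_days(case_durations):
    """Objective function: mean number of days for a person to be processed."""
    return sum(case_durations) / len(case_durations)

# Hypothetical results from three competing teams, in days per case.
entries = {
    "team_a": [42, 35, 51, 40],
    "team_b": [30, 28, 33, 31],
    "team_c": [55, 20, 60, 25],  # fast on some cases, very slow on others
}

# Whatever you optimize the goal for is what you get out: ranking on the
# mean alone ignores worst-case delays, which is why goal selection matters.
for team, durations in sorted(entries.items(),
                              key=lambda kv: average_processing_days(kv[1])):
    print(team,
          f"mean {average_processing_days(durations):.1f} days,",
          f"worst case {max(durations)} days")
```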
    Mr. Hurd. Ms. Lyons, what's the next milestone when it 
comes to artificial intelligence?
    Ms. Lyons. From a technical perspective or otherwise?
    Mr. Hurd. From a technical perspective.
    Ms. Lyons. Well, I think what a lot of the AI research 
community is looking towards is AI platforms that can be 
applied to more generalized tasks than just the very narrow AI 
that we see applied in most of the circumstances that we've 
described today.
    So I would say that's--that is the next sort of moonshot 
milestone for the technical community.
    Mr. Hurd. Is that a decade? Is that 9 months? Is it 20 
years?
    Ms. Lyons. You know, I have my own personal perspectives on 
this. The Partnership hasn't really formulated one yet. But I 
think we have a lot of organizations involved in ours which 
have disagreeing viewpoints on this. And I'm sure, actually, if 
this committee was quizzed, we might all have different answers 
as well.
    But I think we are--we're years and years away from that. 
And it's useful to be thinking about it right now, but I do 
think we're probably decades.
    Mr. Hurd. And what are the elements that are preventing us 
from getting there?
    Ms. Lyons. I actually don't think I'm the best person 
equipped on this panel to give you an answer to that. I'm 
pretty far away from the technical research, from where I'm 
sitting right now. But there are technical impediments that are
stopping us from achieving that at this moment.
    Mr. Hurd. Good copy.
    Dr. Buchanan, how do we detect bias?
    You know, I think one of the things that we have heard 
through these hearings is bias. And we know how you create
bias, right--by not giving a full dataset, right? So can the
algorithm itself be biased? Is the only way to introduce bias
by the dataset? And then how are we detecting
whether or not the decisions that are being made by the 
algorithm show bias?
    Mr. Buchanan. I'm not convinced that we're looking as much 
as we should. So when you say how are we detecting, I think in 
many cases we are not detecting biased systems.
    But speaking generally of how do you----
    Mr. Hurd. Philosophically, how do we solve that problem?
    Mr. Buchanan. Right. I think it's worth--again, as I said 
before, technology cannot replace policy, and we should first 
develop an understanding of what we mean by bias. Is a system
biased, whether it's automated or not, if it disproportionately 
affects a particular racial group or gender or socioeconomic 
status? And I think that most people would answer yes. And you 
would want to look at what the outcomes of that system were and 
how it treated individuals from certain groups.
    And there's a number of different values you can 
instantiate in the system that try to mitigate that bias. But 
bias is a concept that we intuitively all feel, but it's often 
quite difficult to define. And I think a lot of the work in 
detecting bias is first work in defining bias.
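    One minimal sketch, assuming Python, of how Dr. Buchanan's
outcome-based framing of bias can be made concrete: compare a
system's favorable-decision rate across groups. The data, group
labels, and the four-fifths threshold are illustrative only,
not a standard drawn from the testimony.
```python
# Illustrative only: compare a system's favorable-decision rate across groups.
from collections import defaultdict

decisions = [  # (group, decision) pairs; entirely made-up data
    ("group_x", 1), ("group_x", 1), ("group_x", 0), ("group_x", 1),
    ("group_y", 0), ("group_y", 1), ("group_y", 0), ("group_y", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, outcome in decisions:
    counts[group][0] += outcome
    counts[group][1] += 1

rates = {group: favorable / total for group, (favorable, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())

print("favorable-decision rates by group:", rates)
# A ratio below 0.8 mirrors the "four-fifths" rule of thumb from U.S.
# employment practice; here it is only a hypothetical flagging threshold.
print("disparate-impact ratio:", round(ratio, 2), "| flagged:", ratio < 0.8)
```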
    Mr. Hurd. Mr. Clark.
    Mr. Clark. So I have two suggestions. I think both are 
pretty simple and doable. One is whenever you deploy AI, you 
deploy it into a cohort of people. Let's say I'm deploying a 
public service speech recognition system to do a kind of better 
version of a 311 service for a city. Well, I know I have 
demographic data for that city and I know that people that 
speak, perhaps not with my accent, but a more traditional 
American one are going to be well represented.
    Mr. Hurd. Do you have an accent?
    Mr. Clark. It's sometimes called Australian, but it's 
actually English.
    So I think that, when you look at your city, you're going 
to see people who are represented in that city but are not the 
majority. So you test your system against the least represented 
people and see how it rates. That will almost invariably 
surface areas where it can be improved.
    And the second aspect is you need more people in the room. 
This requires like a concerted effort on STEM education and on 
fixing the diversity in STEM, because if you're not in the 
room, you probably just won't realize that a certain bias is 
going to be obvious, and we do need to fix that as well.
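    A minimal sketch, assuming Python, of the per-cohort
evaluation Mr. Clark suggests: score the system separately for
each demographic cohort and surface the least-represented,
lowest-scoring one first. The recognizer, cohort labels, and
samples are hypothetical stand-ins.
```python
# Illustrative only: evaluate a system separately for each demographic cohort
# and report the weakest cohorts first.

def hypothetical_recognizer(clip):
    # Stand-in for a real speech-recognition model.
    return clip["expected"] if clip["clear"] else "???"

samples = [  # made-up labelled test clips
    {"cohort": "majority_accent", "expected": "pothole on main st", "clear": True},
    {"cohort": "majority_accent", "expected": "streetlight out", "clear": True},
    {"cohort": "minority_accent", "expected": "pothole on main st", "clear": False},
    {"cohort": "minority_accent", "expected": "streetlight out", "clear": True},
]

by_cohort = {}  # cohort -> (correct, total)
for sample in samples:
    correct = hypothetical_recognizer(sample) == sample["expected"]
    hits, total = by_cohort.get(sample["cohort"], (0, 0))
    by_cohort[sample["cohort"]] = (hits + correct, total + 1)

# Listing the lowest-scoring cohort first surfaces where the system needs work.
for cohort, (hits, total) in sorted(by_cohort.items(),
                                    key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{cohort}: accuracy {hits / total:.0%} on {total} samples")
```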
    Mr. Hurd. So we've all--and--oh, Ms. Lyons, go right ahead.
    Ms. Lyons. I might just chime in by summarizing, because I 
don't think I heard this clarified in the way that I might 
describe it, which is in the different ways in which bias is 
represented in systems. And I think that is through data 
inputs, which we've talked a little bit about. It's also in the 
algorithms themselves. And I think that gets to some of the 
points that Mr. Clark has made around who is building the 
systems and how representative those developer communities
are.
    And then I think further than that, it's also in the 
outcomes represented by those various inputs and the ways in 
which there might be adverse or outsized impacts on 
particularly at-risk communities who are not involved in 
technology development. So I just wanted to add that.
    Mr. Hurd. That's incredibly helpful.
    We've all been talking about what should the standards be 
or what is the--what are the--what are the equivalent of the 
three rules from I, Robot, right?
    And the one where it seems there's the most agreement, and
correct me if I'm wrong on this, is making sure the decisions
of the algorithm are auditable--that you understand how that
decision was made by that algorithm. There have been so many
examples of an AI system producing something, and the people
that designed the algorithm have no clue how the algorithm
produced it.
    Is that the first rule of artificial intelligence? What are 
some potential contenders for the rules of ethical AI?
    And, Dr. Buchanan, maybe start with you, go down the line, 
if anybody has opinions.
    Mr. Buchanan. I suggest that the first rule might generate 
more discussion than you'd expect on this panel.
    In general, there is oftentimes a tradeoff because of the 
technology involved in AI systems between what we call the 
explainability or interpretability of an algorithm's decision 
and how effective the algorithm is or how scalable the
algorithm is.
    So while I certainly think it's an excellent aspiration to 
have an explanation in all cases, and while I probably believe 
that more than many others, I could imagine cases in which we 
worry less about how the explanation--or how the algorithm 
makes its decision and more about the decision.
    For example, in medicine, we might not care how it 
determines a cancer diagnosis as long as it does so very well. 
In general, however, I suggest explanations are vitally 
important, particularly when it comes to matters of bias and 
particularly given the technology involved, they're often hard 
to get.
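    A small sketch of the trade-off Dr. Buchanan describes,
assuming Python with scikit-learn installed: a shallow decision
tree whose full decision logic can be printed and audited,
alongside a larger ensemble that is typically more accurate but
much harder to explain. The dataset and model settings are
illustrative, not a claim about any particular medical system.
```python
# Sketch (requires scikit-learn): an auditable shallow tree versus a more
# accurate but harder-to-explain ensemble, on a stock diagnostic dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A depth-2 tree: its entire decision logic can be printed and inspected.
interpretable = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
# A 200-tree forest: usually more accurate, but with no comparably compact explanation.
opaque = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("shallow tree accuracy:", round(interpretable.score(X_test, y_test), 3))
print("forest accuracy:      ", round(opaque.score(X_test, y_test), 3))
print(export_text(interpretable))  # the auditable part: a few human-readable rules
```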
    Mr. Hurd. Anybody else have an opinion?
    Ms. Lyons.
    Ms. Lyons. I think the question that you've raised is 
actually fairly central to ongoing dialogues happening in the 
field right now. And the easy answer is that there is no easy 
answer, I think. And I think Dr. Buchanan has demonstrated that 
with his remarks as well.
    But generally speaking, I do think that it's--it has been a 
particular focus, especially of--especially in the last several 
years, of a certain subset of the AI machine learning technical 
community to consider questions associated with issues 
regarding fairness, accountability, transparency, 
explainability. And those are issues associated with 
auditability, as you describe. And a keen interest, I think, in 
making sure that those conversations are multidisciplinary in 
nature as well, and including people from fields, not 
necessarily traditionally associated with computer science and 
AI and machine learning communities, but also inclusive of the 
ethics community, law and policy community, and the sociology 
community more generally. So----
    Mr. Hurd. So are there other things--I recognize that this 
is a loaded question and there's not an agreement on this. But 
what are some of the other contenders that people say, hey, we 
should be doing this, even if we don't--even if we recognize 
there's not agreement, when it comes to what are the rules of 
ethical AI?
    Ms. Lyons. Well, the Partnership, for its part, has a set 
of tenets which are essentially eight rules that govern the 
behavior at a very broad level of the organizations associated 
with us. And they're posted on our website. We included them in 
our written remarks--or written testimony as well.
    But generally speaking, at a high level, we have, as a 
community, decided on certain sort of codes of conduct to which 
we ascribe as organizations involved in this endeavor. And I 
think a central project of this organization moving forward 
from this point is in figuring out how to actually 
operationalize those tenets in such a way that they can be 
practiced on the ground by developers and by other associated 
organizations in the AI technical community.
    Mr. Hurd. Mr. Clark.
    Mr. Clark. So I want to put a slightly different spin on 
this, and that's about making sure that the decisions an AI 
makes are sensible. And what I mean by sensible is, you know, 
why do we send people to school? Why do we do professional 
accreditation? Well, it's because we train people in a specific 
skill, and then they're going to be put into a situation that 
they may not have encountered before, but we trust that the
training and education they had in school means that they'll
take the correct action. And this is a very common thing,
especially in areas like disaster response where we train 
people to be able to improvise. And these people may not be 
fully auditable: you ask them, why did you do that in that
situation? And they'll say, well, it seemed like the right
thing to do. That's not a super auditable response, but it's
because we're comforted by the training they've had.
    And so I think some of it is about how do we make sure that 
the AI systems we're creating are trained or taught by 
appropriate people. And that way we can have them act 
autonomously in ways that may not be traditionally 
interpretable, but we'll at least say, well, sure, that's 
sensible, and I understand why we trained them in that way.
    Mr. Hurd. Mr. Shapiro, you have 45 seconds, if you'd like 
to respond.
    Mr. Shapiro. You've raised so many issues that I'll pass on 
this one.
    Mr. Hurd. Mr. Issa, you're now recognized for round two.
    Mr. Issa. Thank you.
    While you were doing that, I was listening to the 
deactivation of HAL 9000.
    [Audio recording played.]
    Mr. Issa. Well, it's not 2001 anymore, but it is.
    You know, this dialogue on AI, this portion of it, I think,
particularly for people who saw that movie, is important,
because HAL was able to correspond, able to have a dialogue,
but it didn't have to answer honestly, and it didn't have to
be, if you will, proofed. In other words, nobody put into the
algorithm the ability for it to be queried and to answer.
    So I think for all of us that are having this dialogue and 
for those of you working in it, the question is will we make 
sure that we have open algorithms, ones that can be queried 
and, as a result, can be diagnosed. And if we don't have them, 
then what you have to rely on, as was being said, is outcome. 
Outcome is not acceptable.
    Outcome is why the IRS shut down yesterday and wasn't 
taking your tax returns on tax day, and all they knew was they
had to fix it, but they didn't know why it happened, or at
least why it happened in a portion of their system.
    So that's something that I can't do. We can't mandate. But 
there are some things that, hopefully, we can work on jointly.
    And, Mr. Shapiro, you know, nearly 100 years ago, the Radio 
Manufacturers Association formed. And one of the things it
began to do was standard setting. Earlier today, you talked
about how we
should have a standard, if you will, for what am I disclosing? 
Platinum, gold, silver. You had a way of saying it.
    My question to you is, where are the responsible parties, 
as the radio manufacturers, now CTA, 100 years ago, who began 
saying, if we're going to have the flourishing of technology, 
we're going to have to have standards? Privacy standards are 
complex, but how do you make them simple? Well, you make them 
simple by building standards that are predictable that people 
can share a decision process with their friends. Yes, I always 
go for silver if it's my medical and gold if it's my financial.
    You alluded to it. How do we get there knowing it's not 
going to be mandated from this side of the dais? Or at least we 
certainly couldn't come up with the examples.
    Mr. Shapiro. Well, I mean, that's not the only choice. I 
mean, there's the executive branch. The FTC is comfortable in 
that area. And sometimes----
    Mr. Issa. But wait a second. I know the FTC. They're very 
comfortable, after something goes bad, telling us it went 
wrong.
    How often are they actually able to predictively say what
the, quote, industry standard is before something happens? They
certainly haven't done it in data intrusions.
    Mr. Shapiro. Well, they do have a history of providing 
guidelines and standards. And I'm not advocating that. I'm 
not--what I'm saying is, on the issue of privacy and click-on, 
there are so many different systems out there that I am not 
personally convinced that the industry could come forward 
together without some concern that the government would act
instead. I
think it's always preferable for government and industry to 
work together, but sometimes the concern that government will 
act does drive industry to act. That's just the reality.
    In this area, it's--that cat's out of the bag a long time 
ago, and we're all clicking on stuff we don't understand. And 
that may have been one of the issues, even in the Facebook 
disclosures and things like that, which I think cause some 
concern, is that we're agreeing on things that we don't 
understand. I mean, I used to read that stuff. I've stopped a 
long time ago. It's just--you can't read it or understand it.
    Mr. Issa. But, Gary, back to what we were talking about in 
the last round. When we look at the healthcare and at personal 
information related to your healthcare, your drugs, your body 
weight, whatever it is, those are not such a large and complex 
hypothetical. Those are fairly definable.
    If we want to have the benefits of group data, such as a 
Fitbit gives and other data, and yet protect individual
privacy, isn't this a standard that we should be able to demand 
be produced and then codified hopefully with some part? I mean, 
the FTC is very good if you produce a standard of saying that's 
now the industry standard. They're less good at defining it and 
then--proactively.
    Mr. Shapiro. Well, thank you for raising that specific 
case. We have done that as an industry. We've come up with 
standards. They are voluntary, and we haven't heard about any 
data breach, that I'm aware of, in the personal wearable area, 
because I think that was a model that came together.
    The automobile industry is doing something similar, and 
other industries are doing it. It's not too late. I was just 
talking about the online click agreements.
    Mr. Issa. Sure.
    Mr. Shapiro. There's opportunity in other areas. And I 
think to move forward and to move forward quick, it's an 
opportunity. The advantage for the companies is they're kind of 
safe in the herd if they follow the herd.
    Mr. Issa. Wait. And I'm going to cut you off but not finish 
this.
    One door down there in Judiciary, we control the question 
of limiting liability for certain behavior or not limiting it. 
Where, in your opinion--and I can take the rest of you if the 
chairman will allow--where is it we need to act to show that if 
you live by those best practices, knowing that, just like that 
thing I played, it will not be 100 percent. But if you live by 
those practices, your liability is in some way limited. In 
other words, nonpunitive if you're doing the right things. 
Because right now, Congress has not acted fully to protect 
those who would like to enter this industry.
    Mr. Shapiro. Well, you've asked the question and answered 
it at the same time. Obviously, being in business yourself, you 
understand that risk and uncertainty are factors. We're seeing 
that in the trade problems we face today, the potential tariffs 
that----
    Mr. Issa. I did have that told to me just last night by 
somebody who knows about the question of not setting prices for 
their customers for Christmas because they don't yet know what 
the tariff will be.
    Mr. Shapiro. So uncertainty in the business environment. 
We're seeing it increasingly reflected in the stock market. But
in terms of potential liability, our companies welcome 
certainty.
    And one thing, for example, Congress did when credit cards 
were introduced, they said your liability as an individual is 
limited to $50, and it all of a sudden allowed people to get 
over that uncertainty of going from cash to credit cards. And 
it helped grow our economy enormously and take a lot of 
friction out.
    We're facing some of the same things now as we go forward 
in so many different areas because of AI. And we do have an 
opportunity for Congress to address and say, if you follow 
these practices, you have a safe harbor here. But that's a very 
difficult thing to do, and especially when it gets to the base 
of our privacy and leaks and things like that.
    But everyone's looking for best practices in the issues 
we were discussing earlier having to do with cyber and how you
protect. I mean, this game will never end. You build a
better mousetrap, you get smarter mice. So we're going to keep 
raising that bar, and that's the challenge that Congress will 
face.
    But some safe harbors would certainly be welcome in this
area as it grows rapidly. And I think there's a role to play.
And
I think this is a great amazing set of first three hearings to 
start on what will be a process with government and industry 
and consumers.
    Mr. Issa. I hear that from all of you. I saw a lot of heads 
nodding, that safe harbors should exist if we're going to 
promote the advancement of and use of data in our artificial 
intelligence.
    Any noes?
    Ms. Lyons. Well, I----
    Mr. Issa. There's always a caveat, but any noes?
    Ms. Lyons. I actually don't--I don't really have any 
comments about safe harbors specifically. But I think, in 
general, the issue of generating best practices is one which is 
really important to be considered in this field. And that, 
again, was sort of the reason why the Partnership on Artificial 
Intelligence was created, because there is a sort of 
understanding, I think, that's been come to in a collective 
sense about the necessity of determining what these guardrails 
should be, to a certain extent. And I think that project can't 
really be undertaken without the policy community as well as 
other stakeholders who just necessarily need to be involved.
    Mr. Issa. Thank you, Mr. Chairman.
    Mr. Buchanan. I would also put myself down as embracing a 
caveat here, Congressman. I think one of the dangers is that we 
agree on a set of best practices that are not, in fact, 
anywhere near best practices and we think our work is done.
    So while I support safe harbors if they align to practices 
that do protect privacy and advance security, I would suggest 
we are a long way from having those practices in place today.
So we
should not lock in the status quo and think our work is done.
    Mr. Issa. Thank you.
    You know, I've owned Model Ts. I've owned cars from the 
fifties, sixties, seventies, eighties and so on. I don't think 
we lock in best practices. We only lock them in for a product 
at the time that the product is new and innovative and we have 
an expectation for the manufacturer that that product will 
become obsolete. Nobody assumes that a Model T is the safest
vehicle, or even a '66 Mustang.
    But we do make expectations at the time of manufacturing.
You know, there was a time, years ago, when a man lost a limb
on a lathe, and he sued the company, even
though the lathe had been made in 1932, and it was already, you 
know, 50 years later. And we had to create a law that 
prohibited you from going back and using today's standards 
against the manufacturer. You could use it against the company 
if they hadn't updated, but you couldn't use it against the 
manufacturer.
    That's an example of safe harbor where, if you manufacture
to the standards of the day, you are not held to the standards
that
change on a product that is a fire-and-forget. You don't own it 
or control it. And so that's what I was referring to, your 
expectation that, yes, there has to be continuous innovation 
and that people have to stay up with the standards. Of course, 
we're not expecting that. But then the question is, will we see 
it from your side, or would we try to have the same people, you 
know, who have the system that shut down on the last tax filing 
day be the ones determining best practices.
    Thank you, Mr. Chairman.
    Mr. Hurd. Would the gentleman engage in a colloquy?
    Mr. Issa. Of course.
    Mr. Hurd. What's a Model T?
    No, I'm joking.
    Mr. Issa. Well, you know, I just want you to know that when 
the big bang comes, the Model T is one of the vehicles that 
will still crank up and run.
    Mr. Hurd. Well, I'm coming to your house, Congressman Issa.
    I have two final questions. The last question is actually a 
simple question. But the first question is--I recognize we can 
have a whole hearing on the topic. And I lump it generally in 
pre-crime, right? You have jails that are making decisions on 
whether someone should be released based on algorithms. We have
people making a decision about whether they
believe someone's going to potentially commit a crime in the 
future. And I would lump this in pre-crime. And the question 
is, should that be allowed?
    Gary.
    Mr. Shapiro. I'll foolishly take a shot at that. It depends 
on the risk involved. For example, in an airplane security 
situation, I think it makes sense to use biometrics and 
predictive technology and gait analysis and voice analysis and 
all the other tools that are increasingly available to predict 
whether someone's a risk on a flight. Israel does it 
increasingly, and it's--it makes sense.
    In a penal release system, I think we have more time and we 
are more sensitive to the fact that there are clearly racial 
differences in how we've approached things since day one. It 
may not make that much sense, so I'd say it's situational.
    Mr. Hurd. Mr. Clark.
    Mr. Clark. We have a robot at OpenAI, and we trained it to 
try and reach towards this water bottle. And so we obviously 
expected that the robot would eventually grab the water bottle 
and pick it up. But what we discovered the robot had learned to 
do was to take the table the water bottle was on and just bring 
it towards itself, fulfilling the objective, but not really in
the way we wanted it to.
    So I think I'd agree with Gary that maybe there are some 
specific areas where we're comfortable with certain levels of 
classification, because the risk of getting it wrong, like with
a plane, is so high. But I think we should be incredibly
cautious,
because this is a road where, once you go down it, you're 
dealing with people's lives. And you can't, in the case of pre-
crime, really inspect whether it's pulling that table towards
itself. It may be making completely bananas decisions, and
you're
not going to have an easy way to find out, and you've dealt 
with someone's life in the process. So I'd urge caution here.
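    A toy illustration, assuming Python, of the objective-gaming
behavior Mr. Clark recounts (this is not OpenAI's actual
setup): if the reward only says to shrink the distance between
gripper and bottle, dragging the bottle toward the gripper
scores exactly as well as reaching for it.
```python
# Toy illustration (not the actual OpenAI robot): the reward only asks that
# the distance between gripper and bottle shrink, so moving the bottle (by
# dragging the table) scores exactly as well as reaching for it.

def reward(gripper_pos, bottle_pos):
    return -abs(gripper_pos - bottle_pos)  # higher reward means closer together

gripper, bottle = 0.0, 10.0

intended = reward(gripper + 1.0, bottle)   # move the gripper toward the bottle
gamed = reward(gripper, bottle - 1.0)      # pull the table, and the bottle, closer

print("reward for the intended action:", intended)
print("reward for the gamed action:   ", gamed)
# Both actions earn the same reward, so nothing in the objective prefers the
# behavior we actually wanted -- the core caution about deploying such systems
# where people's lives are at stake.
```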
    Ms. Lyons. I'll say this with the caveat that I provided 
previously on other answers, which is that the Partnership 
hasn't yet had a chance to formulate a position on this 
formally. But I think that this question speaks to a lot of the 
challenges associated with bias in the field right now, which 
we discussed a little bit earlier. And I think also the 
challenges of what happens as a result of the 
decontextualization of technology and the application of it in 
areas where it may or may not be appropriate to have it be 
applied.
    So I think it's really important to consider the impacted 
communities, especially in the case of criminal justice 
applications. And I think that needs to be a required aspect of 
conversation about these issues.
    Mr. Hurd. Dr. Buchanan.
    Mr. Buchanan. I'd echo Ms. Lyons' points and Mr. Clark's 
points. I would make three other points here.
    The first is that, not only is there a risk of bias, but 
there's a risk--sometimes machine learning is said to be money 
laundering for bias, in that it takes something that's dirty
and outputs it with a veneer of impartiality that
comes from the computer. And we don't interrogate that system 
as much as we should. It's a major risk, I think, in this area 
but in many areas.
    Secondly, I think you posed the question somewhat as a
hypothetical. Mr. Clark is a measurer here, but I would 
encourage you and Mr. Clark to investigate how much the systems 
are already in place. I think ProPublica did an excellent bit 
of reporting on bias in sentencing in the criminal justice 
system already in place today. And that would certainly deserve 
more attention, in my view.
    And the third is that we should make sure that the inputs 
to the system are transparent and the system itself is 
transparent. And one of my concerns, speaking generally here, 
is that the systems used for sentencing now and in the future 
often are held in a proprietary fashion. So it's very hard to 
interrogate them and understand how they work. And, of course,
it's hard in general to understand the outputs of such a
system. And I think while that causes me concern in general, it 
should cause extreme concern in this case if we're sentencing 
people on the basis of proprietary, closed systems that we
do not fully understand in public view.
    Mr. Hurd. Thank you, Dr. Buchanan.
    And my last question is for the entire panel, and maybe, 
Dr. Buchanan, we start with you, and we'll work our way down. 
And it's real simple. Take 30 seconds to answer it.
    What would you all like to see from this committee and 
Congress when it comes to artificial intelligence in the 
future?
    Mr. Buchanan. Mr. Chairman, I think you've done a great job 
by holding this series of hearings. And I was encouraged by 
your suggestion that you'll produce a report on this.
    I think that the more you can do to force conversations
like this out in the open and elevate them as a matter of
policy discourse, the better. I would suggest, as an academic,
I view my job to think about topics that are important but are 
not urgent, that are coming but are not here in the next month 
or two. I would suggest that many committees in Congress should 
take that as a mandate as well, and I would encourage you to 
adopt that mindset as you approach AI.
    There are a lot of very important subjects in this field 
that will never reach the urgency of the next week or the next 
month, but will very quickly arrive and are still fundamentally 
important to virtually all of our society.
    Mr. Hurd. Ms. Lyons.
    Ms. Lyons. At the risk of redundancy, I also want to say 
thank you for the engagement, Chairman. I think that having 
more of these types of conversations and more knowledge 
transfer between those working on technology and those 
governing it in fora like this is deeply important.
    And I think--again, I'd like to offer myself and the rest 
of the organizations in the Partnership as a resource to 
whatever extent is possible in that project of education and 
further understanding. And I think that it's deeply important 
for our policymakers as well to consider the unique impact and 
role that they might have in technology governance, especially 
within the context of a multistakeholder setting, which is 
especially characteristic, I think, of the AI field right now.
    Thank you.
    Mr. Hurd. Well, before we get to you, Mr. Clark and Mr. 
Shapiro, you all aren't allowed to thank us, because I want to 
thank you all. As we've learned in the last couple of weeks,
many of our colleagues in both chambers are unfamiliar with
basic things like social media, and so we
have to elevate the common body of understanding on some of 
these topics. And so you all's participation today, you all's 
written statements, you all's oral arguments help inform many 
of us on a topic that, you know--when I went around the streets
here in the Capitol and asked everybody what AI is, most
people, if they were older than me, described HAL--that's why I
was laughing when Mr. Issa brought that in. And people
that were younger than me referred to Ava, right, from Ex 
Machina. And so you all are helping to educate us.
    So, Mr. Clark, Mr. Shapiro, what should this committee and 
Congress be doing on AI?
    Mr. Clark. Until the first time I tried to build a table, I 
was a measure once, cut twice type of person. And then
after I built that really terrible broken table, I became a 
measure twice, cut once person.
    The reason why I say that is that I think that if Congress 
and the agencies start to participate in more discussions like 
this, and we actually come to specific things that we need to 
measure that we want to build around, like competitions, it 
will further understanding in sort of both groups. Like, 
there's lots that the AI community can learn from these 
discussions. And I think the inverse is true as well. So I'd 
welcome that, and I think that's probably the best next step we 
can take.
    Mr. Hurd. Mr. Shapiro, last word.
    Mr. Shapiro. I'm happy to embrace my colleagues' offers and 
views and appreciation. I have three quick suggestions.
    One, I think you should continue this, but go to field 
hearings, to great places where there is technology, like 
Massachusetts or Las Vegas in January, CES.
    Second, I think government plays a major role, because 
government's a big buyer. In terms of procurement, I think you 
should focus on where AI could be used in procurement and set 
the goals and the results rather than focus on the very 
technical aspects of it.
    Third, while Congress may not easily pass legislation, it
could have a sense of Congress. It could adopt a sense of
Congress that it's an important
national goal that we cut automobile deaths or we do certain 
things by a certain date. And setting a national goal with or 
without the administration could be very valuable in terms of 
gathering the Nation and moving us forward in a way which 
benefits everyone and really keeps our national lead in AI.
    Mr. Hurd. That's a great way to end our series.
    I want to thank our witnesses for appearing before us 
today. The record is going to remain open for 2 weeks for any 
member to submit a written opening statement or questions for 
the record.
    And if there's no further business, without objection, the 
subcommittee stands adjourned.
    [Whereupon, at 3:37 p.m., the subcommittee was adjourned.]