[House Hearing, 118 Congress]
[From the U.S. Government Publishing Office]


                      CONSIDERING DHS'S AND CISA'S ROLE IN 
                         SECURING ARTIFICIAL INTELLIGENCE

=======================================================================

                                HEARING

                               BEFORE THE

                            SUBCOMMITTEE ON
                    CYBERSECURITY AND INFRASTRUCTURE
                               PROTECTION

                                 OF THE

                     COMMITTEE ON HOMELAND SECURITY
                        HOUSE OF REPRESENTATIVES

                    ONE HUNDRED EIGHTEENTH CONGRESS

                             FIRST SESSION

                               __________

                           DECEMBER 12, 2023

                               __________

                           Serial No. 118-44

                               __________

       Printed for the use of the Committee on Homeland Security
                                     

[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]
                                     
        Available via the World Wide Web: http://www.govinfo.gov
        
                              __________

                   U.S. GOVERNMENT PUBLISHING OFFICE                    
56-783 PDF                  WASHINGTON : 2024                    
          
-----------------------------------------------------------------------------------     

                     COMMITTEE ON HOMELAND SECURITY

                 Mark E. Green, MD, Tennessee, Chairman
Michael T. McCaul, Texas             Bennie G. Thompson, Mississippi, 
Clay Higgins, Louisiana                  Ranking Member
Michael Guest, Mississippi           Sheila Jackson Lee, Texas
Dan Bishop, North Carolina           Donald M. Payne, Jr., New Jersey
Carlos A. Gimenez, Florida           Eric Swalwell, California
August Pfluger, Texas                J. Luis Correa, California
Andrew R. Garbarino, New York        Troy A. Carter, Louisiana
Marjorie Taylor Greene, Georgia      Shri Thanedar, Michigan
Tony Gonzales, Texas                 Seth Magaziner, Rhode Island
Nick LaLota, New York                Glenn Ivey, Maryland
Mike Ezell, Mississippi              Daniel S. Goldman, New York
Anthony D'Esposito, New York         Robert Garcia, California
Laurel M. Lee, Florida               Delia C. Ramirez, Illinois
Morgan Luttrell, Texas               Robert Menendez, New Jersey
Dale W. Strong, Alabama              Yvette D. Clarke, New York
Josh Brecheen, Oklahoma              Dina Titus, Nevada
Elijah Crane, Arizona
                      Stephen Siao, Staff Director
                  Hope Goins, Minority Staff Director
                       Sean Corcoran, Chief Clerk
                                 ------                                

      SUBCOMMITTEE ON CYBERSECURITY AND INFRASTRUCTURE PROTECTION

                Andrew R. Garbarino, New York, Chairman
Carlos A. Gimenez, Florida           Eric Swalwell, California, Ranking 
Mike Ezell, Mississippi                  Member
Laurel M. Lee, Florida               Sheila Jackson Lee, Texas
Morgan Luttrell, Texas               Troy A. Carter, Louisiana
Mark E. Green, MD, Tennessee (ex     Robert Menendez,  New Jersey
    officio)                         Bennie G. Thompson, Mississippi 
                                         (ex officio)
               Cara Mumford, Subcommittee Staff Director
           Moira Bergin, Minority Subcommittee Staff Director
                            
                            
                            C O N T E N T S

                              ----------                              
                                                                   Page

                               Statements

The Honorable Andrew R. Garbarino, a Representative in Congress 
  From the State of New York, and Chairman, Subcommittee on 
  Cybersecurity and Infrastructure Protection:
  Oral Statement.................................................     1
  Prepared Statement.............................................     2
The Honorable Eric Swalwell, a Representative in Congress From 
  the State of California, and Ranking Member, Subcommittee on 
  Cybersecurity and Infrastructure Protection::
  Oral Statement.................................................     3
  Prepared Statement.............................................     5
The Honorable Bennie G. Thompson, a Representative in Congress 
  From the State of Mississippi, and Ranking Member, Committee on 
  Homeland Security:
  Prepared Statement.............................................     6
The Honorable Sheila Jackson Lee, a Representative in Congress 
  From the State of Texas:
  Prepared Statement.............................................     7

                               Witnesses

Mr. Ian Swanson, Chief Executive Officer and Founder, Protect AI:
  Oral Statement.................................................    10
  Prepared Statement.............................................    12
Ms. Debbie Taylor Moore, Senior Partner and Vice President, 
  Global Cybersecurity, IBM Consulting:
  Oral Statement.................................................    15
  Prepared Statement.............................................    16
Mr. Timothy O'Neill, Chief Information Security Officer, Product 
  Security Vice President, Hitachi Vantara:
  Oral Statement.................................................    20
  Prepared Statement.............................................    22
Mr. Alex Stamos, Chief Trust Officer, SentinelOne:
  Oral Statement.................................................    24
  Prepared Statement.............................................    26

                                Appendix

Questions From Chairman Andrew R. Garbarino for Ian Swanson......    59
Questions From Chairman Andrew R. Garbarino for Debbie Taylor 
  Moore..........................................................    59
Questions From Chairman Andrew R. Garbarino for Tim O'Neill......    61
Questions From Chairman Andrew R. Garbarino for Alex Stamos......    63

 
 CONSIDERING DHS'S AND CISA'S ROLE IN SECURING ARTIFICIAL INTELLIGENCE

                              ----------                              


                       Tuesday, December 12, 2023

             U.S. House of Representatives,
                    Committee on Homeland Security,
                         Subcommittee on Cybersecurity and 
                                 Infrastructure Protection,
                                                    Washington, DC.
    The subcommittee met, pursuant to notice, at 10 a.m., in 
room 310, Cannon House Office Building, Hon. Andrew R. 
Garbarino (Chairman of the subcommittee) presiding.
    Present: Representatives Garbarino, Gimenez, Ezell, Lee, 
Luttrell, Swalwell, Jackson Lee, Carter, and Menendez.
    Also present: Representatives Pfluger and Higgins.
    Mr. Garbarino. The Committee on Homeland Security's 
Subcommittee on Cybersecurity and Infrastructure Protection 
will come to order.
    Without objection, the Chair may recess at any point.
    The purpose of this hearing is to receive testimony from a 
panel of expert witnesses on the cybersecurity uses--use cases 
for artificial intelligence, or AI, and the security of the 
technology itself following the administration's release of the 
Executive Order on Safe, Secure, and Trustworthy Development 
and Use of Artificial Intelligence.
    I now recognize myself for an opening statement.
    Thank you to our witnesses for being here to talk about a 
very important topic, securing artificial intelligence, or AI. 
I'm proud that this subcommittee has completed thorough 
oversight of CISA's many missions this year from its Federal 
cybersecurity mission to protecting critical infrastructure 
from threats.
    Now as we head into 2024, it is important that we take a 
closer look at the emerging threats and technologies that CISA 
must continue to evolve with, including AI.
    AI is a hot topic today amongst Members of Congress and 
Americans in every single one of our districts. AI is a broad 
umbrella term, encompassing many different technology-use cases 
from a predictive maintenance alerts and operational technology 
to large language models, like ChatGPT, making building a 
common understanding of the issue difficult.
    As a general curiosity in and--as the general curiosity in 
and strategic application of AI across various sectors 
continues to develop, it is vitally important that the 
Government and the industry work together to build security 
into the very foundation of the technology, regardless of the 
specific use.
    The administration's Executive Order, or EO, is the first 
step in building that foundation. DHS and CISA are tasked in 
the EO with, No. 1, ensuring security of the technology itself; 
and No. 2, developing cybersecurity use cases for AI. But the 
effectiveness of this EO will come down to its implementation.
    DHS and CISA must work with the recipients of the products 
they develop like Federal agencies and critical infrastructure 
owners and operators to ensure the end results meet their 
needs.
    This subcommittee intends to pursue productive oversight 
over these EO tasks. The time lines laid out in the EO are 
ambitious, and it is positive to see that CISA's timely release 
of their road map for AI and internationally-supported 
Guidelines for Secure AI System Development. At its core, AI is 
software, and CISA should look to build AI considerations into 
its existing efforts rather than creating entirely new ones 
unique to AI.
    Identifying all future-use cases of AI is nearly 
impossible, and CISA should ensure that its initiatives are 
iterative, flexible, and continuous, even after the deadlines 
in the EO pass, to ensure that the guidance it provides stands 
the test of time.
    Today, we have four expert witnesses who will help shed the 
light on the potential risks related to the use of AI and 
critical infrastructure, including how AI may enable malicious 
cyber actors' offensive attacks but also, how AI may enable 
defense cyber tools for threat detection, prevention, and 
vulnerability assessments.
    As we all learn more about improving the security and 
secure usage of AI from each of these experts today, I'd like 
to encourage the witnesses to share questions that they might 
not have yet the answer to. With rapidly-evolving technology 
like AI, we should accept that there may be more questions than 
answers at this stage.
    The subcommittee would appreciate any perspectives you 
might have that could shape our oversight of DHS and CISA as 
they reach their EO deadlines next year.
    I look forward to our witness testimony and to developing 
productive questions for DHS and CISA together here today.
    [The statement of Chairman Garbarino follows:]
               Statement of Chairman Andrew R. Garbarino
                           December 12, 2023
    Thank you to our witnesses for being here to talk about a very 
important topic: securing artificial intelligence, or AI. I'm proud 
that this subcommittee has completed thorough oversight over CISA's 
many missions this year from its Federal cybersecurity mission to 
protecting critical infrastructure from threats. Now, as we head into 
2024, it's important that we take a closer look at emerging threats and 
technologies that CISA must continue to evolve with, including AI.
    AI is a hot topic today amongst Members of Congress and Americans 
in every single one of our districts. AI is a broad umbrella term, 
encompassing many different technology use cases from predictive 
maintenance alerts in operational technology to large language models 
like ChatGPT, making building a common understanding of the issues 
difficult. As the general curiosity in and strategic application of AI 
across various sectors continues to develop, it's vitally important 
that Government and industry work together to build security into the 
very foundation of the technology regardless of the specific use case.
    The administration's Executive Order is the first step in building 
that foundation. DHS and CISA are tasked in the EO with: (1) Ensuring 
the security of the technology itself, and (2) developing cybersecurity 
use cases for AI. But the effectiveness of this EO will come down to 
its implementation. DHS and CISA must work with the recipients of the 
products they develop, like Federal agencies and critical 
infrastructure owners and operators, to ensure the end results meet 
their needs. This subcommittee intends to pursue productive oversight 
over these EO tasks.
    The time lines laid out in the EO are ambitious, and it is positive 
to see CISA's timely release of their Roadmap for AI and 
internationally-supported Guidelines for Secure AI System Development. 
At its core, AI is software and CISA should look to build AI 
considerations into its existing efforts rather than creating entirely 
new ones unique to AI. Identifying all future-use cases of AI is nearly 
impossible, and CISA should ensure that its initiatives are iterative, 
flexible, and continuous, even after the deadlines in the EO pass, to 
ensure the guidance it provides stands the test of time.
    Today, we have four expert witnesses who will help shed light on 
the potential risks related to the use of AI in critical 
infrastructure, including how AI may enable malicious cyber actors' 
offensive attacks, but also how AI may enable defensive cyber tools for 
threat detection, prevention, and vulnerability assessments.
    As we all learn more about improving the security and secure usage 
of AI from each of these experts today, I'd like to encourage the 
witnesses to share questions that they might not have the answer to 
just yet. With rapidly-evolving technology like AI, we should accept 
that there may be more questions than answers at this stage. The 
subcommittee would appreciate any perspectives you might have that 
could shape our oversight of DHS and CISA as they reach their EO 
deadlines next year.
    I look forward to our witnesses' testimony and to developing 
productive questions for DHS and CISA together here today.

    Mr. Garbarino. I now recognize the Ranking Member, the 
gentleman from California, Mr. Swalwell, for his opening 
statement.
    Mr. Swalwell. Thank you, Chairman.
    As we close out the year, I want to thank the Chairman for 
what I think has been a pretty productive year on this 
subcommittee as we've taken on a lot of the challenges in this 
realm.
    I also want to offer my condolences to the Chairman of the 
overall committee for the families impacted by the devastating 
tornadoes that touched down in Chairman Green's district in 
Tennessee over the weekend. So my staff and I and the committee 
staff are keeping Chairman Green and his constituents in our 
thoughts as we grieve for those that we've lost as they 
rebuild.
    Turning to the topic of today's hearing, the potential of 
artificial intelligence has captivated scientists and 
mathematicians since the late 1950's. Public interest has 
grown, of course, from watching Watson beat Ken Jennings at 
``Jeopardy'' to AlphaGo debating and defeating the world 
champion Go player in 2015 to the debut of ChatGPT just over a 
year ago.
    The developments of AI over the past 5 years have been 
generating interest in investment and have served as a catalyst 
to drive public policy that will ensure that the United States 
remains a global leader in innovation and that AI technology is 
deployed safely, securely, and responsibly.
    Over the past year alone, the Biden administration has 
issued a blueprint for AI rights, a National AI Research 
Resource Roadmap, a National AI R&D Strategic Plan, and secured 
voluntary commitments by the Nation's top AI companies to 
develop AI technology safely and securely.
    Of course, as the Chairman referenced, just over a month 
ago the President signed a comprehensive Executive Order that 
brings the full resources of the Federal Government to bear to 
ensure the United States can fully harness the potential of AI, 
while mitigating the full range of risks that it brings.
    I was pleased that this Executive Order directs close 
collaboration with our allies as we develop policies for the 
development and use of AI. For its part, CISA is working with 
its international partners to harmonize guidance for the safe 
and secure development of AI. Two weeks ago, CISA and the 
United Kingdom's National Cybersecurity Center issued joint 
guidelines for secure AI system development. These guidelines 
were also signed by the FBI and the NSA, as well as 
international cybersecurity organizations from Australia, 
Canada, France, Germany, and Japan, among others.
    Moving forward, harmonizing AI policies with our partners 
abroad and across the Federal enterprise will be critical to 
promoting the secure development of AI without stifling 
innovation or unnecessarily slowing deployment. As we promote 
advancements in AI, we must remain cognizant that it is a 
potent dual-use technology.
    I also just want to touch a little bit on deepfakes, and I 
hope the witnesses will, as well. They are easier and less 
expensive to produce, and the quality is better. Deepfakes can 
also make it easier for our adversaries to masquerade as public 
figures, and either spread misinformation or undermine their 
credibility. Deepfakes have the potential to move markets, 
change election outcomes, and affect personal relationships.
    We must prioritize investing in technologies that will 
empower the public to identify deepfakes. Watermarking is a 
good start, but not the only solution.
    The novelty of AI's new capability has also raised 
questions about how to secure it. Fortunately, many existing 
security principles, which have already been socialized, apply 
to AI. To that end, I was pleased that CISA's recently-released 
AI road map didn't seek to reinvent the wheel where it wasn't 
necessary and instead, integrates AI into existing efforts like 
Secure By Design and software build of materials.
    In addition to promoting the secure development of AI, I'll 
be interested to learn from the witnesses how CISA can use 
artificial intelligence to better execute its broad mission 
set. CISA is using AI-enabled endpoint detection tools to 
improve Federal network security, and the Executive Order from 
the President directs CISA to conduct a pilot program that 
would deploy AI tools to autonomously identify and remediate 
vulnerabilities on Federal networks. AI also has the potential 
to improve CISA's ability to carry out other aspects of its 
mission including analytic capacity.
    As a final matter, as policy makers, we need to acknowledge 
that CISA will require the necessary resources and personnel to 
fully realize the potential of AI, while mitigating the threat 
it poses to national security.
    I once again urge my colleagues to reject any proposal that 
would slash CISA's budget in fiscal year 2024, as AI continues 
to expand and will need to embrace and use it to take on the 
threats in the threat environment.
    So with that, I look forward to witness' testimony. I thank 
the Chairman for holding the hearing.
    I yield back.
    [The statement of Ranking Member Swalwell follows:]
               Statement of Ranking Member Eric Swalwell
                           December 12, 2023
    The potential of Artificial Intelligence has captivated scientists 
and mathematicians since the late 1950's.
    Public interest has grown each time AI has achieved a new 
milestone--from Watson beating Ken Jennings at Jeopardy to AlphaGo 
defeating the World Champion Go player in 2015 to the debut of ChatGPT 
just over 1 year ago today.
    The developments in AI over the past 5 years have generated 
interest and investment and served as a catalyst to drive public policy 
that will ensure that the United States remains a global leader in 
innovation and that AI technology is deployed safely, securely, and 
responsibly.
    Over the past year alone, the Biden administration has issued a 
Blueprint for an AI Bill of Rights, a National AI Research Resource 
Roadmap, a National AI R&D Strategic Plan, and secured voluntary 
commitments by the Nation's top AI companies to develop AI technology 
safely and securely. Just over 1 month ago, the President signed a 
comprehensive Executive Order that brings the full resources of the 
Federal Government to bear to ensure the United States can fully 
harness the potential of AI while mitigating the full range of risks it 
brings.
    I was pleased that the Executive Order directs close collaboration 
with our allies as we develop policies for the development and use of 
AI. For its part, CISA is working with its international partners to 
harmonize guidance for the safe and secure development of AI. Two weeks 
ago, CISA and the UK's National Cyber Security Centre issued Joint 
Guidelines for Secure AI System Development. These Guidelines were also 
signed by the FBI and NSA, as well as international cybersecurity 
organizations from Australia, Canada, France, Germany, and Japan, among 
others.
    Moving forward, harmonizing AI policies with our partners abroad 
and across the Federal enterprise will be critical to promoting the 
secure development of AI without stifling innovation or unnecessarily 
slowing deployment. As we promote advancements in AI, we must remain 
cognizant that it is a potent dual-use technology.
    Today, deepfakes are easier and less expensive to produce and the 
quality is better. That means that it takes relatively little skill for 
a jealous ex-boyfriend to produce a revenge porn video to harass and 
humiliate a woman, or for a criminal to produce a child abuse video. 
Deepfakes can also make it easier for our adversaries to masquerade as 
public figures and either spread misinformation or undermine their 
credibility. We must prioritize investing in technologies that will 
empower the public to identify deepfakes. Watermarking is a good start, 
but it is not enough.
    The novelty of AI's new capabilities has also raised questions 
about how to secure it. Fortunately, many existing security 
principles--which have already been socialized--apply to AI. To that 
end, I was pleased that CISA's recently-released AI Roadmap didn't seek 
to re-invent the wheel where it wasn't necessary, and instead 
integrates AI into existing efforts like ``secure-by-design'' and 
``software bill of materials.'' In addition to promoting the secure 
development of AI, I will be interested in learning how CISA can use 
artificial intelligence to better execute its broad mission set.
    Already, CISA is using AI-enabled endpoint detection tools to 
improve Federal network security, and the Executive Order directs the 
Department to conduct a pilot program that would deploy AI tools to 
autonomously identify and remediate vulnerabilities on Federal 
networks. AI also has the potential to improve CISA's ability to carry 
out other aspects of its mission, including its analytic capacity.
    CISA's success rests on its ability to analyze disparate data 
streams and draw conclusions that enable network defenders to protect 
against cyber threats and help critical infrastructure owners and 
operators build resilience by understanding critical risks and 
interdependencies. The enormity of this task continues to grow.
    For example, Congress dramatically improved CISA's visibility into 
malicious cyber activity on domestic networks by authorizing mandatory 
cyber incident reporting and the CyberSentry program--both of which 
will generate large amounts of new data that CISA must ingest, analyze, 
and action. Improved operational collaboration programs--like the Joint 
Cyber Defense Collaborative--will similarly yield more data that should 
inform CISA's security products. I am interested in understanding how 
CISA can better leverage AI to scale and improve the analytic capacity 
that is central to its mission.
    As a final matter, as policy makers, we need to acknowledge that 
CISA will require the necessary resources and personnel to fully 
realize the potential of AI while mitigating the threat it poses to 
National security. I once again urge my colleagues to reject proposals 
to slash CISA's budget in fiscal year 2024.

    Mr. Garbarino. Thank you, Ranking Member Swalwell.
    Before we go on to the witnesses, without objection, I 
would like to allow Mr. Pfluger from Texas and Mr. Higgins from 
Louisiana to waive onto the subcommittee for this hearing.
    OK. So moved.
    Other Members of the committee are reminded that opening 
statements may be submitted for the record.
    [The statements of Ranking Member Thompson and Hon. Jackson 
Lee follow:]
             Statement of Ranking Member Bennie G. Thompson
                           December 12, 2023
    This hearing builds on previous work in this subcommittee to 
understand how emerging technologies will impact our National security. 
In the last two Congresses, Chairman Richmond and Chairwoman Clarke 
held hearings on how AI, quantum computing, and other technologies 
would affect cybersecurity and how the Federal Government can better 
prepare for their impact.
    The release of ChatGPT last year demonstrated to the world what we 
already knew: that AI is not some hypothetical technology of the 
future, but a tool being used today with tremendous potential but also 
risks that need to be understood for effective security policy making.
    Fortunately, since taking office, President Biden has made 
developing AI policy a priority. The October release of Executive Order 
14110 reflects months of consultations and is a comprehensive effort to 
ensure that agencies across the Federal Government are working to 
address the full range of challenges AI presents, while harnessing its 
power to improve Government services, enhance our security, and 
strengthen our economy.
    I was particularly pleased to see that the EO incorporated the 
civil rights, civil liberties, and privacy emphasis included in the 
administration's Blueprint for an AI Bill of Rights. AI systems are 
built by humans and therefore subject to the biases of their 
developers. To overcome this, addressing civil rights concerns must be 
baked into the AI development process, and I appreciate the Biden 
administration's emphasis on this issue throughout the Executive Order. 
As we all know, good intentions are not enough, which is why 
Congressional oversight of the EO's implementation will be so 
important.
    Today's hearing allows this subcommittee to hear the perspectives 
of leading AI industry stakeholders on how DHS and CISA can implement 
their responsibilities under the EO and how they can support the safe 
and secure use of AI.
    For cybersecurity, AI offers tremendous opportunities to enhance 
the ability for network defenders to detect vulnerabilities and 
intrusions and respond to incidents. But, it also may be utilized by 
our adversaries to facilitate more attacks. As generative AI continues 
to advance, the risk grows that deepfakes and other inauthentic 
synthetic content will be used to by foreign governments to undermine 
our democracy and by cyber criminals to facilitate their crimes.
    DHS and CISA must have a central role in ensuring that AI 
technology improves our security rather than harms it. To do so, it 
will be essential that we consider perspectives of our private 
industry, where so much of the most advanced work in the world in AI 
development is taking place. I hope to hear more today on how DHS and 
CISA can best utilize AI, how they can support efforts to secure AI 
systems, and how they can reduce the risks AI may pose to critical 
infrastructure.
    CISA's on-going work on developing secure-by-design principles and 
its partnerships with critical infrastructure make it an essential part 
of the administration's whole-of-Government approach to AI policy and 
will allow CISA to build AI security efforts into their existing 
programs and policies. For CISA to carry out that role, it must have 
the proper workforce that understands AI and its security implications.
    Building up our National talent pool of AI expertise and ensuring 
that Federal agencies can recruit and retain employees with the right 
skills will be essential if we are to address AI's challenges while 
utilizing its full potential. EO 14110 provides directives across the 
Federal Government to strengthen our National AI policy, and I stand 
ready to partner with my colleagues on this committee to ensure DHS and 
CISA have the resources and authorities necessary to carry out their 
responsibilities.
                                 ______
                                 
                  Statement of Hon. Sheila Jackson Lee
                           December 12, 2023
    Chairman Garbarino, and Ranking Member Swalwell, thank you for 
holding today's hearing on ``Considering DHS's and CISA's Role in 
Securing Artificial Intelligence.''
    I look forward to the questions that will follow the testimony of:
   Mr. Ian Swanson, chief executive officer and founder, 
        Protect AI;
   Ms. Debbie Taylor Moore, senior partner and vice president, 
        Global Cybersecurity, IBM Consulting;
   Mr. Timothy O'Neill, chief information security officer and 
        product security, Hitachi Vantara; and
   Mr. Alex Stamos, chief trust officer, SentinelOne 
        (*Democratic Witness*).
    I welcome the witnesses and thank them for their testimony before 
the House Homeland Security Committee.
    The purpose of this hearing is to provide an opportunity to hear 
from private industry on how the Department of Homeland Security (DHS) 
and the Cybersecurity and Infrastructure Security Agency (CISA) can 
support efforts to secure artificial intelligence (AI) and how the use 
of AI will impact cybersecurity.
    Members of the Committee will hear perspectives on how DHS and CISA 
can best implement their responsibilities under President Biden's 
recent AI Executive Order 14110.
    Executive Order 14110 represents a comprehensive effort by the 
Biden administration to maintain U.S. dominance in innovation while 
ensuring artificial intelligence (AI) reflects U.S. values, including 
prioritization of safety and security and respect for civil rights, 
civil liberties, and privacy.
    DHS and CISA have expertise and capabilities that can facilitate 
the responsible development and deployment of AI across Federal 
networks and critical infrastructure to ensure its stakeholders can 
harness the potential of AI while mitigating the potential risks.
    The Executive Order outlines that where practicable, AI security 
policy should be aligned with existing efforts to strengthen the 
security of technology, such as secure-by-design.
    Given the momentum associated with AI policy, such alignment could 
help to further accelerate the implementation of such security 
principles to broader sets of technology while streamlining DHS/CISA 
guidance.
    DHS's and CISA's responsibilities under the EO fall primarily into 
two categories: (1) ensuring the safety and security of AI; and (2) 
promoting innovation and competition, particularly with respect to 
attracting AI talent and protecting AI research and development from 
intellectual property theft.
    The Federal Executive branch is comprised of civilian Federal 
agencies that provide the full scope of benefits and services to 
residents of the States and territories as well as support of domestic 
law enforcement and homeland security needs.
    The Federal Executive branch is also charged with providing 
rigorous oversight of a full scope of goods and services to ensure the 
health and safety of the American people.
    Much of the regulatory strength of the Federal Government was built 
in the early to mid-20th Century, when notable events brought experts, 
Government oversight, industry leaders, as well as labor and consumer 
advocates together to demand safer cars, food, drinking water, safer 
construction for schools, homes, and multi-family dwellings, as well as 
safe processes that governed automotive, rail, aviation, and shipping 
to reduce hazards and accidents.
    Each of the steps taken to put demands on industries to make 
products safe were often proceeded by a calamity.
    For example, in the early 1900's, foodborne diseases such as 
botulism, tuberculosis, typhoid fever, were at the highest incidence 
recorded and prevalence while also being the leading cause of 
increasing mortality rates all over the world.
    By 1906, the U.S. Congress responded with passage of the Pure Food 
and Drugs Act, that prohibited inter-State commerce in adulterated and 
misbranded food and drugs.
    Following the experiences of World War II that brought to light the 
suffering that could be caused by the spread of disease associated with 
breakdowns in social order that impacted routine access to clean water, 
and uncontaminated food supplies many more laws were passed.
    In the early 1900's the frequency of automobile accidents prompted 
manufacturers to incrementally improve vehicles by adding windshield 
wipers, headlights, enclosed spaces for drivers and passengers.
    However, it was not until Ralph Nader's book ``Unsafe at Any 
Speed,'' shocked the American public and brought unprecedented 
attention to automobile safety.
    On the same path of Nader's work was that of Najeeb Halaby, the 
chief of the independent Federal Aviation Agency who convinced 
President Lyndon Johnson to create a Federal Transportation department 
to merge aviation and rail safety into a single agency focused on 
transportation, which would also include automobile safety.
    The development of computing technology did not follow a path that 
took it toward safety and improvements.
    This lack of Government or judicial oversight created a culture of 
normalized brokenness that exists to this day.
    Errors and problems with computing devices or applications are 
often fixed by turning a device off and on again and this is accepted 
as normal, while it would never be allowed in other serious areas of 
engineering such as for cars, planes, trains, or elevators.
    The challenge is almost inconceivable--how do we fix the 
underpinning of computing software for all applications and devices so 
that we can have a baseline of trust for the work being done for AI.
    AI's goal is to replace many tasks performed by humans with 
machines, but the consequences for human error and computing error are 
not the same.
    Human errors are costly and borne by the person or the company they 
represent, while a computer error is borne by the purchaser not the 
manufacturer.
    This situation in an AI world would create incentives to replace 
people with machines that are not held to the same standards of care as 
people.
    AI is generally understood to mean computerized systems that 
operate in ways commonly thought to require intelligence.
    While precise definitions vary, President Biden's recent Executive 
Order 14110 defined AI as ``a machine-based system that can, for a 
given set of human-defined objectives, make predictions, 
recommendations, or decisions influencing real or virtual environments.
    ``Artificial intelligence systems use machine- and human-based 
inputs to perceive real and virtual environments; abstract such 
perceptions into models through analysis in an automated manner; and 
use model inference to formulate options for information or action.''
    This makes the assumption that AI's human-based inputs function as 
intended free from errors or omissions.
    AI offers a wide range of potential applications across different 
sectors.
    In cybersecurity, AI has largely worked to the advantage of network 
defenders.
    For example, conventional cybersecurity tools defend against known 
matches to malicious code, so hackers must modify small portions of 
that code to circumvent the defense.
    AI-enabled tools, on the other hand, can be trained to detect 
anomalies in network activity, thus presenting a more comprehensive and 
dynamic barrier to attack.
    In the aftermath of the 2020 Solar Winds cyber campaign, Federal 
agencies and the private sector have expedited implementation of 
Endpoint Detection and Response systems that utilize AI to detect 
anomalous network activity.
    However, AI has and will continue to be used in myriad ways that 
undermine National security, individual privacy, or introduce new and 
novel attack vectors.
    Rapid advances in generative AI, as highlighted by the release of 
ChatGPT in November 2022, have increased concerns about how more 
advanced versions of AI may increase security risks.
    Generative AI ``means the class of AI models that emulate the 
structure and characteristics of input data in order to generate 
derived synthetic content.
    This can include images, videos, audio, text, and other digital 
content.
    There will be no one definition of AI or one method that defines 
what it is or what it will mean.
    The efforts behind AI are focused on may not be able to plan for 
all possible outcomes, but one that may make this conversation much 
more challenging is the creation of machines that can write their own 
computing code or algorithms without human intervention will quickly 
lead to code that is only understood by AI.
    Shortly after President Biden signed the EO, CISA released its 
2023-2024 Roadmap for Artificial Intelligence, which describes the 5 
lines of effort the agency will undertake under the Executive Order:
   Responsibly Use AI to Support [CISA's] Mission;
   Assure AI Systems;
   Protect Critical Infrastructure from Malicious Use of AI;
   Collaborate with and Communicate on Key AI Efforts with the 
        Interagency, International Partners, and the Public; and
   Expand AI Expertise in our Workforce.
   AI Human Bill of Rights.
    CISA's Roadmap works to leverage existing programs and policies to 
address AI security issues where possible while developing new policies 
and work streams where gaps in policies exist.
    Some of the more specific objectives CISA seeks to implement under 
its Roadmap include developing a strategy to adopt the next generation 
of AI-enabled technologies; generating best practices on the 
development and use of secure AI systems; engaging with international 
partners on global AI security; and recruiting staff with AI expertise.
    In line with CISA's commitment to international cooperation on AI 
policy and its goal of providing guidance on best practices for the 
private sector, last month, CISA and the United Kingdom's National 
Cyber Security Centre jointly released Guidelines for Secure AI System 
Development in collaboration with agencies from 16 other countries.
    The guidelines focused on securing all aspects of the AI 
development life cycle, including secure design, secure development, 
secure deployment, and secure operation, and maintenance.
    The publication aligns with CISA's broader focus on encouraging 
software developers to follow secure-by-design principles that ensure 
security is built into the technology product development process.
    As DHS increases its use of AI across its components and missions, 
in April of this year, Secretary Mayorkas established the DHS AI Task 
Force, which seeks to drive the use of AI across DHS while protecting 
civil rights, civil liberties, and privacy.
    The task force is chaired by the Department's chief AI officer and 
the under secretary for science and technology, with the officer for 
civil rights and civil liberties serving as vice chair.
    Its initial focus areas will be on the use of AI in combating 
fentanyl trafficking, strengthening supply chain security, countering 
child exploitation, and protecting critical infrastructure.
    AI offers a wide range of applications that will have significant 
security implications.
    DHS and CISA must seek to utilize AI to strengthen their ability to 
defend the homeland from cyber and other threats, defend against 
increased security risks posed by AI to critical infrastructure, and 
support secure and safe AI development across the Federal Government 
and private sector. Incorporating AI policy into existing security 
frameworks will ensure that AI security efforts align with broader 
Government policies and enhances efforts to build out stronger 
security.
    The Biden administration's efforts, including EO 14110, reflect 
major advances in Federal AI policy and full implementation of those 
policies, in consultation with private-sector experts and stakeholders, 
offer the potential to strengthen National security while mitigating 
the novel risks posed by AI.
    Thank you.

    Mr. Garbarino. I am pleased to have four witnesses before 
us today to discuss this very important topic. I ask that our 
witnesses please rise, raise their right hand.
    Do you solemnly swear that the testimony you will be--you 
will give before the Committee on Homeland Security of the U.S. 
House of Representatives will be the truth, the whole truth, 
and nothing but the truth, so help you God?
    Let the record reflect that the witnesses have all answered 
in the affirmative.
    Thank you. Please be seated.
    I would now like to formally introduce our witnesses.
    First, Ian Swanson is the CEO and founder of Protect AI, a 
cybersecurity company for AI. Prior to founding Protect AI, Mr. 
Swanson led Amazon Web Services' world-wide AI and a machine 
learning, or ML, business. He also led strategy for AI and ML 
products at Oracle. Earlier in his career he also founded 
DataScience.com and was an executive at American Express, 
Sprint, and Sometrics.
    Debbie Taylor Moore is vice president and senior partner 
for cybersecurity consulting services at IBM. She's a 20-plus-
year cybersecurity executive and subject-matter expert on 
emerging technologies in cybersecurity including AI. Ms. Moore 
has also led security organizations at SecureInfo, Kratos 
Defense, Verizon Business, and others.
    Timothy O'Neill is vice president, chief information 
security officer, and product security--and chief information 
security officer and product security at Hitachi Vantara, a 
subsidiary of Hitachi at the forefront of the information 
technology and operational technology convergence across 
multiple critical infrastructure sectors.
    Prior to this role, he held leader roles at Amazon, 
Hewlett-Packard, and Blue Shield of California. Mr. O'Neill has 
served as a law enforcement officer, focused on cyber crime 
forensics and investigations.
    Alex Stamos is the chief trust officer at SentinelOne where 
he works to improve the security and safety of the internet. 
Mr. Stamos has also helped companies secure themselves in prior 
roles at the Krebs Stamos Group, Facebook, and Yahoo! Of note, 
he also advises NATO's Cybersecurity Center of Excellence which 
this committee had the privilege of visiting Estonia in June.
    Thank you all for being here today.
    Mr. Swanson, I now recognize you for 5 minutes to summarize 
your opening statement.

STATEMENT OF IAN SWANSON, CHIEF EXECUTIVE OFFICER AND FOUNDER, 
                           PROTECT AI

    Mr. Swanson. Good morning, Members of the Subcommittee on 
Cybersecurity and Infrastructure Protection.
    I want to start by thanking the Chairman and Ranking Member 
for hosting this important hearing and inviting me to provide 
testimony.
    My name is Ian Swanson. I am the CEO of Protect AI. Protect 
AI is a cybersecurity company for artificial intelligence and 
machine learning. For many companies and organizations, AI is 
the vehicle for digital transformation, and machine learning is 
the power train. As such, a secure machine learning model 
serves as the cornerstone for a safe AI application.
    Imagine there's a cake right here before us. We don't know 
how it got here, who delivered it. We don't know the baker. We 
don't know the ingredients or the recipe. Would you eat a slice 
of this cake? Likely not.
    This cake is not just any dessert. It represents the AI 
systems that are becoming increasingly fundamental to our 
society and economy. Would you trust AI if you did not know how 
it was built, if you did not know the practitioner who built 
it? How would you know it is secure?
    Based on my experience, millions of machine learning models 
powering AI are currently operational Nation-wide, not only 
facilitating daily activities but also embedded in mission-
critical systems and integrated within our physical and digital 
infrastructure.
    Given the importance of these systems to a safe, 
functioning Government, I pose a critical question: If this 
committee were to request a comprehensive inventory of all 
machine learning models and AI in use in any enterprise or U.S. 
Government agency detailing the ingredients, the recipe, and 
the personnel involved, would any witness, business, or agency 
be able to furnish a complete and satisfactory response? Likely 
not.
    Secure AI requires oversight and understanding of an 
organization's deployments. However, many deployments of AI are 
highly dispersed and can heavily be reliant on widely-used 
open-source assets essential to the AI life cycle. This 
situation potentially sets the stage for a major security 
vulnerability akin to the solar winds incident, posing a 
substantial threat to national security and interest. The 
potential impact of such a breach could be enormous and 
difficult to quantify.
    My intention today is not to alarm but to urge this 
committee and other Federal agencies to acknowledge the 
pervasive presence of AI in existing U.S. business and 
Government technology environments.
    It is imperative to not only recognize, but also safeguard 
and responsibly manage AI ecosystems. To help accomplish this, 
AI manufacturers and AI consumers alike should be required to 
see, know, and manage their AI risk.
    Yes, I believe the Government can help set policies to 
better secure artificial intelligence. Policies will need to be 
realistic in what can be accomplished, enforceable, and not 
shut down innovation or limit innovation to just large AI 
manufacturers.
    I applaud the work by CISA, and support the three Secure By 
Design software principals that serve as their guidance to AI 
software manufacturers.
    Manufacturers of AI machine learning must take ownership 
for the security of their products and be held responsible, be 
transparent on security status and risks of their products, and 
build in technical systems and business processes to ensure 
security throughout the AI and machine learning development 
life cycle, otherwise known as MLSecOps, machine learning 
security operations.
    While Secure By Design and CISA road map for artificial 
intelligence are a good foundation, it can go deeper in 
providing clear guidance on how to tactically extend the 
methodology to artificial intelligence. I recommend the 
following three starting actions to this committee and other 
U.S. Government organizations, including CISA, when setting 
policy for secure AI.
    Create a Machine Learning Bill of Materials standard in 
partnership with NIST and other U.S. Government entities for 
transparency, traceability, accountability, and AI systems, not 
just the Software Bill of Materials but a Machine Learning Bill 
of Materials.
    Invest in protecting the artificial intelligence and 
machine learning open-source software ecosystem. These are the 
essential ingredients for AI.
    Continue to enlist feedback and participation from 
technology start-ups, not just the large technology incumbents.
    I and my company, Protect AI, stand ready to help maintain 
the global advantage in technologies, economics, and innovation 
that will ensure the continuing leadership of the United States 
and AI for decades to come. We must Protect AI commensurate to 
the value it will deliver. There should be no AI in the 
Government or in any business without proper security of AI.
    Thank you, Mr. Chairman, Ranking Member, and the rest of 
the committee for the opportunity to discuss this critical 
topic of security of artificial intelligence. I look forward to 
your questions.
    [The prepared statement of Mr. Swanson follows:]
                   Prepared Statement of Ian Swanson
                           December 12, 2023
    Good morning Members of the Subcommittee on Cybersecurity and 
Infrastructure Protection. I want to start by thanking the Chairman and 
Ranking Member for hosting this important hearing and inviting me to 
provide testimony.
    My name is Ian Swanson, and I am the CEO of Protect AI. Protect AI 
is a cybersecurity company for artificial intelligence (AI), that 
enables organizations to deploy safe and secure AI applications. 
Previously in my career, I was a world-wide leader of AI/ML at Amazon 
Web Services and vice president of machine learning at Oracle. Protect 
AI was founded on the premise that AI security needed dramatic 
acceleration. When I first started Protect AI, we had to convince 
industries that the need for security of AI was necessary. Now, 
industries and governments are openly talking about this need, and 
shifting the conversation from education of AI security to building 
security into AI. Against the backdrop of regulation, more front-page 
headlines on AI/ML security risks, and proliferation of AI/ML-enabled 
tech to deliver business value, the recognition for securing AI/ML 
applications has never been greater.
    AI is the development of computer systems or machines that can 
perform tasks that typically require human intelligence. These tasks 
can include things like understanding natural language, recognizing 
patterns, making decisions, and solving problems. AI encompasses 
machine learning (ML), which, according to Executive Order 14110 is ``a 
set of techniques that can be used to train AI algorithms to improve 
performance on a task based on data.'' A ML model is an engine that can 
power an AI application and differentiate AI from other types of 
software code. For many companies and organizations, AI is the vehicle 
for digital transformation and ML is the powertrain. As such, a secure 
ML model serves as the cornerstone for a safe AI application, ensuring 
reliability and security akin to how robust software frameworks and 
high-grade hardware fortify an organization's technology ecosystem. 
This ML model, in essence, is an asset as indispensable as any other 
technology asset, such as databases, cloud computing resources, 
employee laptops and workstations, and networks. AI/ML assets have 
numerous challenges in developing, deploying, and maintaining it 
securely. These include:
   Limited Transparency in the Operations of AI/ML 
        Applications.--The complex nature of AI/ML algorithms leads to 
        challenges in transparency, making it difficult to perform 
        audits and investigative forensics of these systems.
   Security Risks in AI/ML's Open-Source Assets.--AI/ML 
        technologies often depend on open-source software, which, while 
        fostering innovation, also raises concerns about the security 
        and reliability of these foundational elements.
   Distinct Security Needs in AI/ML Development Process.--The 
        process of developing AI/ML systems, from data handling to 
        model implementation, presents unique security challenges that 
        differ markedly from traditional software development.
   Emerging Threats Unique to AI/ML Systems.--AI/ML systems are 
        susceptible to novel forms of cyber threats, such as algorithm 
        tampering and data manipulation, which are fundamentally 
        different from conventional cybersecurity concerns.
   Educational Gap in AI/ML Security Expertise.--There is a 
        critical need for enhanced training and expertise in AI/ML 
        security. This gap in specialized knowledge can lead to 
        vulnerabilities in crucial AI/ML infrastructures.
    Based on my experience and first-hand knowledge, millions of ML 
models are currently operational Nation-wide, not only facilitating 
daily activities but also embedded in mission-critical systems and 
integrated within our physical and digital infrastructure. These models 
have been instrumental for over a decade in areas such as fraud 
detection in banking, monitoring energy infrastructure, and enhancing 
cybersecurity defenses through digital forensic analysis. Recognizing 
and prioritizing the safeguarding of these assets by addressing their 
unique security vulnerabilities and threats, is vital for this Nation 
and any organization striving to excel in the rapidly-advancing field 
of AI which impacts all elements of the American economy today, and 
into the future.
    U.S. businesses and the U.S. Government use a significant number of 
machine learning (ML) models for critical processes, ranging from 
defense systems to administrative task acceleration. Given the 
importance of these systems to a safe, functioning government, we pose 
a critical question: If this committee were to request a comprehensive 
inventory of all ML models in use in an enterprise or a USG agency, 
detailing their stages in the life cycle (including experimentation, 
training, or deployment), the data they process, and the personnel 
involved (both full-time employees, Government personnel, and 
contractors), would any witness, business, or agency be able to furnish 
a complete and satisfactory response?
    Secure AI and ML requires oversight and understanding of an 
organization's deployments. However, many deployments of AI and ML are 
highly dispersed and can be heavily reliant on widely-used open-source 
assets integral to the AI/ML life cycle. This situation potentially 
sets the stage for a major security vulnerability, akin to the 
``SolarWinds incident'', posing a substantial threat to National 
security and interests. The potential impact of such a breach could be 
enormous and difficult to quantify.
    Our intention is not to alarm but to urge this committee and other 
Federal agencies to acknowledge the pervasive presence of AI in 
existing U.S. business and Government technology environments. It is 
imperative to not only recognize but also safeguard and responsibly 
manage AI ecosystems. This includes the need for robust mechanisms to 
identify, secure, and address critical security vulnerabilities within 
U.S. businesses and the United States Federal Government's AI 
infrastructures.
    Qualcomm,\1\ McKinsey & Company,\2\ and PwC \3\ have shared 
analysis that AI can boost the U.S. GDP by trillions of dollars. We 
must protect AI commensurate with the value it will deliver. To help 
accomplish this, AI manufacturers and AI consumers alike should be 
required to see, know, and manage their AI risk:
---------------------------------------------------------------------------
    \1\ Qualcomm: The generative AI economy: Worth up to $7.9T. 
Available at https://www.qualcomm.com/news/onq/2023/11/the-generative-
ai-economy-is-worth-up-to-7-trillion-dollars.
    \2\ McKinsey and Company: The economic potential of generative AI: 
The next productivity frontier. Available at https://www.mckinsey.com/
capabilities/mckinsey-digital/our-insights/the-economic-potential-of-
generative-ai-the-next-productivity-frontier.
    \3\ PwC: PwC's Global Artificial Intelligence Study: Exploiting the 
AI Revolution. Available at https://www.pwc.com/gx/en/issues/data-and-
analytics/publications/artificial-intelligence-study.html.
---------------------------------------------------------------------------
   See.--AI/ML systems are fragmented, complex, and dynamic. 
        This creates hidden security risks that escape your current 
        application security governance and control policies. 
        Manufacturers and consumers of AI must put in place systems to 
        provide the visibility they need to see threats deep inside 
        their ML systems and AI Applications quickly and easily.
   Know.--The rapidly-evolving adoption of AI/ML adds an 
        entirely new challenge for businesses to ensure their 
        applications are secure and compliant. Safeguarding against a 
        potential ``SolarWinds'' moment in ML is business critical. 
        Manufacturers and consumers of AI need to know where threats 
        lie in their ML system so they can pinpoint and remediate risk. 
        They must create ML Bill of Materials, scan, and remediate 
        their AI/ML systems, models, and tools for unique and novel 
        vulnerabilities.
   Manage.--AI/ML security vulnerabilities are difficult to 
        remediate. When operational, technological, and/or reputation 
        security risks are identified that could harm customers, 
        employees, and partners, the business must quickly respond and 
        mitigate them to reduce incident response times. Manufacturers 
        and consumers of AI/ML should create documented policies to 
        help improve security postures, employ incident response 
        management processes, enforce human-in-the-loop checks, and 
        meet existing and future regulatory requirements.
    Yes, I believe that the Government can help set policies to better 
secure artificial intelligence. Policies will need to be realistic in 
what can be accomplished, enforceable, and not shut down innovation or 
limit innovation to just large AI manufacturers. Against this backdrop, 
the DHS and CISA play a crucial role in fortifying the security of AI 
applications.
    In the past year, CISA has published two important documents with 
regard to Securing Artificial Intelligence: ``Secure by Design'' and 
the ``CISA Roadmap for Artificial Intelligence''. The Secure by Design 
document provides a clear articulation of the ``Secure by Design'' 
approach, which is a classic and well-understood methodology for 
software resilience. I applaud the work by CISA and support the three 
``Secure by Design'' software principles that serve as their guidance 
to AI/ML software manufacturers: (1) Take ownership of customer 
security outcomes, (2) Embrace radical transparency and accountability, 
and (3) Build organizational structure and leadership to achieve these 
goals. CISA advancing the ``Secure by Design'' methodology should help 
foster wide-spread adoption. Manufacturers of AI/ML must take ownership 
for the security of their products and be held responsible, be 
transparent on security status and risks of their products, and build 
in technical systems and business processes to ensure security 
throughout the ML development life cycle--otherwise known as MLSecOps. 
While ``Secure by Design'' and the ``CISA Roadmap for Artificial 
Intelligence'' are a good foundation, it can go deeper in providing 
clear guidance on how to tactically extend the methodology to AI/ML.
    I recommend the following 3 starting actions to this committee and 
other U.S. Government organizations, including CISA, when setting 
policy for secure AI/ML:
    1. Create an MLBOM standard in partnership with NIST and other USG 
        entities.--The development of a Machine Learning Bill of 
        Materials (MLBOM) standard, in partnership with NIST and other 
        U.S. Government bodies, is critical to address the unique 
        complexities of AI/ML systems, which are not adequately covered 
        by traditional Software Bill of Materials (SBOM). An MLBOM 
        would provide a more tailored framework, focusing on the 
        specific data, algorithms, and training processes integral to 
        AI/ML, setting it apart from conventional software transparency 
        measures.
    2. Invest in protecting the AI/ML open-source software ecosystem.--
        Per a 2023 study by Synopsis Corporation,\4\ nearly 80 percent 
        of AI/ML, Analytics, and Big Data systems use open-source 
        software. To protect this, CISA and DHS can mandate and direct 
        other Federal agencies to rigorously enforce and adhere to 
        standardized security protocols and best practices for the use 
        and contribution to open-source AI/ML software, ensuring a 
        fortified and resilient National cybersecurity posture. The 
        committee should help expand Senate Bill 3050, which includes a 
        proposition and directive on the requirement for AI/ML bug 
        bounty programs in foundational artificial intelligence models 
        being integrated into Department of Defense missions and 
        operations, and be inclusive of all AI/ML assets.
---------------------------------------------------------------------------
    \4\ Synopsis Corporation: 2023 Open Source Security and Risk 
Analysis Report. Available at https://www.synopsys.com/software-
integrity/resources/analyst-reports/open-source-security-risk-
analysis.html.
---------------------------------------------------------------------------
    3. Continue to enlist feedback and participation from technology 
        startups.--It took a startup in the form of OpenAI to open the 
        eyes of the world to the power and potential of AI. As such, 
        when Congress and other authorities look to regulate AI, it is 
        important to have a broad set of innovative opinions and 
        solutions, and prevent only large enterprises from dominating 
        the conversation, ensuring diverse and forward-thinking 
        perspectives are included in shaping future AI policy and 
        regulation.
    In closing and as previously stated, I agree with and support the 
three principles in CISA's ``Secure by Design.'' However, as mentioned 
in that document, ``some secure by design practices may need 
modification to account for AI-specific considerations.'' To that end, 
we realize AI/ML is different from typical software applications and 
these principles will need to be continuously refined. I welcome the 
opportunity to propose ideas and solutions that will help drive 
Government and industry adoption of MLSecOps practices, which can be 
enhanced by new technical standards and sensible governance 
requirements. I and my company, Protect AI, stand ready to help 
maintain the global advantage in technologies, economics, and 
innovations that will ensure the continued leadership of the United 
States in AI for decades to come.
    Thank you, Mr. Chairman, Ranking Member, and the rest of the 
committee, for the opportunity to discuss this critical topic of 
security of artificial intelligence. I look forward to your questions.

    Mr. Garbarino. Thank you, Mr. Swanson.
    Just for the record, I probably would have eaten the cake.
    Ms. Moore, I now recognize you for 5 minutes to summarize 
your opening statement.

   STATEMENT OF DEBBIE TAYLOR MOORE, SENIOR PARTNER AND VICE 
        PRESIDENT, GLOBAL CYBERSECURITY, IBM CONSULTING

    Ms. Moore. Thank you, Chairman Garbarino, Ranking Member 
Swalwell, and distinguished Members of the subcommittee. I'm 
very honored to be here.
    In my 20-plus-year career in cybersecurity, including 
working with DHS since its inception as both a Federal 
contractor, as well as a woman-owned small business leader, let 
me ground my testimony by saying that the potential for AI to 
bolster cybersecurity for our critical infrastructure is 
enormous.
    Second, as IBM, who's been engaged for more than half a 
century in the AI space, is a leading AI company, let me add 
that AI is not intrinsically high-risk. Like other 
technologies, its potential for harm is expressed in both how 
it is used, and by whom.
    Industry needs to hold itself accountable for the 
technology it ushers into the world, and Government has a role 
to play, as well. Together, we can ensure the safe and secure 
development and deployment of AI in our critical infrastructure 
which, as this subcommittee knows well, underpins the economic 
safety and the physical well-being of the Nation.
    In fact, my clients are already taking measures to do just 
that. I work with clients to secure key touch points, their 
data, their models, and their AI pipelines, both legacy and 
their plans for the future. We help them to better understand, 
assess, and clearly define the various levels of risk that 
Government and critical infrastructure alike need to manage.
    For example, through simulated testing, we discovered that 
there are ways for adversaries to conduct efforts like 
derailing a train, or other disruptive and destructive types of 
attacks. That knowledge helped us to create preventative 
measures to stop it from happening in real-world instances and 
as the same is true for things like compromise of ATM machines 
and other critical infrastructure.
    We also conduct simulations of red-teaming to mimic how an 
adversary could or should attack. We can apply these 
simulations to, for example, popular large language models to 
discover flaws and exploitable vulnerabilities that could have 
negative consequences, or just produce unreliable results. 
These exercises are helpful in identifying risks to be 
addressed before they could manifest into active threats.
    In short, my clients know that AI, like any technology, 
could pose risks to our Nation's critical infrastructure, 
depending on how it's developed and deployed. Many are already 
engaging to assess, mitigate, and manage that risk.
    So my recommendation for the Government is to accelerate 
existing efforts, and broaden awareness and education rather 
than reinventing the wheel.
    First, CISA should execute on its road map for AI and focus 
on three particular areas: No. 1 would be education and work 
force development. CISA should elevate AI training and 
resources from industry within its own work force and critical 
infrastructure that it supports.
    As far as the mission, CISA should continue to leverage 
existing information-sharing infrastructure that is sector-
based to share AI information such as potential vulnerabilities 
and best practices.
    CISA should continue to align efforts domestically and 
globally with the goal of wide-spread utilization of tools and 
automation. From a governance standpoint, to improve 
understanding of AI and its risks, CISA needs to know where the 
AI is enabled and in which applications.
    This existing AI usage inventory, so to speak, could be 
leveraged to implement an effective AI-governance system. An 
AI-governance system is required to visualize what needs to be 
protected.
    Last, we recommend that when DHS establishes the AI Safety 
and Security Advisory Board, it should collaborate directly 
with those existing AI and security-related boards and 
councils, and rationalize the threat to minimize hype and 
disinformation. This collective perspective matters.
    I'll close where I started. Addressing the risks posed by 
adversaries is not a new phenomenon. Using AI to improve 
security operations is also not new, but both will require 
focus. What we need today is urgency, accountability, and 
precision in our execution.
    Thank you very much.
    [The prepared statement of Ms. Moore follows:]
               Prepared Statement of Debbie Taylor Moore
                           December 12, 2023
                              introduction
    Chairman Garbarino, Ranking Member Swalwell, and distinguished 
Members of the subcommittee, I am honored to appear before you today to 
discuss the important topic of cybersecurity and its relationship to 
and with AI.
    My name is Debbie Taylor Moore, and I am VP and senior partner for 
IBM Consulting. I lead the Quantum Safe and Secure AI consulting 
practice for North America, including the delivery of security 
consulting services to commercial critical infrastructure and 
Government clients. During my 20-plus-year career in cybersecurity, I 
have had the great privilege to participate and witness first-hand, the 
impact of successful public and private-sector partnership. With each 
innovation we have risen to the occasion and asked ourselves the 
difficult questions: ``how to optimize the promise, while minimizing 
the peril of technology advancement?'' I have also collaborated with 
the Department of Homeland Security (DHS) since its inception as a 
Federal contractor, a woman-owned small business at an early stage 
start-up, and a Fortune 100 executive, to today, working at the 
intersection of security and emerging technology for IBM.
    Let me ground my testimony at the outset on three foundational 
points.
    First, AI is not intrinsically high-risk, and like other 
technologies, its potential for harm is expressed in both how it is 
used, and by whom. AI risk is not a new story--we've been here before, 
as any new powerful technology poses both risks and benefits. Like 
then, we provide appropriate guardrails and accountability for our 
technology.
    Second, the economic potential for AI is phenomenal. Yet, industry 
needs to hold itself accountable for the technology it ushers into the 
world. That is part of the reason that IBM recently signed onto the 
White House Voluntary AI Commitments to promote the safe, secure, and 
transparent development and use of generative AI (foundation) model 
technology.
    Third, the Government has a critical role to play, in collaboration 
with industry and all stakeholders. The White House Executive Order on 
the Safe, Secure, and Trustworthy Development and Use of Artificial 
Intelligence (``EO on AI'') assigns DHS and its Cybersecurity and 
Infrastructure Security Agency (CISA) with tasks to ensure agencies and 
critical infrastructure providers understand what is needed to deploy 
AI safely and securely in executing their missions. It also tasks DHS 
to continue to work with industry through a soon-to-be-developed AI 
Safety and Security Advisory Board. This subcommittee's hearing and 
oversight of the implementation of the EO on AI is a critical part of 
this dialog.
    My testimony will raise awareness and share how organizations today 
are: (A) Utilizing AI to improve security operations; (B) promoting the 
trustworthy and secure use of AI broadly; and (C) protecting AI in 
critical infrastructure. Last, I will share recommendations.
                           a. ai for security
    In my work with clients in the public and private sector, I see how 
deploying AI is helping to enable cybersecurity defenders more 
effectively and efficiently do their job. AI systems are proving to be 
security assets that industry is using to bolster existing security 
best practices regardless of critical infrastructure designation. AI 
can help to:
   Improve speed and efficiency.--When AI is built into 
        security tools, cybersecurity professionals can identify and 
        address, at an accelerated rate, the increasing volume and 
        velocity of threats. For example, machine learning can be used 
        to identify and analyze patterns and key indicators of 
        compromise. Over time the system trains itself on the data it 
        collects, reducing the number of false positives, honing in on 
        the incidents which require human intervention and 
        investigation. This form of augmentation helps Security 
        Operation Centers personnel who can be overwhelmed by the sheer 
        number of events. In certain cases, IBM's managed security 
        services team used these AI capabilities to automate 70 percent 
        of alert closures and speed up their threat resolution time 
        line by more than 50 percent within the first year of 
        operation.
   Contextual awareness.--Providing context from multiple 
        sources delivers insights, prioritization, and offers 
        recommendations for security analysts to follow to remediate 
        issues. For example, generative AI can confidentially and 
        comprehensively answer questions and render responses which 
        make it possible for a junior analyst to achieve higher-level 
        skills and complete complex tasks above and beyond current 
        proficiency.
   Improve resilience and response time.--For example, AI 
        leverages machine learning algorithms to predict future risk 
        and to develop a consistent risk profile and set of potential 
        actions based on historical data. This predictive modeling 
        helps organizations anticipate problems and proactively address 
        them, reducing mean time to resolution and costs. IBM's Cost of 
        a Data Breach 2023 report found that using AI was the single 
        most effective tool for lowering the cost of a data breach. The 
        average cost of a data breach is $4.5 million dollars; up 15 
        percent over the previous year.
           b. promoting the trustworthy and secure use of ai
    At IBM, we recognize that the use of AI and large language models 
in an application or system may increase the overall attack surface 
which must be protected, and that traditional security controls alone, 
may not be sufficient to mitigate risk(s) associated with AI. That is 
why we are proud to help clients deploy Trustworthy AI, ready for 
enterprise use--which means it is fair, transparent, robust, 
explainable, privacy-protecting, and secure--now and in the future.
    Here are examples at how we implement Trustworthy AI practices, 
including security, at three key touchpoints in client engagements:
    First, data.--We use data that is curated, protected, and trusted. 
Our guardrails help ensure data quality, compliance, and transparency. 
Data ownership is also extremely important. Our clients trust that 
their data will not be used by someone else. And we help clients to 
protect training and sensitive data from theft, manipulation, and 
poisoning, and compliance violations and to employ zero-trust access 
management policies and encryption.
    Second, AI models.--Securing the model development stage is 
paramount, as new applications are being built in a brand-new way, 
often introducing new, exploitable vulnerabilities for attackers to use 
as entry points to compromise AI, introducing the risk of supply chain 
attacks, API attacks, and privilege escalations. For example, we help 
clients:
   Secure the usage of AI models themselves, by implementing 
        security controls for privileged access management, preventing/
        detecting data leakage, and preventing/detecting new attacks 
        like poisoning (where you control a model by changing the 
        training data), extraction (where you steal a model by using 
        queries), or evasion (where you change the model behavior by 
        changing the input).
   Secure against new AI-generated attacks, by helping them 
        monitor for malicious activity like using AI to rapidly 
        generate new malware, or to mutate existing examples to avoid 
        detection. Also help clients detect highly personalized 
        phishing attacks and impersonation.
   Employ red-team testing: as attack surfaces of AI will 
        continually be uncovered, we are committed to and invested in 
        discovering these to stay ahead of the adversary. We do 
        comprehensive security assessments which simulate a layered 
        attack on an organization's physical systems, data, 
        applications, network and AI programs and assets. Expanding far 
        beyond a routine penetration test or vulnerability assessment, 
        red-teaming seeks to offer a learning opportunity while 
        evaluating an organization's response in a crisis. It mimics 
        the tactics, techniques, and procedures of known threat actors 
        and helps the organization to identify gaps and improve its 
        security posture. Participation is encouraged across multi-
        stakeholders and domains.
    Third, AI pipeline--we give clients the tools to extend governance, 
trust, and security across the entire AI pipeline. Even the most 
powerful AI models cannot be used if they are not trusted--especially 
in mission-critical industries. That is why we are creating and using 
AI governance tool kits to help make them more transparent, secure, and 
free of bias. Instilling trust in AI is key for AI to be deployed 
safely and widely. Security, too, must be extended to the inferencing 
and live use stage of the AI pipeline, to protect against prompt 
injections, model denial of service, model theft risks, and more, as 
discussed further below.
              c. protecting ai in critical infrastructure
    Critical infrastructure underpins the economic safety and the 
physical well-being of the Nation. Adversaries have worked for years to 
disrupt, exploit, and undermine the safety and security of power grids, 
air and land transportation systems, telecommunications, and financial 
networks. Further, we recognize that highly-capable AI models that are 
not developed and deployed with responsible guardrails can today, and 
could in the future, be modified by bad actors to pose safety risks to 
these networks from adversarial attacks to deep fakes giving false 
instructions to undermine industrial control systems.
    By ``breaking'' AI models we can better understand, assess, and 
clearly define the various levels of risk that governments and critical 
infrastructure alike need to manage.
    Let me explain. To address the security risk of an AI system, we 
can ``breakdown'' AI to learn of its potential weaknesses. In 
addressing security, to protect a system--whether software or 
hardware--we often tear it down. We figure out how it works but also 
what other functions we can make the system do that it wasn't intended 
to. Then, we address appropriately--from industrial/military-grade 
strength defense mechanisms to specialty programs built to prevent or 
limit the impact of the unwanted or destructive actions. We, 
collectively as industry and critical infrastructure providers, have 
the tools to do this--and in many cases are already doing this. We also 
have the governance and compliance know-how to enforce.
    Here are two examples from IBM efforts.
   Through security testing, we discovered that there are ways 
        for adversaries to get a train to derail from its tracks. That 
        know-how allowed us to create preventative ways to stop it from 
        happening in a real-world instance. Same with ATM machines 
        being compromised to eject unsolicited cash. And so forth.
   IBM X-Force research illustrated months ago how an attacker 
        could hypnotize large language models like ChatGPT to serve 
        malicious purposes without requiring technical tactics, like 
        exploiting a vulnerability, but rather simple use of English 
        prompts. From leaking confidential financial information and 
        personally identifiable information to writing vulnerable and 
        even malicious code, the test uncovered a new dimension to 
        language learning models as an attack surface. It is important 
        for Government and critical infrastructure entities to 
        recognize that AI adds a new layer of attack surface. We are 
        aware of this risk and can create appropriate mitigation 
        practices for clients before adversaries are able to materially 
        capitalize on it and scale.
    Further, the critical infrastructure ecosystem is also aware of the 
increased risk vectors that could be applied to critical infrastructure 
due to AI. Critical infrastructure providers are not only taking 
internal steps, or working with companies like IBM, to address this, 
but also working with the technology industry, Government, and others 
to set and advance best practices and tools. Here are some examples:
   Defcon red-teaming.--Thousands of offensive security 
        professionals recently gathered in Las Vegas to attack multiple 
        popular large language models in a bid to discover flaws and 
        exploitable vulnerabilities that could serve malicious 
        objectives or that could otherwise produce unreliable results, 
        like bad math. Those ``fire drills''--often called ``red-
        teaming'' as discussed above--identified risks to be addressed 
        before they could manifest into active threats.
   Public-private ``best practices''.--Government, working 
        closely with industry, has published best practices, guidance, 
        tools, and standards to help bolster our Nation's security. 
        These include: NIST's Secure Software Development Framework and 
        CISA's Software Bill of Materials as well as secure development 
        best practices, emphasized in CISA's Secure by Design 
        Principles and subsequent Guidance to Secure AI Systems, to 
        provide a path for AI models to be built, tuned, trained, and 
        tested following safe and secure best practices.
   Public-private collaboration and information sharing.--
        Collaboration vehicles for critical infrastructure providers, 
        industry, and Government exist already. For example, IBM is 
        pleased to partner, across verticals and industry through 
        collaboration with the private-sector-led Information Sharing 
        and Analysis Centers (ISACs). The ISACs are critical 
        collaborators for DHS and CISA to develop proactive, essential 
        platforms to effectively communicate best practices, like those 
        listed above, and outcome from the soon-to-be-launched NIST AI 
        Safety Institute. This Institute will convene experts to set 
        the guidelines for ``red-teaming'' best practices and other 
        similar AI safety standards. CISA has a role here, too. Just as 
        CISA's Secure Software by Design leveraged NIST's Secure 
        Software Development Framework, we see a role here for 
        collaboration as well, which we discuss further in the next 
        section.
                            recommendations
    Addressing the risks posed by adversaries around AI and critical 
infrastructure will require a combination of smart policy, tight 
collaboration, and efficient agency execution. Thankfully, the U.S. 
Government is aware that a multi-faceted, multi-stakeholder approach is 
needed evidenced from the U.S. National Cybersecurity Strategy, the 
recent EO on AI, and this hearing.
    We have a strong foundation to build on. What we need is urgency, 
accountability, and precision in our execution. Specifically, we 
encourage:
    1. CISA should accelerate existing efforts and broadened awareness, 
        rather than reinventing the wheel.--CISA is ``America's Cyber 
        Defense Agency'' chartered to help protect systems of 16 
        critical infrastructures sectors, the majority of which are 
        owned and operated by the private sector. As it achieves its 
        mission through partnerships, collaboration, education and 
        raising awareness, as well as conducting risk assessments, risk 
        management, and incident response and recovery, AI security 
        should be embedded into the agencies' work as a top priority. 
        We suggest that CISA:
      a. Execute on its Roadmap for AI.--Published in November, this is 
            a great first step. The Roadmap seeks to promote the 
            beneficial uses of AI to enhance cybersecurity 
            capabilities, protect the Nation's AI systems from 
            cybersecurity threats, and deter malicious actors' use of 
            AI capabilities to threaten critical infrastructure. 
            Critically it has a component that addresses workforce as 
            well. We strongly support this and hope to see its timely 
            execution.
      b. Elevate AI training and education resources from industry 
            within CISA's own workforce and critical infrastructure 
            that it supports.--And, it should accelerate implementation 
            of the National Cyber Workforce and Education Strategy. To 
            help close the global AI skills gap, IBM has committed to 
            training 2 million learners in AI by the end of 2026.
      c. Advance information sharing.--CISA should leverage existing 
            information-sharing infrastructure that is sector-based to 
            share AI information, such as potential vulnerabilities and 
            best practices. Also, share outcomes from the NIST Safety 
            AI Institute as well as threat intelligence, as 
            appropriate, from National Security Agency with Federal 
            Civilian Executive Branch Agencies and ISACs to ensure the 
            broadest reach of AI information.
      d. Implement AI Governance.--To improve understanding of AI and 
            its risk, CISA needs to know where AI is enabled and in 
            which applications. This existing ``AI usage inventory'' 
            could be improved through common definitions of AI and its 
            componentry. Ideally, this could then be leveraged to 
            implement an effective AI governance system.
      e. Align efforts domestically, and globally, with the goal of 
            wide-spread utilization of tools, rather than just their 
            development.--For example, encourage the tracking of 
            security requirements, risks, and design decisions 
            throughout the AI life cycle. CISA has made progress here 
            through its Secure by Design Principles and Guidelines for 
            Secure AI System Development issued this year in 
            collaboration with the United Kingdom and other governments 
            across the globe. To increase utilization of these tools, 
            guidance on execution is also important.
    2. The Department of Homeland Security should have a collaborative 
        and strategic AI Safety and Security Advisory Board as directed 
        by the EO on AI.--We recommend that it:
      a. Ensure members are a diverse representation of critical 
            infrastructure owners, technologists, security experts, and 
            agency stakeholders to best determine scope of work and 
            mission.
      b. Collaborate with existing efforts to leverage learnings and 
            outcomes from the National AI Advisory Committee, NIST AI 
            Safety Institute, and CISA Cyber Safety Review Board. These 
            board and committee outputs matter.
      c. Rationalize the threat to minimize hype and disinformation. 
            Attention should be directed toward addressing and 
            mitigating material risks. This Advisory Board can help to 
            identify best practices and guidance for securing AI for 
            our Government systems and critical infrastructure. Then, 
            it can educate on that and how to address the new threats 
            to our citizens, agencies, and critical infrastructure 
            providers.
    3. The Department of Homeland Security should implement the 
        directives from the EO on AI in a timely manner.--DHS is 
        directed to study how to better use AI for cyber defense and to 
        conduct operational pilots to identify, develop, test, 
        evaluate, and deploy AI capabilities. These capabilities will 
        aid in discovery and remediation of vulnerabilities in critical 
        U.S.G. software, systems, and networks. This subcommittee can 
        invite DHS to present any relevant findings and identify what 
        would be needed to ensure interoperability and scale across 
        Government.
                               conclusion
    I will end where I started, addressing the risks posed by 
adversaries is not a new phenomenon. Using AI to improve security 
operations is also not new. Both will require focus on what we have 
already assembled. We do not need to re-invent the wheel. What we need 
is urgency, accountability, and precision in our execution.

    Mr. Garbarino. Thank you, Ms. Moore.
    Mr. O'Neill, I now recognize you for 5 minutes to summarize 
your opening statement.

   STATEMENT OF TIMOTHY O'NEILL, CHIEF INFORMATION SECURITY 
   OFFICER, PRODUCT SECURITY VICE PRESIDENT, HITACHI VANTARA

    Mr. O'Neill. Thank you, Chairman Garbarino, Ranking Member 
Swalwell, and Members of the subcommittee for inviting me here 
today.
    I'm Tim O'Neill, the chief information security officer and 
vice president of product security at Hitachi Vantara. Hitachi 
Vantara's a subsidiary of Hitachi, Limited, a global technology 
firm founded in 1910, whose focus includes helping create a 
sustainable society via data and technology.
    We co-create with our customers to leverage information 
technology (IT), operational technology (OT), and our products 
and services to drive digital, green, and innovative solutions 
for their growth.
    IT is probably familiar to you, but OT encompasses data 
being generated by equipment, infrastructure, or a control 
system that can then be used to optimize the operation and for 
other benefits.
    Because of our heavy focus on the intersection of IT and 
OT, one of our major areas of business development and research 
has been in the industrial AI area. Industrial AI has the 
potential to significantly enhance the productivity of U.S. 
manufacturing and create working environments that benefit 
employees assembling products.
    Today's AI systems include tools that workers could use to 
enhance their job performance. Programs are predicting possible 
outcomes and offering recommendations based on the data being 
given to them and what the program has been trained to 
understand as the most likely scenario.
    That is true of a predictive maintenance solution Hitachi 
may create for a client to help them more quickly ascertain the 
likely cause of a breakdown or in the case of a generative AI 
system that is predicting what the next sentence could be in a 
maintenance manual.
    The U.S. Government has taken a number of positive steps 
over the last 5 years to promote and further development--and 
further the development of AI. We encourage the United States 
to further the development of AI through international 
engagements and reaffirming the United States' commitment to 
digital trade standards and policies, and digital trade titles 
and treaties like the ones found in the USMCA.
    The recent AI Executive Order (EO) speaks frequently to the 
necessity of securing AI systems. CISA's core mission focuses 
on cyber threats and cybersecurity, making them the obvious 
agency to take the lead in implementing this part of the EO. 
CISA is integral to supporting and providing resources for 
other agencies on cyber threats and security as those agencies 
then focus on their roles in implementing the Executive Order. 
This mission is vital to the Federal Government and where CISA 
is by far the expert.
    We applaud the CISA team for their excellent outreach to 
stakeholders and private industry to understand implications of 
security threats and help carry out solutions in the 
marketplace. Their outreach to the stakeholder community is a 
model for other agencies to follow.
    As CISA's expertise lies in assessing the cyber threat 
landscape, they are best positioned to support the AI EO and 
help further development of AI innovation in the United States.
    As CISA continues its mission, we recommend focusing on the 
following areas to help further the security of AI systems:
    No. 1, work across agency to avoid duplication--duplicative 
requirements that must be tested or complied with.
    No. 2, focus foremost on the security landscape being the 
go-to agency for other Federal agencies as they assess cyber-
related AI needs.
    No. 3, be the agency advising other agencies on how to 
secure AI or their AI testing environments.
    No. 4, recognize the positive benefits AI can bring to the 
security environment, detecting intrusions, potential 
vulnerabilities, and/or creating defense--defenses.
    Hitachi certainly supports CISA's on-going cybersecurity 
work. CISA's road map for AI has meaningful areas that can help 
promote the security aspects of AI usage. Avoid duplicating the 
work of other agencies is important so manufacturers do not 
have to navigate multiple layers of requirements.
    Having such a multi-layered approach could create more harm 
than good and divert from CISA's well-established and much-
appreciated position as a cybersecurity leader. It could also 
create impediments for manufacturers, especially small and 
medium-sized enterprises, from adopting AI systems that would 
otherwise enhance their workers' experience and productivity, 
improve factory safety mechanisms, and improve the quality of 
products for customers.
    Thank you for your time today, and I'm happy to answer any 
questions.
    [The prepared statement of Mr. O'Neill follows:]
                 Prepared Statement of Timothy O'Neill
                           December 12, 2023
    Good morning. Thank you, Chairman Garbarino, Ranking Member 
Swalwell, and the Members of the subcommittee for inviting me here 
today.
    My name is Tim O'Neill and I am the vice president, chief 
information security officer & product security, at Hitachi Vantara. 
Hitachi Vantara is a subsidiary of Hitachi, Limited, a global 
technology firm founded in 1910 and focused on creating a sustainable 
society via data and technology. We co-create with our customers to 
leverage information technology (IT), operational technology (OT), and 
our products and services to drive digital, green, and innovation 
solutions for their growth. Our regional subsidiary was established in 
the United States in 1959 and for over 30 years we have heavily 
invested in U.S. research & development through our 24 major R&D 
centers that are supporting high-skilled jobs in manufacturing and 
technology. Our commitment to the United States is demonstrated by the 
establishment of our digital business unit's global headquarters in 
Santa Clara, California, and we now employ over 16,000 in the United 
States in 30 States and across 60 group companies. North America is our 
second-largest market, representing 17 percent of our global revenue.
    Because of our heavy focus on the intersection of IT and OT 
technology, one of our major areas of business development and research 
has been in the industrial Artificial Intelligence (AI) area. This use 
of AI is often overlooked in favor of conversations about generative AI 
and ChatGPT; however, industrial AI has the potential to significantly 
enhance the productivity of U.S. manufacturing and create working 
environments that benefit employees assembling products. Our co-created 
AI solutions can address challenges in factories, from the quality of 
products to the productivity of workers, and respect and address worker 
concerns on health, safety, discrimination and bias, privacy, and 
security.
    Today's AI systems are tools that workers can use to enhance their 
job performance. Programs are predicting possible outcomes based on the 
data being given to them and what the program has been trained to 
understand as the most likely scenario. That is true of a predictive 
maintenance solution Hitachi may create for a client to help them more 
quickly ascertain the likely cause of a breakdown, or of a generative 
AI system that is predicting what the next sentence could be for a 
maintenance manual.
    The system cannot think for itself, and thus humans are necessary 
to confirm the AI's outcomes or make the ultimate decision. It is like 
a piece of software that we would use in our jobs to perform a 
calculation, but just as in the case of an Excel document that is 
running a formula on a group of cells, it is important for the user to 
ensure the formula is correct.
    The U.S. Government has taken a number of positive steps over the 
last 5 years to promote and further the development of AI. The previous 
administration laid the foundation with their request to the 
stakeholder community asking how AI could be used in the Federal 
Government. This set the course for the AI standards work that we have 
seen from the National Institute of Standards and Technology (NIST). 
The Biden administration has continued that work with their Blueprint 
for an AI Bill of Rights, and now this AI Executive Order. We encourage 
the United States to further the development of AI via engagement with 
international standards-setting bodies as well as by reaffirming the 
United States' commitment to digital trade standards, digital trade 
titles in treaties like the ones found in the United States-Mexico-
Canada Agreement (USMCA), and promotion of digital trade policies in 
international trade settings.
    The AI EO speaks frequently to the necessity of secure AI systems. 
CISA's core mission focuses on cyber treaties and cybersecurity, making 
them the obvious agency to take the lead in implementing this part of 
the EO. As an example, CISA's work on ransomware and the on-going 
updates and alerts of ransomware attacks has been vital to informing 
businesses and stakeholders and helping them identify, defend, and 
recover from attacks. This same type of threat assessment can be 
provided for AI. CISA is integral to supporting and providing resources 
for other agencies on cyber threats and security as those agencies 
focus on their roles in implementing the EO; this mission is vital to 
the Federal Government and where CISA is by far the expert. Other 
agencies should turn to CISA for this threat identification and cyber 
threat detection support.
    We applaud the CISA team for their excellent outreach to 
stakeholders and private industry to understand the implications of 
security threats and help carry out solutions in the marketplace. Their 
outreach to the stakeholder community is a model for other agencies to 
follow. As CISA's expertise lies in assessing the cyber landscape, they 
are best positioned to support the AI Executive Order and help further 
development of AI innovation in the United States. It is also important 
that CISA recognize the potential benefits AI could pose to critical 
infrastructure systems to help them identify possible attacks or defend 
against cyber or physical attacks, and not just on the ways AI could 
make them vulnerable to failure.
    There is great potential for CISA to work across agencies to 
support or augment their AI work and provide insight into cybersecurity 
guidance and/or threat identification. CISA is also discouraged against 
creating separate frameworks, processes, or testbeds and instead should 
work collaboratively across the Federal Government to utilize the 
resources other agencies have, have already, or are currently creating. 
Manufacturers, especially those who are making products for critical 
infrastructure industries, have been engaged with their respective 
agencies and are assisting in the development of AI systems. While some 
manufacturers may not have engaged with CISA as they implement 
technology solutions in their operations, as CISA coordinates across 
agencies to implement the EO, it can broaden its reach to educate all 
on the crucial role cybersecurity plays in core IT and AI processes.
    As an example, the Department of Energy's Cybersecurity, Energy 
Security, and Emergency Response office (CESER) is charged with 
overseeing cybersecurity in the electric grid, and thus manufacturers 
of components for the grid have worked with and continue to engage with 
CESER. CISA is best served by working with CESER on electric grid AI 
security issues versus creating a new regime that may duplicate 
existing work. We envision that CISA would continuously update CESER of 
threats or security concerns--on-going or new--that could be used to 
attack the energy grid, and work with the office to develop guidance to 
direct manufacturers on how to mitigate potential threats in the 
manufacturing process.
    The Department of Energy (DOE) and the National Science Foundation 
(NSF) are tasked with creating testing environments for AI systems, 
including testbeds. CISA, therefore, should avoid creating testbeds and 
instead work with the DOE and NSF on securing testing environments, 
including how they are accessed and used, to support their integrity 
and mitigate potential data manipulation which could compromise the 
subsequent testing or training of AI systems. CISA should also guide 
DOE and NSF on specific needs within those testbeds or testing 
environments to challenge the cyber resiliency of AI systems. This 
requires CISA's unique expertise, which agencies can lean on versus 
creating redundant processes or procedures for developers. Continuous 
evaluation of AI models for CISA should only be focused on the evolving 
cybersecurity threat landscape.
    NIST is tasked with creating and promoting guidelines, standards, 
and best practice development. To date, it already has a well-
established Cybersecurity Framework, the Secure Software Development 
Framework, and now the AI Risk Management Framework. CISA should 
encourage use of those existing documents and focus on additional 
frameworks to address gaps specific to their cybersecurity mandate. 
There is no need for CISA to create its own risk management or 
analytical framework for assessing AI systems. Rather, the agency must 
work with NIST to promote awareness of emerging threats and ensure that 
frameworks and testing environments are regularly updated to address 
them.
    Some manufactured products, including Hitachi-produced railcars and 
energy grid equipment, have been, and will be, in the market for 
decades. As new technology is incorporated into new products--for 
example, to assist in creating predictive maintenance schedules to 
anticipate failures before they happen, or create guided repair 
solutions to fix equipment issues faster--the future cybersecurity 
landscape needs to be better understood. CISA can, for instance, 
facilitate understanding around protecting assets when there are 
multiple versions of a technology in use at the same time. We believe 
that CISA's intention to create SBOM toolchains, and the desire to 
provide information on how AI fits into the Secure by Design program, 
are valuable avenues to pursue. Manufacturers of AI-enabled equipment, 
developers of AI programs, and deployers of AI systems must determine 
mitigation measures to keep the security of their equipment intact 
throughout its life cycle. CISA can thus help develop threat assessment 
guidelines, and the necessary mitigation efforts, to guard against 
legacy technology becoming a possible gateway for bad actors.
    Hitachi certainly supports CISA's on-going cybersecurity work. 
CISA's Roadmap for AI has very meaningful areas that can help promote 
the security aspects of AI usage. We strongly recommend that CISA avoid 
duplicating the current or tasked work of other agencies as that could 
create multiple layers that manufacturers would then have to navigate. 
Such a multi-layered approach would create more harm than good and 
divert from CISA's well-established and much-appreciated position as a 
cybersecurity leader. It could also create impediments for 
manufacturers, especially small and medium-sized enterprises, from 
adopting AI systems that would enhance their workers' experience and 
productivity, improve factory safety mechanisms, and improve the 
quality of products for customers.
    Thank you for your time today. I am happy to answer your questions.

    Mr. Garbarino. Thank you, Mr. O'Neill.
    Mr. Stamos, I now recognize you for 5 minutes to summarize 
your opening statement.

  STATEMENT OF ALEX STAMOS, CHIEF TRUST OFFICER, SENTINEL ONE

    Mr. Stamos. Thank you, Mr. Chairman. Thank you Mr. 
Swalwell. I really appreciate you holding this hearing and 
inviting me today.
    So I'm the chief trust officer of SentinelOne. I've had the 
job for about a month, and in that role, I've got two 
responsibilities. So SentinelOne's a company that uses AI to do 
defense. We also work with companies directly to help them 
respond to incidents.
    So, I get to go out in the field and work with companies 
that are being breached, help them fix their problems. But then 
I'm also responsible for protecting our own systems because 
security companies are constantly under attack these days, 
especially since the SolarWinds incident.
    So what I thought I'd do is, if we're going to talk about 
the impact of AI and cybersecurity, is just set the stage of 
where we are in the cybersecurity space and where American 
companies are right now so, we can have a honest discussion 
about what AI, the effects might be.
    The truth is, is we're not doing so hot. We're kind-of 
losing. We talk a lot in our field about the really high-end 
actors, the State actors, the GRU, the FSB, the MSF, the folks 
you get Classified briefings on. That's incredibly important, 
right? Just this weekend we learned more about Typhoon, a 
Chinese actor breaking into the Texas power grid, a variety of 
critical infrastructure providers. That is scary and something 
we need to focus on.
    But while that very high-end stuff has been happening, 
something much more subtle has been occurring that has kind-of 
crept up on us, which is, the level of adversity faced by kind-
of your standard mid-sized company, the kind of companies that, 
honestly, employ a lot of your constituents, 5,000 employees, 
10,000 employees, successful in their field, but not defense 
contractors or oil and gas or banks or the kinds of people who 
have traditionally had huge security teams.
    Those kinds of companies are having an extremely difficult 
time because of professionalized cyber crime. The quality of 
the cyber criminals has come up to the level that I used to 
only see from state actors 4 or 5 years ago, right? So now you 
will see things out of these groups, the BlackCats, the Alphas, 
the LockBits, the kinds of coordinated, specialized 
capabilities that you used to only see for hackers working for 
the Ministry of State Security or the Russian SVR.
    Unfortunately, these companies are not ready to play at 
this level. Now the administration has done some things to 
respond to this. There is, as you all know, there have been 
sanctions put in place to make paying ransoms to certain actors 
more difficult. That strategy, I understand why they did it. 
I'm glad they did it, but it has failed. The current strategy 
of sanctioning, all it has done has made, created new 
compliance and billable-hour steps for lawyers before ransom is 
paid.
    It hasn't actually reduced the amount of money that is 
being paid to ransomware actors, which is something on the 
order of over $2 billion a year being paid by American 
companies to these actors. That money, then they go reinvest in 
their offensive capabilities.
    While this has been happening, the legal environment for 
these companies has gotten more complicated. You folks in 
Congress passed a law in 2022 that was supposed to standardize 
how do you tell the U.S. Government that somebody has broken 
into your network. That law created a requirement for CISA to 
create rules. Now it's taken them a while to create those, and 
I think it would be great if that was accelerated.
    But in the mean time, while we've been waiting for CISA to 
create a standardized reporting structure, the SEC has stepped 
in and created a completely separate mechanism and requirements 
around public companies that don't take into account any of the 
equities that are necessary to be thought of in this situation, 
including having people report within 48 hours, which from my 
perspective, usually at 48 hours you're still in a knife fight 
with these guys. You're trying to get them out of the network. 
You're trying to figure out exactly what they've done. The fact 
you're filing 8-Ks in EDGAR that says exactly what you know and 
the bad guys are reading it, not a great idea.
    Some other steps that'd been taken by the SEC and others 
has really over-legalized the response companies are taking.
    So as we talk today, I hope we can talk about the ways the 
Government can support private companies. These companies are 
victims. They are victims of crime, or they're victims of our 
geopolitical adversaries attacking American businesses. They 
are not there to be punished. They should be encouraged. They 
should have requirements for sure. But when we talk about these 
laws, we also need to encourage them to work with the 
Government, and the Government needs to be there to support 
them.
    Where does AI come into this? I actually think--I'm very 
positive about the impact of AI on cybersecurity. Like I said, 
these normal companies have to play at the level Lockheed 
Martin had to 10 years ago, right? When I was the CISO at 
Facebook, I had an Eximius malware engineer. I had threat intel 
people that could read Chinese, that could read Russian. I had 
people who had done incident response at hundreds of companies.
    There is no way an insurance company in one of your 
districts can go hire those people, but what you can do through 
AI is we can enable kind-of more normal IT folks who don't have 
to have years of experience fighting the Russians and the 
Chinese and the Iranians, we can enable them to have much 
greater capabilities. That's one of the ways I think AI could 
be really positive.
    So as we talk about AI today, I know we're going to talk 
about the downsides. But I also just want to say there is a 
positive future here about using AI to help normal companies 
defend themselves against these really high-end actors.
    Thank you so much.
    [The prepared statement of Mr. Stamos follows:]
                 Prepared Statement of Alex Stamos \1\
---------------------------------------------------------------------------
    \1\ SentinelOne.
---------------------------------------------------------------------------
                           December 12, 2023
    Chairman Garbarino, Ranking Member Swalwell, and Members of the 
subcommittee, thank you for having me here today to discuss the 
challenges and opportunities presented by artificial intelligence and 
machine learning. These world-changing technologies have the potential 
to impact nearly every aspect of our lives, and they are likely to 
continue to scale at lightning speeds. This subcommittee, and policy 
makers at all levels of Government, face the challenging task of 
matching the pace of innovation with thoughtful policies to harness the 
positive aspects of AI while minimizing its dangers. In the context of 
cybersecurity, AI and machine learning provides attackers and scammers 
with a powerful new tool that can probe for weaknesses, and make 
ransomware-targeting more convincing and effective, among other 
dangers. But, used properly, these technologies also give defenders new 
resources that can make security technologies more effective and 
intuitive, while helping to ameliorate cyber workforce shortages.
    I am currently the chief trust officer of SentinelOne, a company 
that uses AI to help defend small to large enterprises, governments and 
nonprofits around the world. I am also a lecturer in the Computer 
Science and International Relations departments at Stanford University, 
where I teach classes in cybersecurity and on-line safety that include 
the creation of new AI tools by my students. I previously served as the 
chief information security officer at two large public companies, 
Facebook and Yahoo, and have consulted with hundreds of companies 
around the world both before and after serious cybersecurity incidents. 
I just finished a 2-year term as a member of the DHS Cybersecurity 
Advisory Committee, am currently a member of the Aspen Institute U.S. 
Cybersecurity Working Group and also advise the NATO Cybersecurity 
Center of Excellence.
    In my testimony, I will draw on my personal experience as a career 
cybersecurity professional to lay out a brief picture of the current 
security environment, with a focus on the ransomware threat, as well as 
some thoughts on how we can harness the power of AI in a safe way. I 
will also offer my thoughts on how we can build off of recent Federal 
policy efforts like President Biden's AI Executive Order \2\ to create 
an effective and sustainable framework for the safe use of AI in the 
public and private sectors.
---------------------------------------------------------------------------
    \2\ Executive Order on the Safe, Secure, and Trustworthy 
Development and Use of Artificial Intelligence/The White House.
---------------------------------------------------------------------------
                   the current situation in the field
    Over the last two decades I have helped investigate and respond to 
dozens of attacks against American businesses. Before addressing how AI 
could impact cybersecurity I wanted to offer a handful of observations 
from the past year:
   Cyber-extortion is a massive risk for companies of all 
        sizes. While we do continue to see interesting and important 
        intrusions from state-sponsored actors, the baseline risk for 
        every company in the United States, no matter their size or 
        industry, are the professional extortion groups--cyber 
        criminals.
   Extortionists are getting bold and inventive. Extortion 
        groups are regularly demanding massive ransoms, in the range of 
        $40-60 million. When the victim (appropriately) attempts to 
        negotiate this to a more reasonable level, threat actors use 
        text messages to employees, emails to vendors and customers, 
        ACH theft from the bank accounts of counterparties, and even 
        the threat of Securities and Exchange Commission (SEC) 
        investigation \3\ to try to drive negotiations forward.
---------------------------------------------------------------------------
    \3\ Ransomware group reports victim it breached to SEC regulators/
Ars Technica.
---------------------------------------------------------------------------
   The current sanction regime has made paying more complicated 
        but not less logical. The cyber crime wave has created a niche 
        industry of companies that specialize in tracking extortion 
        groups. During several recent incidents, my clients were told 
        by these specialists that, from the decision to pay the ransom 
        being made, it would take 5 to 7 days for the ransom payment to 
        reach the threat actor. Most of that time is spent with 
        sanctions compliance work, and given that these groups don't 
        operate on layaway, the delay makes the strategy of paying to 
        speed up recovery of systems less effective.
   The SEC is creating new requirements that confuse cyber 
        reporting. In 2022, Congress passed the Cyber Incident 
        Reporting for Critical Infrastructure Act \4\ (CIRCIA) to 
        standardize the process of reporting intrusions to the U.S. 
        Government. Via CIRCIA, Congress specifically directs CISA to 
        be the focal point of cyber incident reporting in the U.S. 
        Government. The SEC has ignored Congress' will and imposed new 
        reporting requirements for public companies that do not 
        consider the difficult tradeoffs involved in public disclosure. 
        While it is important that public companies are honest with 
        investors, the requirement to file statements during the 
        opening hours of a response and negotiation period gives the 
        attackers more leverage and distracts from key response steps 
        during the period when containment is almost never guaranteed. 
        Threat actors have noticed and have used threats of SEC 
        reporting to gain leverage, as previously referenced.
---------------------------------------------------------------------------
    \4\ Cyber Incident Reporting for Critical Infrastructure Act of 
2022 (CIRCIA)/CISA.
---------------------------------------------------------------------------
   Many companies are vulnerable due to their traditional 
        Microsoft architecture, and upgrading is extremely expensive. 
        Microsoft continues to dominate the enterprise information 
        technology stack, with many organizations still running the 
        same traditional on-premise Active Directory infrastructure 
        that Microsoft recommended for years. Unfortunately, 
        professional attackers have become extremely adept at finding 
        and exploiting the common weaknesses in this kind of corporate 
        network. More modern designs for Windows networks now exist, 
        but generally require companies to subscribe to monthly 
        Microsoft cloud services that many organizations find 
        prohibitively expensive. The cost of Microsoft's licenses 
        continue to slow down the adoption of modern technologies, and 
        are also related to the forensic challenges faced by multiple 
        Government agencies that struggled with investigating the 
        breach of Microsoft's systems due to the lack of logging in 
        their base cloud subscriptions.\5\
---------------------------------------------------------------------------
    \5\ Microsoft under fire after hacks of U.S. State and Commerce 
departments/Reuters.
---------------------------------------------------------------------------
   Legal risks bend companies away from smart, transparent 
        responses. The first call by a company during a breach is to 
        outside counsel, and due to privilege concerns the cyber-
        lawyers are represented on every single call or email thread. I 
        have worked with some excellent attorneys on breaches, but the 
        over-legalization of executive decision making is keeping 
        companies from making smart, ethical, and transparent decisions 
        because doing so might increase their risk of Department of 
        Justice (DOJ), SEC, or shareholder action in the future. I once 
        worked a breach where there were four law firms on every call, 
        representing various parties at the company, which did not 
        engender long-term, transparent decisions from the executive 
        team.
   It has become very hard to hire qualified chief information 
        security officers (CISOs). There is a massive deficit of 
        security leadership with the technical and leadership skills 
        necessary to guide large enterprises through a cyber crisis. 
        Recent actions by the SEC to lay the blame for systemic 
        security failures on the CISO are exacerbating this problem,\6\ 
        and I personally know two well-qualified people who have passed 
        up promotions to CISO roles due to the personal risk they would 
        be taking.
---------------------------------------------------------------------------
    \6\ Cyber Chiefs Worry About Personal Liability as SEC Sues 
SolarWinds, Executive--WSJ.
---------------------------------------------------------------------------
                 the impact of ai on cybersecurity \7\
---------------------------------------------------------------------------
    \7\ In this testimony I will restrict myself to discussing the 
impact of AI on the traditional information security field. I also have 
concerns around the impact AI could have on the manipulation of the 
U.S. public by our foreign adversaries which I discussed in my 
testimony to the bipartisan Senate AI Forum. Alex Stamos Statement--AI 
Insight Forum on Elections and Democracy.
---------------------------------------------------------------------------
    I mention these facts because I expect that the AI revolution that 
we are just beginning to witness will have massive impacts on the 
struggle to secure U.S. businesses from attacks, and that the basic 
roles played by security operators will look quite different in only a 
few years. This is mostly a good thing! As you can tell from my 
observations above, while great strides have been taken by Congress, 
the Executive branch, and companies across the Nation, I am overall 
quite pessimistic about the current state of cybersecurity in the 
United States. One of the major drivers of our challenges is a lack of 
qualified individuals compared to the huge number of organizations that 
require them. While other industries rightfully fear AI replacing the 
jobs of humans, I am hopeful that the next several years will lead to 
AI developments that help close the massive gap in cybersecurity skills 
while leaving plenty of high-paying jobs for humans supervising AI 
agents.
    Some of the benefits for defenders will include:
   Automated agents that can sort through petabytes of security 
        events and provide real-time visibility \8\ across a huge 
        network. Our industry has done a great job of creating a huge 
        amount of security telemetry from the tens of thousands of 
        computers and other devices in a typical corporate network, but 
        we have yet to put the ability to understand that data into the 
        hands of your typical IT team.
---------------------------------------------------------------------------
    \8\ SentinelOne is one of several companies working to deploy LLMs 
and other AI models to this end: Purple AI/Empowering Cybersecurity 
Analysts with AI-Driven Threat Hunting, Analysis & Response--
SentinelOne.
---------------------------------------------------------------------------
   AI-operated security operations centers (SOC), where the 
        difficult 24x7 work of responding to security alerts will be 
        left in the hands of computers while humans are woken up to 
        provide oversight and to double-check the decisions of the AI 
        agents. AI-enabled investigations will be much faster and 
        simpler for defenders, allowing them to make plain-English 
        queries like ``Show me all the computers that spoke to our 
        secure network in the last 8 hours'' instead of struggling to 
        get the exact syntax right on a search like:  ip.addr in 
        (10.10.0.0 .. 10.10.0.254, 192.168.1.1..192.168.1.50).
   Real-time analysis of unknown binaries, user behaviors, and 
        potentially malicious scripts in a manner that most IT workers 
        can understand. ``Figure out what this potentially malicious 
        piece of code does'' used to be a question answered by a 
        highly-skilled individual with a disassembler and debugger, and 
        only the most highly-resourced security teams can have such 
        professionals as full-time staff. AI systems that can 
        supplement these skill sets and provide plain-English 
        explainability of complex programs will be hugely beneficial to 
        defenders.
   More flexible and intelligent response automation. Many 
        security coordination tools require a huge amount of effort to 
        initially configure and are based upon fragile, human-written 
        rulesets. AI systems that respond to attacks in ways not fully 
        foreseen by human defenders are both a scary idea and also 
        likely necessary to cope with future attacks.
   Software development tools that point out insecure coding 
        patterns to software developers in real time, well before such 
        bugs can make it into production systems. Reducing security 
        flaws upstream is a much cheaper solution to our overall 
        software security challenges than trying to patch bugs later.
    It is also likely, however, that AI will be useful to attackers in 
several ways:
   AI could help attackers sort through the billions of exposed 
        services they regularly scan to automatically exploit and 
        install malware after the release of new flaws. This already 
        happens, using human-written scripts, but AI could become a 
        competitive advantage for groups that are able to use it to 
        move faster and automate currently manual exploitation steps. 
        Ultimately, speed kills in cyber, and AI may give attackers a 
        new advantage.
   We will start to see regular exploit creation via binary 
        analysis. Just as it requires specialized skills to analyze 
        advanced malware, it also requires specialized skills to write 
        it, and there has already been research into using AI to create 
        stable exploit code just through analyzing vulnerable programs 
        with minimal human guidance.
   Smart malware that operates free of human direction or 
        Command and Control (C2). AI could create new opportunities for 
        criminal organizations to create smart malware that operates 
        behind air gaps \9\ or moves through networks intelligently, 
        choosing the correct exploits and escalation paths without 
        human intervention.
---------------------------------------------------------------------------
    \9\ The best example of malware with this capability is Stuxnet, 
which clearly required large amounts of intelligence around the design 
of the Natanz facility. Smart malware that does not require this kind 
of pre-existing knowledge is a goal of attackers and a nightmare for 
defenders.
---------------------------------------------------------------------------
   Large Language Models are already automating the work of 
        social engineering and ransom negotiations. Transformer tools 
        are actively being used by cyber criminals to write more 
        effective communications, including random demand emails, 
        overcoming prior limitations in their grasp of the English 
        language.
    It is quite possible that we are moving toward a world where the 
``hands on keyboard'' actions currently performed by human attackers 
and defenders are fully automated, while small groups of experienced 
people supervise the AI agents that are automatically exploiting 
networks or fighting back against those exploits.
    Defenders may currently have an advantage in this space, as there 
has already been a decade of investment and research by security 
vendors into the defensive application of AI, however, we should not 
expect it to take long for attackers to catch up. That will be true for 
both the groups that hack for money and those who work for America's 
adversaries.
                   the near-term ai policy landscape
    President Biden's AI Executive Order gave broad responsibilities to 
the Department of Homeland Security (DHS) and CISA, in particular, to 
aid the implementation of responsible, safe use of AI. The Order tasks 
CISA with developing guidance for critical infrastructure operators, 
and collaborating with public and private stakeholders to develop 
policies and practices around the use of AI.\10\
---------------------------------------------------------------------------
    \10\ CISA's initial output on this topic has been published in 
tandem with the UK NCSC.
---------------------------------------------------------------------------
    This is a critical mission, and just one of many that CISA has, and 
will continue to perform. The creation of a defense-only, non-
regulatory agency that can support and partner with U.S. companies was 
a great step by the 115th Congress and President Trump, and Congress 
should continue to ensure that CISA has the resources it needs to carry 
out this mission in an effective, responsive, and timely way. As cyber 
incident reporting requirements are built out pursuant to CIRCIA, 
Congress should continue to support CISA as the focal point for these 
reports, as well as response and remediation, and should work to de-
conflict the various reporting requirements being invented by agencies 
outside of Congress' direct recommendations.
    As AI technologies evolve, it is important for policy makers to 
adopt nimble policies and safeguards made in careful collaboration with 
the private sector, and civil society groups representing a broad 
cross-section of the country. As lawmakers carry out this vital but 
difficult mission, it is important that every effort is made to nurture 
and harness the positive benefits of AI, especially in the realm of 
security. Too many regulatory discussions around AI assume that only a 
handful of large American companies will dominate the space and can be 
utilized as chokepoints for preventing the malicious use of AI or 
spread of fundamental capabilities.
    This point of view is misguided and has led to warped regulatory 
priorities in the European Union and elsewhere.
    The truth is that the AI genie is out of the bottle. There will be 
no reversing the spread of fundamental knowledge around modern AI 
techniques around the world. My Stanford students regularly use or even 
create new AI models as part of their classwork, and the amazing 
advances in open-source foundation models has demonstrated the 
capability of crowds of people to compete with U.S. tech giants.
    The spread of AI into every corner of personal and enterprise 
computing is inevitable. Congress should focus on encouraging 
responsible, thoughtful applications of these technologies and on 
maintaining the competitiveness of American champions instead of trying 
to control the spread of AI knowledge. America's adversaries, and cyber 
criminals at home and abroad are sure to use these capabilities at 
every opportunity. It is critical that new regulations around the use 
of AI, however well-intentioned, don't hinder the ability of defenders 
to innovate and deploy these technologies in a beneficial way.
    Thank you again for having me here today. I look forward to your 
questions.

    Mr. Garbarino. Thank you, Mr. Stamos.
    Like you, I agree that the SEC rule is terrible.
    Mr. Stamos. Same.
    Mr. Garbarino. Yes, and hopefully the Senate will fix that 
this week. We can take it up in January.
    Members will be recognized by order of seniority for their 
5 minutes of questioning. An additional round of questioning 
may be called after all the Members have been recognized.
    I now recognize--well, we're not going to go in seniority. 
I'm going to go with Mr. Luttrell from Texas for 5 minutes.
    Mr. Luttrell. Thank you, Mr. Chairman.
    Thank you all for being here today. This is definitely a 
space that we need to be operating in from now until the very 
extensive future.
    Mr. Stamos, you brought up a very valid point. It's the 
lower entities. I had a hospital get hit in my district the day 
after we had a CISA briefing in the district.
    My question is, because people show up after the attack 
happens. We have--and I would say it's inevitably when you peel 
this onion back, it's the human factor that more or less is the 
problem set because we can't keep up with the advances in AI. 
Every second of every hour of every day, it's advancing.
    It seems like, because industry is very siloed, AI ML is 
very siloed, depending on the company you work for, as we try 
to secure artificial intelligence, and we have that human 
factor, my question is--and this may even sound silly. But, 
again, I don't know what I don't know.
    I don't--can AI itself secure AI? Is there any way that we 
can remove as much error as possible and have artificial 
intelligence work to secure artificial intelligence? Because as 
humans, we can't work that fast. Does that question at all make 
sense?
    Mr. Stamos, you start that.
    Mr. Stamos. Yes, absolutely, sir. I think it does make 
sense.
    I think where we're going to end up is, we're moving out of 
a realm, this stuff is happening so fast, where human reaction 
time is not going to be effective anymore.
    Mr. Luttrell. Correct, yes.
    Mr. Stamos. It is going to be AI v. AI. You'll have humans 
supervising, training, pushing the AI in the right direction on 
both the defender side and the attacker side.
    Mr. Luttrell. Is there anything that lives out there right 
now in the AI ML space that's combating on its--and I dare not 
say on its own, because I don't want to talk about the 
singularities and scare people out of their clothes.
    Mr. Stamos. Yes.
    Mr. Luttrell. But are we even remotely close? Because I 
agree with your other statement. You said, we're behind on this 
one.
    Mr. Stamos. Yes, so I--there's a bunch companies including 
our own that use AI in defensive purposes. Most of it right now 
is you get--one of the precepts of modern defense in large 
networks is you gather up as much telemetry data, you suck as 
much data as possible into one place. But the problem is having 
humans look at that is effectively impossible.
    So using AI to look through the billions of events that 
happen per day inside of a medium-sized enterprise is what is 
happening right now.
    The problem is that AI is not super inventive yet, and I 
think that's where we're looking as defenders to make it more 
creative and more predictive of where things are going, and 
better at noticing things, weird attacks that have never been 
seen before, which is still a problem.
    Mr. Luttrell. How do we even see something at that speed? I 
mean, we're--we're into exoscale computing, if I'm saying that 
correctly.
    How does this even--how does the Federal Government model 
and scale this in order to support our industry?
    Mr. Swanson. Yes, it's a great question.
    I think we need to boil it down, though, to the basics in 
order to build----
    Mr. Luttrell. I'm all about that, absolutely, yes, please.
    Mr. Swanson. I think the simplest things need to be done 
first, and that is we need to use and require a Machine 
Learning Bill of Materials so that record, that ledger, so we 
have provenance, so we have lineage, so we have understanding 
of how the AI works.
    Mr. Luttrell. Is it even possible to enclave that amount of 
retrospective-prospective data?
    Mr. Swanson. It is.
    Mr. Luttrell. Where----
    Mr. Swanson. It is, and it's necessary.
    Mr. Luttrell. I believe it's necessary but how do we even--
I don't even know what that looks like. We have 14 National 
laboratories with some of the fastest computers on the planet. 
I don't even think we touch it yet.
    Mr. Swanson. As I said, I think there are millions of 
models live across the United States. But there definitely is 
software from my company and others that are able to index 
these models and create Bill of Materials, and only then do we 
have visibility and audibility of these systems. Then you can 
add security.
    Mr. Luttrell. How do we share that with Rosie's Flower Shop 
in Magnolia, Texas?
    Mr. Swanson. I think that's a challenge, but we're going to 
have to work on that. We can----
    Mr. Luttrell. That's something we're trying to figure out.
    Mr. Swanson [continuing]. Go down with all of you and say: 
How do we bring this down to small, medium-sized businesses and 
not just the large enterprises and the AI incumbents?
    Mr. Luttrell. I have 30 seconds. I'm sorry I can't get to 
each and every one of you.
    But I would really like to see a broken-out infrastructure 
on the viability of threats and the attack mechanism that we 
can populate or support at our level to get you what you need.
    Again, we can't see that speed. That is just--and I don't 
think people can appreciate the amount and just the sheer 
computational analytics that go into where we are right now, 
and we are still in our infancy.
    But if we had the ability to--you can put it in crayon for 
me, that's even better--So we can understand and not only 
speak--and then we understand but we can speak to others, like, 
this is why this is important and this is why we need to move 
in this direction in order to stay up front of the threats.
    But thank you.
    Mr. Chairman, I yield back.
    Mr. Garbarino. The gentleman yields back.
    I now recognize the Ranking Member, Mr. Swalwell, from 
California for 5 minutes.
    Mr. Swalwell. Great. Thank you, Chairman.
    Like every witness, I share in the excitement about the 
potential of AI.
    One piece of this that is not discussed enough is equity in 
AI, in making sure that every school district in my district 
gives a child the opportunity to learn it. I think that's one 
part we have to get right is to make sure you don't have kind-
of two classes of kids, the class that learns AI and the class 
that doesn't have the resources. That's a separate issue.
    But on cybersecurity, Mr. Stamos, if you could just talk 
about: What can AI do on the predictive side to help small and 
medium-sized businesses to kind-of see the threats that are, 
you know, coming down the track and stopping them? Is that 
affordable right now? Is it off the shelf? Like how do they do 
that?
    Mr. Stamos. Yes, so I think this is related to Mr. 
Luttrell's flower shop he was talking about. If you're a small 
or medium business, it has--it has never been either cost-
effective or really honestly possible to protect yourself at 
the level that you're dealing with by yourself.
    So, I think the way that we support small or medium 
businesses is we try to encourage, No. 1, to move them to the 
cloud as much as possible, effectively collective defense. If 
your mail system is run by the same company that's running 
100,000 other companies and they have a security team of 4- or 
500 people that they can amortize all across those customers, 
that's the same thing with AI.
    Then the second is probably to build more what are called 
MSSP, Manage Security Service Provider relationships so that 
you can go hire somebody whose job it is to watch your network, 
and that they give you a phone call.
    Hopefully if everything is worked out and AI is worked out, 
you get a call that says, Oh, somebody tried to break in. They 
tried to encrypt your machine. I took care of it.
    Mr. Swalwell. What can CISA do to work with the private 
sector on this?
    Mr. Stamos. So I like what CISA has done so far. I mean, I 
think their initial guidelines are smart. You know, CISA, like 
I said before, I think a key thing for CISA to focus on right 
now is to get the reporting infrastructure up.
    One of the problems we have as defenders is we don't talk 
to each other enough, right? The bad guys are actually working 
together. They hang out on these forums. They trade code. They 
trade exploits.
    But when you deal with a breach, you're often in a lawyer-
imposed silo that you're not supposed to talk to anybody and 
not, you know, send any emails and not work together. I think 
CISA breaking those silos apart so the companies are working 
together is a key thing they can do.
    Mr. Swalwell. Do you see legal risks that are bending 
companies away from smart, transparent responses?
    Mr. Stamos. Yes, unfortunately, you know, it's something I 
put in my written testimony. I once worked in instant response 
where there were four law firms on every single call because 
different parts of the board were suing each other and there's 
a new CEO and an old CEO and it was a mess. Like you can't do 
instant response in a situation where it's all overly 
legalized.
    I think part of this is--the part of this comes from the 
shareholder stuff is that any company that deals with any 
security breach, any public company, automatically ends up with 
derivative lawsuits that they spend years and years defending 
that don't actually make anything better.
    Then part of it is the regulatory structure, you know, of 
the SEC and such, creating rules that now kind-of really over-
legalize defense.
    Mr. Swalwell. Do we have the talent pool, or the 
willingness, of individuals right now to go into these fields 
to work as a chief information security officer?
    Mr. Stamos. Yes, so we actually have a real talent.
    Mr. Swalwell. Can you speak to that?
    Mr. Stamos. Yes, as a CISO, we have a real talent pool 
problem on two sides. So on--I don't want to say the low-end, 
but the entry-level jobs, we are not creating enough people for 
the SOC jobs, the analyst jobs, the kind of things that most 
companies need.
    I think that's about, you know, investing in community 
colleges and retraining programs of helping people create, get 
these jobs either mid-career or without going and doing a 
computer science degree, which really isn't required for that 
work.
    Then at the high end, chief information security officer, 
CISO is the worst C-level job in all of public capitalism.
    Mr. Swalwell. Why is that?
    Mr. Stamos. Sorry, sir.
    Because it is--you are naturally--like when I was a CISO 
and I would walk in the room, people would mutter under their 
breath, like, Oh, my God, Stamos is here.
    It's partially because you're kind-of the grim reaper, 
right? You're only there for negative downside effects for the 
company. You have no positive impact on the bottom line, 
generally.
    So, it's already a tough place. But what's also happened is 
that there's now legal actions against CISOs for mistakes that 
have been made by the overall enterprise, and this is something 
else I'm very critical of the SEC about is that they're going 
after the CISO of SolarWinds.
    Mr. Swalwell. Is that a deterrent to people wanting to be a 
CISO?
    Mr. Stamos. Oh, absolutely. I have two friends this last 
month who have turned down CISO jobs because they don't want 
the personal liability. They don't want to be in a situation 
where the entire company makes the mistake, and then they're 
the ones facing the prosecution or an SEC investigation. It's 
become a real problem for CISOs.
    Mr. Swalwell. I yield back. Thanks.
    Mr. Garbarino. The gentleman yields back.
    I now recognize my friend from Florida, Mr. Gimenez, for 5 
minutes of questioning.
    Mr. Gimenez. Thank you, Mr. Chairman.
    I just asked my ChatON if there are AI systems right now 
actively protecting computer systems in the United States and 
around the world. They said yes. So you do have, you know, the 
rudimentary aspects of AI, because some months ago we were at a 
conference or at least a meeting with a number of 
technologists, Google, Apple, all those.
    I asked them a question. You know, where--in terms of AI, 
where are we? Imagine that 21 being an adult. Where are we in 
that race?
    They refused to answer and give me an age. What they did 
do, though, they said it was we're in the third inning. So, you 
know, baseball analogy, 9 innings is the full game. So we're 1/
3 of the way there which is kind-of scary because of the 
capabilities right now that I see are pretty scary.
    So at the end of the day, do you think--this could all be 
elementary. I mean, it appears to me that what we're heading 
for is cyber attacks are going to be launched by artificial 
intelligence networks, and they're going to be guarded against 
by artificial intelligence networks and that it's who has the 
smartest artificial intelligence is going to win the race or is 
going to win out in that battle or war, et cetera. Would that 
be accurate?
    Yes, anybody.
    Mr. Stamos. Yes, sir, that's absolutely accurate.
    Mr. Gimenez. So it all--now it means that we have to win 
the artificial intelligence battle. Or is this just going to be 
a race that's going to be forever?
    Mr. Stamos. Yes, I mean, I think basic American economic 
competitiveness is absolutely dependent on us maintaining our 
lead in overall AI technologies, but then especially AI 
technologies that are focused on cybersecurity.
    Mr. Gimenez. So where do you see the future? Am I too far 
off? Is this going to be machines going at each other all the 
time, testing each other, probing each other, defending against 
each other? Then, you know, somebody will learn a little bit 
more, get into one system, and then that system learns and 
combats the next one? But is this just going to be continuous, 
around-the-clock cyber warfare?
    Mr. Stamos. Yes, unfortunately, I think that's the future 
we're leading to.
    I mean, it was 7 years ago, in 2016, DARPA ran an event 
which was about teams building computers that hacked each other 
without human intervention, and that was successful. So, you 
know, we're 7 years on from that kind of basic research that 
was happening.
    I am very afraid of the constant attacks. The other thing 
I'm really afraid of is smart AI-enabled malware that, you 
know, you look at the Stuxnet virus that the United States has 
never admitted to have a part of. But whoever created Stuxnet 
spent a huge amount of money and time building a virus that 
could take down the Natanz nuclear plant, and it required a 
huge amount of human intelligence because it was specifically 
built for exactly how Natanz's network was laid out.
    My real fear is that we're going to have AI-generated 
malware that won't need that, that if you drop it inside of an 
air gap network in a critical infrastructure network, it will 
be able to intelligently figure out, Oh, this bug here, this 
bug here, and take down the power grid, even if you have an air 
gap.
    Mr. Gimenez. This is just, you know, conjecture. OK? Could 
we ever come to the point that we say, What the heck? Nothing 
is ever going to be safe. Therefore, chuck it all and say we're 
going go back to paper and, you know, and we gotta silo all our 
stuff. Nothing can be connected anymore, because anything 
that's connected is vulnerable. All our information's going to 
be vulnerable no matter what we do, that eventually somebody 
will break through and then we're going to be at risk.
    Is it possible that in the future we just say, OK, enough, 
we're going back to the old analogue system? Is that a 
possibility?
    Ms. Moore. I'd like to answer that.
    I think that in our industry, in general, that we have a 
lot of emphasis on the front end of detection of anomalies and 
findings and figuring out that we have vulnerabilities and 
trying to manage threats and attacks. I think there's less so 
on resilience because bad things are going to happen.
    But what is the true measure is how we respond to them, and 
AI does give us an opportunity to work toward: How do we 
reconstitute systems quickly? How do we bounce back from severe 
or devastating attacks? With critical infrastructure, that's 
physical, as well as cyber.
    So the--when you look at the solutions that are in the 
marketplace, in general, the majority of them are on the front 
end of that loop. The back end is where we need to really look 
toward how we prepare for the onslaught of how creatively 
attackers might use AI.
    Mr. Gimenez. OK. Thank you.
    My time is up, and I yield back.
    Mr. Garbarino. The gentleman yields back.
    I now recognize Mr. Carter from Louisiana for 5 minutes of 
questioning.
    Mr. Carter. Thank you, Mr. Chairman.
    Thank all of the witnesses for being here. What an exciting 
content, and as exciting it is, is the fear of how bad it can 
be. What can we learn from the lack of regulations and social 
media, Facebook, and others on the front side that we can do 
better with AI?
    Ms. Moore.
    Ms. Moore. Well, I think that there are many lessons to be 
learned. I think that, first of all, from a seriousness 
perspective, I think that AI has everyone's attention. Now that 
it's dis-intermediated sort-of, the--all the middle people, and 
it's directly in the hands of the end-users, and now folks have 
work force productivity tools that leverage AI.
    We have been using AI for years and years. Anybody here who 
has a Siri or Alexa, you're already in the AI realm.
    The piece that we have to consider is one of the points 
that Congressman Swalwell brought up around the idea of 
education and upskilling and making sure that people have the 
skills in AI that are necessary to become part of this future 
era.
    We work to train folks. We're training over 2 million 
people over the next 3 years strictly in AI. We've all got to 
upskill. This is all--this is all of us collectively.
    I think also a point was brought up about the harmonization 
piece. I think this is one area that we can all agree that if 
we aren't expedient in the way we approach it, that it's going 
to run right over us.
    Mr. Carter. So let me re-ask that. Thank you very much.
    But what I really want to know is we're here. It's here. 
How can we learn, and how can we regulate it better to make 
sure that what has so much power, and so much potential to be 
good, we thwart the bad part?
    One example, I recently saw on social media a few weeks ago 
a message from what looked like, sounded like the President of 
the United States of America giving a message. Now to the naked 
eye, to the individual that's out there that's not paying 
attention to the wonders of AI, that was the President.
    How do we manage that? From a security risk, how do we know 
that it's that this person that's purporting to be Secretary 
Mayorkas telling us about a natural disaster or a security 
breach isn't some foreign actor?
    Any one of you, in fact, everyone quickly. We have about 2 
minutes.
    Mr. Stamos.
    Mr. Stamos. So on the deepfakes, for--on the kind of 
political disinformation, I mean, I think one of the problems 
now is it is not illegal to use AI to create. There's no 
liability of creating totally real things that say embarrassing 
things that are used for political. It is totally legal to use 
AI in political campaigns and political advertising.
    Mr. Carter. For the moment.
    Mr. Stamos. I would--right.
    So I would start there, and then work your way down. I 
think the platforms have a big responsibility here to try to 
detect, but turns out detection of this stuff is a technical 
challenge.
    Mr. Carter. Mr. O'Neill.
    Mr. O'Neill. I was going to say, if we could focus on the 
authentication in giving consumers and the public the ability 
to be able to validate easily the authenticity of what they're 
seeing, that would be important.
    The other thing that was talked about, which I agree with 
Ms. Moore, about the back end is making sure that we have these 
resilient systems. What we've learned in--with social media and 
cybersecurity, in general, is it's an arms race. It always has 
been. It always will be. We're always going to be defending 
and, you know, spy-versus-spy type activities, trying to outdo 
each other.
    We need to make sure that we have the back-end systems, the 
data's available, the ability to recover quickly, and get to 
normal operations back-up.
    Mr. Carter. We have about 40 seconds. Thank you very much.
    Ms. Moore, did you have anything more to add? I want to get 
to Mr. Swanson.
    Ms. Moore. I'd just say that there are technologies 
available today that do look at sort-of defending reality, so 
to speak, but that disinformation and the havoc that it wreaks 
is an extreme concern, and I think that the industry is 
evolving.
    Mr. Carter. Mr. Swanson.
    Mr. Swanson. I think from a manufacturer of AI perspective, 
we need to learn and we need to understand that AI is different 
than typical software. It's not just code. It's data. Yes, it's 
code. It's a very complex machine-learning pipeline that 
requires different tactics, tools, and techniques in order to 
secure it. We need to understand, and we need to learn that 
it's different in order to secure AI.
    Mr. Carter. And I--the disadvantage that we have is 
oftentimes, the bad actors are moving as fast, if not faster, 
than we are. So we stand ready, particularly from this 
committee's standpoint, to work closely with you to identify 
ways that we can stay ahead of the bad actors to make sure that 
we're protecting everything from universities to bank accounts 
to political free speech. There's a real danger.
    So thank you all for being here.
    Mr. Chairman, I yield.
    Mr. Garbarino. The gentleman yields back.
    I now recognize Ms. Lee for 5 minutes from Florida.
    Ms. Lee. Thank you, Mr. Chairman.
    Yesterday, it was widely reported that China launched a 
massive attack, a cyber attack, against the United States and 
our infrastructure. This incident is just one single event in a 
decades-long cyber warfare campaign launched against the United 
States.
    We should not expect these threats to lessen, and should 
continue to engage with the proper stakeholders to determine 
how best to defend our infrastructure, and one of the things 
that's so important is what each of you has touched on here 
today: how artificial intelligence is going to empower and 
equip and enable malicious cyber actors to do potential harm to 
the United States and our infrastructure.
    I'd like to start by returning to something you mentioned, 
Mr. Stamos. I was interested in this point during your 
testimony with Mr. Gimenez. You described a scenario where 
artificial intelligence malware could essentially deploy within 
critical infrastructure on an air gap network.
    Can you share with us a little bit more about how you 
visualize that threat occurring, how would it get to the air 
gap network in the first place?
    Mr. Stamos. Right. So, you know, the example I was using is 
the most famous example. This is Stuxnet, where the exact 
mechanism where the jump to air gap has not been totally 
determined. But one of the theories is that Stuxnet was spread 
pretty widely among the Iranian population, and somebody made a 
mistake. They charged a phone. They plugged in their iPod at 
home, and it jumped on the USB device into the network.
    So, there are constantly, whenever you work with, like, 
secure air gaps networks, there are constant mistakes that 
being made where people hook them up, people bring devices, and 
stuff like that.
    Ms. Lee. Thank you.
    Ms. Moore, I'd like to go back to your point. When you 
talked about really the inevitability, that there will be 
incidents, that there will be vulnerabilities, and that one of 
the things that we can do that's most productive is focus on 
resilience, recovery, rebuilding.
    Would you--you've had unique experience working before in 
Federal Government on several cybersecurity initiatives.
    Would you share with us your perspective on how DHS and 
CISA can best be thinking about those concepts and how we 
should be measuring success and performance in that way?
    Ms. Moore. That's a very good question.
    I think that one of the things that we have to move away 
from, in general, is measuring based on compliance and the 
knowledge only of that--that we have around what we know is a 
known threat.
    So, again, the way I said earlier that we spend a lot of 
time sort-of cataloging all of our concerns and I think that 
when you look at industry and you look at industry globally, 
and you look at the power of AI and you consider the way that 
we manage markets today, the way that we have transactional 
data moving all over the globe in context and we have the 
ability to have that information in front of us in real time, 
that's the way security needs to be. That's the way threat 
intelligence needs to be. It needs to be that way across all 
sectors, but curated for the specific sector.
    So, it'd be a way of sort-of having a common record of 
knowledge amongst all of the critical infrastructure players 
and DHS and the FCEBs and something that we could rely on that 
would be expedient in helping us to at least stay ahead of the 
threat.
    Ms. Lee. As far as the EO itself that directs DHS to 
develop a pilot project, within the Federal Civilian Executive 
branch systems, is there any specific information that you 
think would be helpful for DHS and CISA to share with the 
private sector as they determine lessons learned from the 
pilot?
    Ms. Moore. That question's for me?
    Yes, I think that there is extreme importance around what 
we consider to be iterative learning. In the same way that AI 
models go out and they train themselves iteratively--literally 
train themselves iteratively, we need to do the same thing.
    So in so many instances throughout global enterprises 
everywhere, we have lessons learned, but they're not always 
shared completely, or nor do we model these threats in a way 
that we learn collectively where the gaps are and do that 
consistently.
    Ms. Lee. Mr. O'Neill, a question for you. Do you find that 
generally companies in the private sector are considering the 
cybersecurity background risk profile of artificial 
intelligence products when deciding whether to use them? How 
can CISA better encourage that type of use of AI that is secure 
by design?
    Mr. O'Neill. Thank you for the question.
    I'm a big fan of CISA, especially with the guidance, the 
tactical and strategic information they're providing to 
businesses about threat actors, and so forth. In their secure 
by design, one of the things they call for is doing threat 
modeling. When you're designing applications and systems, and 
so forth, if you're doing the threat modeling, you're basically 
now having to contend with and understand that you're going to 
be attacked by automated systems or having AI used against you.
    So I think that helps--sorry. I don't know what that is.
    Ms. Lee. Not to worry.
    Mr. O'Neill. That would be one thing.
    The other thing I would recommend to CISA would be, they're 
very focused on giving great information about the type of 
exploits that attackers are using, and it really helps with 
defenses, and so forth. But, again, if they could take some of 
that leadership focused on resiliency, preparedness, and 
recovery so that the companies once you--it's a matter of time 
that you will likely have an event. It's how you are able to 
respond to that event. There's many companies, such as the one 
that I work for that, you know, help companies prepare for the 
inevitable event to be able to recover, and so forth.
    But having the workaround procedures, especially for 
critical infrastructure, to get that working and functional so 
that it can carry out its mission while the recovery occurs, 
that type of thing, and having your data secured, so that it's 
available and before the attackers got to it, encrypted it and 
you can go to a known good copy and stuff is very important. I 
think they could expand their scope a little more to help 
companies to be able to really have the workaround procedures 
and the testing, and so forth, just like they do the red team 
testing to find the vulnerabilities and try to prevent the 
issues, but also on the backside, to recover and learn from the 
incidents to drive continuous improvement.
    Thank you.
    Ms. Lee. Thank you, Mr. O'Neill.
    Mr. Chairman, I yield back.
    Mr. Garbarino. Not a problem. Will just deduct that extra 
time from Mr. Menendez.
    I now recognize Mr. Menendez from New Jersey for 3.5 
minutes.
    Mr. Menendez. I appreciate that, Mr. Chairman. I'd always 
be happy to yield to my colleague from Florida who is one of 
the best Members of this subcommittee, and I always appreciate 
her questions and insight.
    Mr. Chairman, Mr. Ranking Member, thank you for convening 
today's hearing. To our witnesses, thank you for being here.
    I want to talk about one of the fundamental structural 
issues with AI, how its design can lead to discriminatory 
outcomes. These types of generative AI that have captured 
public attention over the last year produce content based on 
vast quantities of data. Here's the problem: If those vast 
quantities of data, those inputs are biased, then the outcome 
will be biased as well.
    Here are a few examples. The Washington Post published a 
story last month about AI-image generators amplify bias in 
gender and race. When asked to generate a portrait photo of a 
person in Social Services, the image generator Stable Diffusion 
XL issued images exclusively of non-White people. When asked to 
generate a portrait photo of a person cleaning, all of the 
images were of women.
    In October, a study led by Stanford School of Medicine 
Researchers was published in the academic journal, ``Digital 
Medicine,'' that showed that large language models could cause 
harm by perpetuating debunked racist medical ideas.
    These questions are for any of our witnesses. How can 
developers of AI models prevent these biased outcomes?
    Ms. Moore. I'll take that.
    First of all, in terms of both from a security standpoint 
as well as from a bias standpoint, all teams need to be 
diverse. Let me just say that from a security standpoint when 
we're doing things like red-teaming and we're going in and 
assessing vulnerabilities, we need a team of folks that are not 
just security people. We need folks who are also very deep in 
terms of subject-matter expertise around AI and how people 
develop models, training models associated with malware that is 
adaptive, maybe, in nature, but those teams don't look like our 
traditional red-teaming teams.
    On the bias front, same thing. The data scientists, 
developers, and folks that are building the models and 
determining the intent of the model need to look like everybody 
else who is impacted by the model. That's how we move further 
away from disparate impact where groups are impacted more than 
others. Algorithms control who gets in what school, what kind 
of insurance you have, where you live, if you get a mortgage, 
all of these things. These are very important things that 
impact our lives.
    So when folks are building models, the intent of the model 
and the explainability of the model, being able to explain the 
purpose, where the data came from, and attribute those sources, 
being able to ensure that that model is ethical. These are all 
things that security may not--may be able to point out to you 
the problem, but the tone is at the top of the organization in 
terms of----
    Mr. Menendez. Yes, so I want to follow up with you, and 
then I'll circle back to any of the other witnesses on that 
first question. But that's a question that we've sort-of 
grappled with on this committee is one just--the work force 
development within the cyber community and what that looks like 
and then ensuring, right, especially with AI as you allude to 
in your answer, that it's reflective of the larger community.
    In your opinion, how do we build teams? How do we grow the 
cyber work force so it's a diverse group of individuals that 
can bring these backgrounds into the cyber career?
    Ms. Moore. Well, I think it's commitment. I know that IBM, 
for instance, has stood up 20 HBCU cybersecurity centers across 
11 States, and this is all at no additional cost to the folks 
who will get this training. I think that AI is not unlike 
cybersecurity. I think that when we look at the threats 
associated with that AI, it's just an expanding of the attack 
surface.
    So, we really need to treat this not as a completely 
totally different thing, but employ the tactics that have 
worked in educating and training people and ensuring that there 
is not a digital divide in AI in quantum and cybersecurity and 
all of the emerging technology areas.
    I also think that a best practice is to implement these 
things K-12, to start when folks are very young, and as they 
grow and as the technologies evolve, the knowledge can be 
evolving as well.
    Mr. Menendez. Agree with that approach and would love to 
build that from an earlier age.
    I have to pivot real quickly. One of the things that I want 
to focus on is less than a year before the 2024 election, we 
see the potential for generative AI increasingly likely 
spreading misinformation with respect to our elections.
    For any of the witnesses, what specific risk does AI pose 
to election security?
    Mr. Stamos. So when I--I think there's too much focus on a 
specific video or image being created of a Presidential 
candidate. You know, if that happened, every media organization 
in the world would be looking into whether it's real or not. I 
think the real danger from AI in 2024 and beyond--and, again, 
you've got India. You've got the European Union. There's a ton 
of elections next year. The real problem is this is a huge 
force multiplier for groups who want to create content.
    If you look at what the Russians did in 2016, they had to 
fill a building in St. Petersburg with people who spoke 
English. You don't have to do that anymore. A couple of guys 
with a graphics card can go create the same amount of content 
on their own. That's what really scares me is that you'll have 
groups that used to not have the ability to run large 
professional troll farms to create all this content, the fake 
photos, the fake profiles, the content that they push, that now 
a very small group of people can create the content that used 
to take 20 or 30.
    Mr. Menendez. I think if we quickly share, right, through 
social media, so it's a force multiplier, exactly right, not 
just the production but sharing quality as well rapidly 
increases, and the spread of that is going to be a challenge.
    I wish I had more time, but the Chairman distracted me at 
the beginning of my questioning, so I have to yield back the 
time that I don't have.
    Mr. Garbarino. You're not allowed to take time from me, so 
it's all right. I believe we're going to do a second round 
because this is so interesting.
    So the gentleman yields back time that he didn't have.
    I now recognize myself for 5 minutes of questions.
    Ms. Moore, Mr. O'Neill brought up red-teaming in one of his 
answers before, and I understand CISA is tasked in the 
Executive Order with supporting red-teaming for generative AI. 
Do you believe that CISA has the expertise and bandwidth 
necessary to support this? What would a successful red-teaming 
program look like?
    Ms. Moore. I think that CISA is like everyone else. We're 
all looking for more expertise that looks like AI expertise in 
order to be able to change the traditional red team.
    With the traditional red-teaming, the important piece of 
this is that so you're essentially testing the organization's 
ability to both detect the threat, and also, how does the 
organization respond, and these are real-world simulations.
    So, once you've established that there are gaps, the hard 
part is remediation. The hard part is, now I need more than the 
folks that have looked at all of this from a traditional 
security standpoint, and I need my SMEs from the data 
scientists, data engineer perspective to be able to help to 
figure out how to remediate.
    When we're talking about remediation, we're back to where 
we started in terms of this discussion around, We have to close 
the gaps so that they are not penetrated over and over and over 
again.
    Mr. Garbarino. So I guess there's a concern, you know, if 
we find the weakness, it might not be the knowledge to fix it?
    Ms. Moore. We have to upscale.
    Mr. Garbarino. OK.
    Mr. O'Neill, another question about the EO. CISA is tasked 
with developing sector risk--sector-specific risk assessments 
in the EO, but I understand there are many commonalities or 
similar risks across sectors. How can CISA ensure that it 
develops helpful assessments that highlight the unique risks 
for each sector? Would it make more sense if CISA--for CISA to 
evaluate risk base on use cases rather than sector by sector?
    Mr. O'Neill. I believe CISA needs to take the approach like 
they've done in other areas. It's a risk-based approach based 
on the use case within the sector, because you're going to 
need, like, a higher level of confidence for an artificial 
system that may be used in connection with critical 
infrastructure making decisions, versus artificial intelligence 
that would be used to create a recipe or something like that 
for consumers.
    But the other thing where CISA could really help, again, is 
the secure by design in making sure that when they're doing--
when you're doing threat modeling, you're not only considering 
the malicious actors that are out there, but also the 
inadvertent errors that could occur that would introduce, like, 
bias into the artificial model--artificial intelligence model.
    Thank you.
    Mr. Garbarino. So you've just said it, they've done this 
before. So there are existing risk assessment frameworks that 
CISA has, that CISA can build off of. What would they be? 
That's for anybody if anybody has the answer.
    Ms. Moore. I'll take that one.
    I think that one that is tremendous is MITRE ATLAS. MITRE 
ATLAS has all attacks associated with AI that are actually 
real-world attacks, and they do a great job of breaking them 
down, according to the framework, everything from 
reconnaissance to discovery, and to tying and mapping the 
activities of the bad actor to their tactics, techniques, and 
procedures, and giving people sort-of a road map for how to 
address these from a mitigation standpoint, how to create 
countermeasures in these instances. The great part about it is 
that it's real-world. It's free. It's right there, out there on 
the web.
    I would also say that one other resource that CISA has at 
its disposal which is very good is the AI arm of the Risk 
Management Framework, but the play books, the play books are 
outstanding. Now, there's the Risk Management Framework, but 
the play books literally give folks an opportunity to establish 
a program that has governance.
    Mr. Garbarino. Mr. Swanson, CISA's AI road map details a 
plan to stand up a JCDC for AI. You know, this committee has 
had questions about what CISA does with the current JCDC, and 
we haven't gotten them all answered, but they want to do this 
JCDC for AI to help share threat intel related to AI.
    How do you share information with CISA currently? What 
would the best structure for JCDC.AI, what would that look 
like?
    Mr. Swanson. Yes, thanks for the question.
    Like my fellow testimonial-givers up here, we talked about 
the sharing of information and education that's going to be 
critical in order for us to stay in front of this battle, this 
battle for securing AI. You asked specifically, How is my 
company sharing? We actually sit in Chatham House rules events 
with MITRE, with NIST, with CISA in the room, and we share 
techniques that adversaries are using to attack the systems. We 
share exploits. We share scripts. I think more of this 
education is needed, and also amongst the security companies 
that are up here so that we can better defend against AI 
attacks.
    Mr. Garbarino. Thank you very much.
    My time is up.
    I think we're going to start a second round. I will now 
recognize--I believe second round we start with Mr. Gimenez for 
5 minutes.
    Mr. Gimenez. Thank you, Mr. Chairman.
    I'm going back to my apocalyptic view of this whole thing. 
OK. I guess, I may have been influenced by Arnold 
Schwarzenegger, you know, in those movies, you know, with 
coming from the future and these machines battling each other. 
I think that's not too far off. I mean, it's not going to be 
like that, but I'm saying the machines battling each other is 
going to be constant, so the artificial intelligence battling 
each of the artificial intelligence until the one that is 
dominant will defeat one and penetrate and defeat the system, 
whether it's the aggressor or the defender, but--which, to me, 
makes it much more important that we are resilient and that 
we're not wholly dependent on anything. Instead of becoming 
more and more dependent on these systems, we become less 
dependent. Yes, it's nice to have. As long as they're working, 
they're great. But you have to assume that one day they won't 
be working, and we have to continue to operate.
    So where are we in terms of resiliency of the availability 
of us to decouple critical systems vital to America? Our 
electric grid would be one. Our piping would be another, et 
cetera. All of those things that are vital to our everyday 
life, where is CISA in trying to get companies and the American 
Government to be able to decouple or extract itself from the 
automated systems and still give us the ability to operate, 
because I do believe that every one of those systems eventually 
will be compromised, eventually will be overcome, eventually 
will be attacked, and we may find ourselves in really bad 
shape, especially if it's an overwhelming kind of attack to try 
to cripple, you know, America. So anybody want to tackle that 
one?
    Because we seem to be looking more and more about how we 
can defend our systems. I believe that that's great, but those 
systems are going to be compromised one day. They're going to 
be overwhelmed one day. So we have to have a way to not be so 
dependent on those systems so we can continue to operate.
    Mr. O'Neill. Go ahead.
    Ms. Moore. I think that one of the things with any sort of 
preparation for the inevitable, or preparation for potential 
disaster, you know, catastrophe, if you will, really is rooted 
in exercises. I think that from an exercise perspective, we 
have to look at where we are vulnerable certainly, but we have 
to include all of the players. It's not just the systems that 
get attacked, but also everything from every place within the 
supply chain, as well as emergency management systems, as well 
as municipalities and localities.
    I think that one of the things that CISA does so well is 
around PSAs, for instance. I know that this is sort-of like a 
first step in this realm. What I mean by that is that does the 
average American know exactly what to do if they go to the ATM 
and it's failed or if their cell phone is not working or if 
they can't get community----
    Mr. Gimenez. Oh, No, cell phone is not working, we're done.
    Ms. Moore. Yes, exactly, exactly. So we have to have 
default strategies, and the key piece of that is that these 
things have to be devised and also communicated so everyone 
sort-of knows what's to happen when the unthinkable happens.
    Mr. Gimenez. Yes, Mr. Swanson.
    Mr. Swanson. Something to add here, you mentioned AI 
attacking AI, what is actually being attacked and what is 
vulnerable? What is vulnerable is the supply chain. It's how AI 
is being built. It's the ingredients as I mentioned before in 
my cake analogy. Most of AI is actually built on open-source 
software. Synopsis did a report that 80 percent of the 
components in AI is open-source. Open-source is at risk.
    CISA can set guidelines and recommendations and also, with 
the Government's help, bug bounties to actually go in there and 
secure the supply chain. That's what AI will be attacking.
    Mr. Gimenez. I'm not so much--you know, I know about the 
supply chain. I was actually worried about the critical 
infrastructure itself----
    Mr. Swanson. Yes.
    Mr. Gimenez [continuing]. Our grid, our electric grid being 
knocked out, our energy grid being knocked out. You're right 
about food--the supply chain, et cetera, food and all that, all 
that being knocked out. I'm not so sure that we're resilient--
I'm not so sure that--well, I'm pretty sure that we have relied 
way too much on automated systems that are going to be very, 
very vulnerable in the future, and that we haven't focused 
enough on resiliency. If, in fact, those systems go down, that 
we are heavily reliant on, do we have a way to operate without 
those systems?
    Mr. Swanson. Mr. Chairman, if I may.
    Mr. Garbarino. Yes.
    Mr. Swanson. So I would like to respond. So your scenario, 
totally get, and let me play that back, industry----
    Mr. Gimenez. By the way, the movies were ``The 
Terminators.'' OK. Go ahead.
    Mr. Swanson. All right. The industry, energy pipelines. The 
use, predictive maintenance in pump seals and valves. The 
attack, we're going to trick, manipulate models to purposely 
invalidate alerts on pressures, impact physical and mechanical 
failure. How do we remediate? How do we solve for this? This is 
where pen testing and red-teaming comes in, model robustness. 
When I talk about the supply chain, it's how these things are 
built and making sure those are resilient.
    But I agree with we ought to protect the critical 
infrastructure, and we need to take records of what machine 
learning is in what infrastructure and go and stress test those 
machine learning models.
    Mr. Gimenez. Thank you.
    I, again, yield back.
    Mr. Garbarino. The gentlemen yields back.
    I now recognize the Ranking Member from California, Mr. 
Swalwell, for a second round of questions.
    Mr. Swalwell. Thank you, Chair.
    Ms. Moore, pivoting to the international realm, how 
important is it that our international partners and allies work 
with us in setting AI security standards? What role do you see 
for CISA and the Department of Homeland Security in supporting 
this effort?
    Ms. Moore. What I see internationally is that the whole 
world depends quite a bit on the National Institute of 
Standards and Technology, NIST. I see that with quantum safe, 
and I see that also with AI, and that this foundational way of 
thinking about things offers us a level of interoperability 
that makes it as global an issue as the way that we function as 
a global society.
    I think from the standpoint of the work that's happening 
today with CISA and DHS, I feel that globally, they're very 
focused on leveraging those tools and the communications aspect 
of it. We see a lot of duplication around the world of people 
picking up these best practices and standards. So I think we 
need to continue in that direction as much as possible for the 
future, but it's very similar to many other areas that CISA and 
NIST and DHS work with today.
    Mr. Swalwell. Thank you.
    Mr. O'Neill, just as a part of an international company, 
what's your perspective on it?
    Mr. O'Neill. Well, one of CISA's strengths is the way that 
they go out and they constantly engage with stakeholders, both 
in the United States and the international circles. You know, 
cybersecurity is a team sport, and, you know, cybersecurity 
practitioners within the United States and internationally need 
to work together to be able to face the common threat.
    I think that's all.
    Mr. Swalwell. Mr. Stamos, I want to vent a little bit. As a 
former prosecutor, perhaps there's no crime today that exists 
that has less of a deterrent in its punishment than cyber 
crimes, and it's really frustrating to see, whether it's an 
individual who's the victim, whether it's, as you said, any 
size company, or our country, it's frustrating because you 
can't punish these guys. It seems like they're just 
untouchable.
    I wanted you to maybe talk a little bit about--like 
recognizing that if these attacks are coming from Russia or 
China or, you know, other Eastern European countries, many of 
them are not going to recognize a red notice. So we could work 
up a case and send a red notice to Moscow, like, they're not 
going to go and grab these guys.
    Do you see any deterrent that's out there? Is there a way 
to punish these guys? Does AI help us? I know we have our own 
limitations on going offensive for private companies, but 
should we reexamine that? I just--like, how do you impose a 
cost on these actors who are just savage in the way that they 
take down our individuals and companies?
    Mr. Stamos. Yes. I mean, it is extremely frustrating to 
work with companies and to watch these guys not just demand 
money but text family members of employees, and do ACH 
transfers from small vendors just to intimidate them and to 
laugh about it effectively.
    I mean, I think there's a bunch of things we can do. One, I 
do think like the FBI workups and the red notices do have a 
deterrent effect. You know, people don't--Russians love to go 
visit their money in Cyprus, right, especially in the winter. 
So, locking people, 22-year-olds that can never travel for the 
rest of their lives, I think actually is a positive thing. 
Like, enjoy Kazakhstan, right. So I do think that is good.
    I would like to see--obviously, I don't see what happens on 
the Classified side. It felt like after Colonial Pipeline that 
there was an offensive operation by cyber command against a lot 
of work to try to deter these guys and to disrupt their 
operations, and that that is, perhaps, slacking off. So I would 
like to see the United States--I don't think private companies 
should do it, but I do think the U.S. offensive capability 
should be used against them.
    Then I think it's seriously time for Congress to consider 
outlawing ransomware payments.
    Mr. Swalwell. Can we just briefly talk about that? Because 
you and I have talked about this for a long time, and I do 
think in a perfect world, that stops it. But what do you do in 
the gap between the day you outlaw them and then the weeks 
after where they're going to test to see if they're paid and 
you can see just a crippling of critical infrastructure?
    Mr. Stamos. Yes. I mean, if you outlawed ransom repayments, 
there would be 6 months of carnage as they try to punish the 
United States and to reverse it.
    I think a couple of things have to happen here. No. 1, I 
think that this is something that Congress should do, not the 
administration unilaterally, because I think it needs to be a 
unified political stand of both political parties saying we are 
not doing this anymore. We are not sending billions of dollars 
a year to our adversaries to hack us. So it doesn't become a 
political football. If the admin did it by themselves, I think 
it would be much easier to blackmail them into undoing it, 
right. Congress needs to speak as one voice here.
    Second, I think Congress would need to set up--delay the 
implementation, and especially focus on nonprofits and local 
and State municipalities. You know, you could be buying them 
insurance policies. There's been a lot of interesting work 
around State national guards, State guards of direct 
commissions. I know, like, CISOs my age getting direct 
commissions, so that if something bad happens to a State or 
locality, they have the legal authority to go work with them. I 
do think, though, it's the time to do that because the current 
status quo is not working.
    Mr. Swalwell. Great.
    I yield back. But, again, Chairman, I think this has been 
one of our most productive hearings this year. Thank you and 
the witnesses for making it so constructive.
    Mr. Garbarino. Thank you.
    The gentleman yields back.
    I now recognize Mr. Ezell from Mississippi for 5 minutes of 
questions.
    Mr. Ezell. Thank you, Mr. Chairman. Thank you all for being 
here today and sharing with us because we are way behind, and 
we recognize that.
    So the capabilities of AI are advancing very rapidly as 
we've talked about today. It's just kind-of like when you buy a 
phone, it's outdated, and they want to sell you another one.
    I have some concern about Government oversight and 
overregulation that I'd like to talk about a little bit. I 
spent most of my career as a law enforcement officer and a 
sheriff. I've seen directly how Government red tape can get in 
the way of law enforcement. If American industry is smothered 
by regulation and reporting requirements, our adversaries are--
you know, they're going to develop new AI capabilities before 
we do, and we cannot let this happen.
    I have concerns that the Biden administration's Executive 
Order on AI grants for departments and countless agencies' 
jurisdiction over AI. Specifically under this committee's 
jurisdiction, DHS is tasked with establishing guidelines and 
best practices around AI. As always, when regulating an 
industry, especially when the Government is involved, the 
language must be clear in its intent so that we can get it 
right.
    Ms. Moore, how could a lack of coordination between Federal 
agencies and the private industry, especially while 
establishing guidelines, hamper innovation in AI?
    Ms. Moore. I think it's most important that we focus on not 
hampering innovation for starters. By that, what I mean is that 
we have, you know, these open-source systems that people who 
are in medium and small businesses or technology groups, or 
research and development groups, have an opportunity to 
innovate and help bring us further along than what we are today 
from a cybersecurity standpoint, from an AI standpoint, and we 
can't stifle that innovation. A lot of the greatest ideas come 
out of those entities.
    But also, we have to guard against the idea of AI as a 
technology. This is such an inflexion point. It's too important 
a technology to be just in the hands of a small group of large 
organizations, let's say.
    So, I think that there is a balance that needs to be 
struck, that we need to be able to walk and chew gum at the 
same time, but that we need thoughtful leadership around 
achieving AI that's not predatory, achieving AI that's open, 
achieving AI that is like when you go to a restaurant, and you 
get to see the kitchen and how the people are cooking your food 
and whether there's cleanliness and there's----
    Mr. Ezell. Right.
    Ms. Moore [continuing]. Good best practices there. AI needs 
to be open as well in that way.
    Mr. Ezell. That's why we need to try and keep the 
Government out of it as much as possible.
    Ms. Moore, in your opinion, should DHS and CISA have a role 
in the regulation of AI?
    Ms. Moore. I think that DHS and CISA have a lot of inputs 
and a lot of important learnings that need to be incorporated 
in any sort of discussion around regulation. I also think that 
really with AI, we have to look at the use cases. We really 
have to examine that, and everything needs to be--we need to 
offer a standard of care that allows us to not be draconian. 
This is an evolving space, and so, we want to make sure that 
the folks who are closest to it are experts, are also engaging 
in providing inputs.
    Mr. Ezell. Thank you very much.
    You know, I was listening to Representative Swalwell talk 
about lack of prosecution, lack of anything getting done. Going 
back to my small home town, one of our local churches got 
hacked and locked everything down, and the preacher had to use 
his own credit card to pay $500 to get them to turn that thing 
loose. A hospital system was hacked. It's just--you know, it 
goes on and on, and it seems like there's just no recourse. 
It's almost like credit card fraud sometimes. As a law 
enforcement officer, I have seen so many victims out here, and 
there's hardly anything that can be done about it.
    Would any of you like to expand on that just a little bit?
    Mr. Stamos. If I may, sir, I think you're totally right. I 
think one of our problems is we have a serious lack in--gap in 
law enforcement between local and the FBI. If you're a big 
company, you can call an FBI agent. You can get 3 or 4 of them 
on the phone with you. They will be as supportive as possible. 
If you're Mr. Luttrell's flower shop or the church in your 
district, they're not going to get an FBI agent on the phone. 
If they call the local police, generally those folks are not 
prepared to help with international cyber crime.
    Mr. Ezell. And we're not.
    Mr. Stamos. So I do think there's a gap here that Congress 
should consider how to fill. A good example of that, where this 
has been positive is in work called the ICAC which is the child 
safety world, which I've done a bunch of work in, where the 
creation of local folks who are then trained and supported by 
Federal agencies to do child safety work, and in the end, it's 
local sheriffs, deputies, and local detectives, but they can 
call upon investigative resources from the Secret Service, from 
the FBI, from HSI. I think something like that around cyber 
crime of investing in the local capabilities would be a good 
idea.
    Mr. Ezell. Thank you very much.
    Mr. Chairman, I yield back. Thank you all for being here 
today.
    Mr. Garbarino. Thank you, Mr. Ezell. The gentleman yields 
back.
    I now recognize Mr. Carter from Louisiana for 5 minutes.
    Mr. Carter. Thank you, Mr. Chairman.
    Ms. Moore, you mentioned in your earlier comment about IBM 
and the efforts that you had with HBCUs. Can you expound on 
that as we know that HBCUs have been the target of many cyber 
attacks?
    Ms. Moore. Yes, indeed.
    So, we developed a program where rolling out these skill 
sets, or these CLCs--they're cyber leadership centers--in HBCUs 
around the country, so there are roughly in 11 different 
States, but 20 of them and working with the faculty and working 
with a liaison within the HBCU to develop and to share 
curricula that we've established that is very professional 
grade in terms of our own expertise that we bring to it.
    But we recognize that there's a tremendous amount of talent 
everywhere, and that we really have to pursue--with the skills 
gap that we see in cybersecurity, it's kind of a--as someone 
mentioned on the panel here, a team sport, and we need all 
hands on deck. We also need to ensure that communities are not 
left behind, that everyone has an equal opportunity to be able 
to learn the skill sets and have the credentials necessary to 
work in this important field.
    Mr. Carter. So you mentioned 10 States or 10 institutions--
I don't know if it was 10 States or 10 institutions, but 
whatever the case--for HBCUs that are out that are in need of 
the services that you indicated. Is there more bandwidth to 
include additional ones? Is there any directions you can give 
me? I represent Louisiana with a rich group of HBCUs that would 
love to have IBM partner or look at opportunities to be a part 
of that. Any direction you can give?
    Ms. Moore. Well, it's 20 centers across 11 States, and we'd 
be happy to talk to you about what you would like to see happen 
there in Louisiana.
    Mr. Carter. Fantastic. Thank you.
    Mr. O'Neill, with the emergence of AI, are you concerned 
about what this means for academia for students using ChatGPT 
or others for term papers or research or the validity of 
students following an exercise or an assignment without 
cheating, if you will, through AI?
    Mr. O'Neill. I'm concerned about that, but it also enables 
students also to be more empowered with more information, and 
maybe to even be more effective at what they're learning, and 
so forth. So they're going to have to learn different in a 
world with AI. Like, they're going to have to learn to use, or 
write prompts to get the information out of AI, and they're 
going to have to learn to look at the sources that are cited in 
the output from the AI to validate that they're not receiving 
hallucination--hard word for me to say--and so forth. So it's--
--
    Mr. Carter. What about the student that asks a direct 
question of ChatGPT, and insert the answer based exactly on 
what was asked? How do we determine the validity of that? How 
do we make sure that students are not misusing--while we 
understand that it's a great tool for research----
    Mr. O'Neill. Yes, yes, yes.
    Mr. Carter. Anybody can chime in. Ms. Moore or Mr. Stamos, 
it looks like you guys----
    Mr. O'Neill. Yes, I would just say it's like a mini arms 
race because you have the students that want to use it, some of 
them, for nefarious purposes, but then you have the counter 
programs that academia is using to identify when it's being 
used, and so forth.
    So right now, I was just reading in the news about this 
where, you know, the AI to detect the use of AI----
    Mr. Carter. I have about 55 seconds. Would you mind 
sharing, Mr. Stamos and Ms. Moore?
    Mr. Stamos. Yes. I mean, teach two classes at Stanford, and 
this is a huge discussion among the faculty is how do you give 
an essay in the modern world. I mean, I think one of the great 
things about AI is it is going to even out the playing field 
and that people who have lack of, you know, business email 
skills, perfect English and such, the AI will be a huge help, 
but you don't want to give kids that crutch as they get there. 
This is going to become much harder over the next couple of 
years, because AI is being turned on in default. So students 
won't have to go actively cheat. They will get in trouble for 
not going and turning off things that are turned on by default 
in Google Docs or in Microsoft Word and such. So, I think it's 
a huge problem for both higher and lower education.
    Mr. Carter. Ms. Moore.
    Ms. Moore. I would just say that the space is evolving, and 
that there are many tools that are out there to detect this in 
papers and research work and that sort of thing. But you have 
to remember that generative AI looks and scans all of the work 
that's out there, and a lot of people have a lot of work out 
there.
    So, being able to defend against that and also being able 
to make sure that there is critical thinking happening in 
universities and critical thinking happening still for students 
even though they have this, you know, magnificent tool. I 
recently had a friend whose daughter had to appeal the 
university because she was accused of that, of having used a 
generative large language model. In reality, she was very--she 
was very prolific on the internet, and it was picking up her 
own work. So we have a ways to go with these technologies.
    Mr. Carter. Yes. Thank you. My time is over.
    Mr. Garbarino. The gentleman yields back.
    I now recognize Ms. Lee from Florida for a second round of 
questions.
    Ms. Lee. Thank you, Mr. Chairman.
    Mr. Swanson, I want to return to something you said a 
little while back, which was a comment that 80 percent of open-
source software is at risk. I know you touched on this as well 
in your written testimony and specifically encouraged this 
committee and Congress to support certain measures, including 
bug bounty programs and foundational artificial intelligence 
models that are being integrated into Department of Defense 
missions and operations.
    Would you share for us a little bit more about how bug 
bounty programs specifically could help in that kind of 
program, and any other specific things you think Congress 
should be looking at or considering in helping protect our 
infrastructure and critical systems as relates to AI?
    Mr. Swanson. Yes. Thank you for the question. I appreciate 
it.
    My statement was 80 percent of the components, the 
ingredients used to make AI come from open source. As such, 
protecting open source is really important. So what is the bug 
bounty program? A bug bounty program basically gets to a threat 
research community and focuses them to find vulnerabilities, 
and in this case, find vulnerabilities in machine learning 
open-source software.
    I'll give an example. Through this research, through a bug 
bounty program, we were able to find a critical vulnerability 
in what's called a model registry. What is a model registry? 
It's where we host machine learning models that power AI. What 
was the exploit? A malicious actor can get access to the model 
registry to modify the code, steal the model, or perhaps 
traverse it to get to other sensitive areas of critical 
infrastructure. NIST and MITRE gave this a critical 
vulnerability score.
    Now, a lot of research hasn't been done in open-source 
software as it relates to machine learning and bug bounty 
programs. It's an area--if you look at all of the big security 
incumbents, it's not where they focus. But yet, it's the 
massive amount of ingredients that's used in AI machine 
learning.
    So what I was asking Congress was for focus to build to 
say, Hey, let's protect the ingredients. As I shared with Mr. 
Gimenez, it's not AI attacking AI in the models. It's going to 
be attacking the supply chain of how these things are built, 
and bug bounties will help find vulnerabilities and 
remediations to fix those ingredients.
    Ms. Lee. Thank you.
    Mr. Stamos, I would like to have you elaborate a bit. 
Earlier, we were talking about the concept of Congress 
potentially outlawing ransomware payments, and you indicated 
that you anticipated that if Congress were to do so, it would 
follow--it would be followed with 6 months of carnage.
    Would you tell us a little bit more about what you 
anticipate that 6 months of carnage would look like, and what 
could we be doing to help mitigate that vision?
    Mr. Stamos. Yes. Maybe I'm being a little too colorful 
here, but I do think--these are professionals. They're used to 
making tens of millions of dollars a year. They are somewhat 
rational actors. So eventually I think that they will have to 
adapt their economic model. But in the short run, being cut 
off, they would do everything they can to try to convince the 
United States that this policy was not appropriate.
    So, I think there are things you can do. No. 1, no 
exceptions. I've heard people talk about bug bounty limits, and 
they say, Oh, well, we'll exempt hospitals or something like 
that. If you have an exception, then that's all they'll do, 
right. If there's exceptions for a hospital, all they're going 
to hack is hospitals, right. So it's terrible, but we'd have to 
live through the President getting up there and saying, We are 
not negotiating with terrorists. We are not paying this bounty. 
It's terrible for the people who live in this place. We're 
going to give them as much support as possible.
    Second, I do think that there's a role to play, especially, 
like I said, the locals and the States are in real trouble 
here, and so, preemptive grants for them to upgrade their 
architectures. When we usually see these bug bounty--or I'm 
sorry--these ransomware actors are really good at breaking the 
networks that are built the way that Microsoft told you to 
build them in 2016, right. That's kind-of a very traditional--
not to get too technical, but Active Directory, SCCM, like a 
very traditional Windows network that the bad guys love that, 
and that's how your local States, your counties and such. So, 
an aggressive move to try to get them on to more modern 
technology stacks is something you could do in that run-up.
    Then, I think the third is, like Mr. Swalwell was talking 
about, trying to impose costs on the bad guys that--in that 
active time in which they are trying to, you know, deter the 
Government from standing strong, that you're also actively 
going after them. You're doxxing them. You have the FBI 
indicting them. You have cyber command, you know, destroying 
their command-and-control networks and such. Eventually, they 
would have to change their business models to match. That 
wouldn't make the world--that wouldn't all of a sudden, make 
America be totally secure, but it would get rid of the cycle of 
these guys being able to get better and better, both by 
practicing their craft all day, and also collecting all of this 
money and building these huge networks.
    Ms. Lee. Thank you, Mr. Stamos.
    Mr. Chairman, I yield back, on time.
    Mr. Garbarino. The gentlelady yields back on time, actually 
1 second. Give that to Mr. Menendez.
    Mr. Menendez, I now recognize you for 5 minutes for a 
second round of questions.
    Mr. Menendez. Thank you, Mr. Chairman.
    I just want to return back to the risks that AI poses to 
election security. I appreciate Mr. Stamos' answer. I just want 
to quickly open up to any of the other witnesses if they would 
like to expand on it.
    OK. So let me ask a question. How can CISA best support 
election officials in combating the risk? Mr. Stamos, I'll go 
back to you.
    Mr. Stamos. So not just about AI, but the kind of coalition 
that came together to protect the 2018, 2020, 2022 elections 
has fallen apart. I think Congress has a role to play here. 
This is due to investigations elsewhere in the House and to 
civil lawsuits. There's a lot of arguments over what is the 
appropriate role of Government here, and there are totally 
legitimate arguments here, right. There are totally legitimate 
arguments that there are things that the Government should not 
do, especially when we talk about mis- and disinformation.
    Instead of this being a 5-year fight in the courts, I think 
Congress needs to act, and say these are things that the 
Government is not allowed to say, this is what the 
administration cannot do, with the social media companies; but 
if the FBI knows that this IP address is being used by the 
Iranians to create fake accounts, they can contact Facebook, 
right. That--recreating that pipeline of cyber command and NSA 
to the FBI who can help social media companies stop foreign 
interference, recreating that, I think, is a super critical 
thing and only Congress has the ability to do that.
    Mr. Menendez. Got it.
    Just looking through the election system, how can we 
support our local election officials who face some of the same 
challenges small businesses do, in terms of fewer resources, 
but having the same challenge arrive at their doorstep?
    Mr. Stamos. Yes. I mean, so traditionally this has been the 
role of the election infrastructure ISAC and the multi-State 
ISACs in that, you know, unlike any other developed economy, we 
have 10,000 election officials who run our elections, and that 
does provide security benefits in that it would be extremely 
hard to steal the entire election because you have so many 
disparate systems, so many different ways of counting and such. 
But it also makes it much easier to cause chaos.
    So I think reaffirming CISA's role as a supporter here, and 
reaffirming the role of the ISACs as providing that level of 
support is a key thing because that's, again, something that's 
kind-of fallen apart since 2020.
    Mr. Menendez. Are there any other considerations that we in 
Congress should be thinking about as we go into 2024 with 
respect to election integrity?
    Mr. Stamos. I mean, I guess there's a bunch--I think the 
other thing that a number of people have proposed, a colleague 
of mine, Matt Masterson, came and wrote a report with us at 
Stanford on the things that he would do. So, I'm happy to send 
you a link to that. But there's been discussion of creating 
standards around, you know, what audits look like, what does 
transparency look like, and such. I think it would be nice to 
see the States with a push from the Federal Government to more 
aggressively mentally red-team their processes to see how does 
it look to people; that if you have rules that you're counting, 
you know, that it takes you 2 weeks here--in California, it 
takes us forever to count our ballots because of a bunch of 
different rules, and that makes people think the election is 
being stolen. It's not being stolen, but it's not fair. But you 
should set your policies with the expectation that people will 
take advantage of those kinds of situations to say the election 
is being stolen. So, I think doing a better job of setting up 
our rules to be very transparent, to make it clear to people 
this is how an audit works, is one of the things that we've got 
to think about going into 2024, so that when you have these 
things that seem a little weird, it does not create an 
opportunity for bad actors to try and imply that the entire 
election was rigged.
    Mr. Menendez. Appreciate it. Thank you so much.
    I yield back.
    Mr. Garbarino. The gentleman yields back.
    I now recognize myself for the last 5 minutes of questions.
    I'll start with Mr. Swanson. The EO directs DHS to 
establish an Artificial Intelligence Safety and Security Board. 
How can the Secretary best scope the composition and mission of 
the board? What kind of perspectives do you think the DHS 
should ensure are represented?
    Mr. Swanson. Yes, thank you for the question.
    I think for the composition of the board, it needs to be a 
board that technically understands that artificial intelligence 
is different than your typical software. That's first and 
foremost.
    The second part is, the actions of that board is we need to 
take an inventory. We need to understand where all of our 
machine learning models are, the lineage, the providence, how 
they're built, and only then do we have the visibility, the 
audibility to actually secure these.
    Mr. Garbarino. Mr. Stamos, quick question for you. Now I've 
lied, I'm not going to be the last person to speak.
    CISA's information-sharing mission is crucial. Do you think 
CISA has the tools it needs to be able to notify entities of 
potential AI threats? Is CISA's ability to issue administrative 
subpoenas sufficient?
    Mr. Stamos. So the administrative subpoena thing, my 
understanding, is mostly used for if you find vulnerabilities 
and you can't assign them to a specific--you know, here's an 
open port, and we think it's a dam, but we're not sure exactly 
who it is, that you can find out who that is.
    What I would like to see is, I think it would be great to 
follow on what Congress did of centralizing cyber incident 
reporting to some equivalent around AI incidents that--you 
know, effectively blame-free, regulatory-free, that, you know, 
you have a free--I would like to see a model more of what 
happens with aviation, where if there's a near-miss, you can 
report that to a system that NASA runs, and nobody is going to 
sue you. Nobody is going to take your license away. That 
information is used to inform the aviation safety system. I 
would love to see the same thing out of AI, and I don't think 
CISA has that capability right now.
    Mr. Garbarino. So subpoenas are useful, but something like 
CIRCIA would be--would deal with the----
    Mr. Stamos. I just feel like subpoenas are for a very 
specific thing. The key thing we need is we need defenders to 
work together, and right now, the lawyers don't let them. So 
finding out what those barriers are that make the lawyers give 
that advice and taking those barriers down, I think, is a good 
idea.
    Mr. Garbarino. Mr. O'Neill, I'm concerned about the use of 
AI further exacerbating the risk associated with the 
interdependencies across critical infrastructure sectors. Does 
your sector understand the risk associated with these 
interdependencies in AI? What are you doing to mitigate that 
risk? Is there more that CISA can do to help?
    Mr. O'Neill. Thank you for the question.
    You know, working for Hitachi, we work in multiple sectors, 
so we have a company focused on energy, a company focused on 
rail. The subsidiary I'm in is focused on critical 
infrastructure, like data storage and stuff, helping companies 
be more resilient, and so forth.
    What we're doing as a company is we're getting the people 
from all of the sectors together and along with our 
cybersecurity experts, and we're going through the use cases 
ourself in the absence of regulations to look and do threat 
modeling, and so forth, and look at the use cases so that we 
can help these critical sectors be more effective in 
protecting, you know, what they do.
    What was said earlier in regards to, you know, a mass event 
where the technology is unavailable and the critical sectors 
thus are unable to function, the thing that I think CISA could 
do, again, is helping bring some business acumen at looking at 
the problem of how to recover and what the mission is, and 
being able to deliver the mission of the critical 
infrastructure, maybe in the absence of the technology being 
available.
    When I worked at a health insurance company, one of the 
things we did is we approved people to get medical procedures, 
you know, in an emergency. So we went through scenario training 
that said if that technology fails, we're going to fail open, 
and we're going to approve all the requests come in and we'll 
sort it out later so no one would be denied care. That would be 
an example.
    Thank you.
    Mr. Garbarino. Thank you, Mr. O'Neill.
    Last, Mr. Swanson, how do you expect malicious actors will 
leverage AI to carry out cyber attacks? Do you think the 
efforts to use AI for cyber defense--and do you think the 
efforts to use AI for cyber defense will progress faster than 
efforts to use AI for offensive cyber operations?
    Mr. Swanson. Yes, that's a great question.
    I always think it's going to be a give-and-take here. It's 
going to be hard to stay one step in front of the attackers. 
What I will say is as long as we understand the foundation of 
how these things are built to protect our foundation, then 
we're going to be less at risk for these attacks. That's where 
the focus needs to be.
    Mr. Garbarino. Thank you very much. My time is up.
    I now recognize Ms. Jackson Lee from Texas, 5 minutes of 
questions.
    Ms. Jackson Lee. I thank you for yielding. Let me thank 
Ranking Member Swalwell for a very important hearing.
    I'm probably going to take a lot of time reading the 
transcript, having been delayed in my district, but I wanted to 
come in the room, first of all, to express my appreciation that 
this hearing is being held, because I have been in discussions 
in my district where I've heard, and media commentary, that 
Congress has no interest in regulating or understanding AI.
    I want to go on record for saying that we, as Members of 
Congress, have been engaged in a task force. I'm a member of 
the task force led by a bipartisan group of Members. I know 
that the Ranking Member and others, we have been discussing the 
cruciality of AI and how we play a role. It is not always good 
for Congress to say me, me, me. I'm here to regulate and not 
ensure that we have the right road map to go forward.
    So, Mr. Stamos, if I have asked you questions that have 
been asked and answered, forgive me. I'd like to hear them 
again. In particular, let me start by saying you mention in the 
last page of your testimony that it is important for policy 
makers to adopt nimble policies. This is something that I am 
very wedded to. I don't know if I'm right, but I'm very wedded 
to because AI is fluid. It is something today, it was something 
yesterday, and it will be something tomorrow and then the day 
after. But nimble policies and safeguards in collaboration with 
the private sector, how would you recommend we implement that? 
In that would you please use the word, should Congress--is 
there a space, a place for Congress to jump in and regulate? 
Again, this is a fluid technology that is moving faster than 
light, I would imagine. But let me yield to you.
    Mr. Stamos. Yes, Congresswoman. I mean, I think you had 
made a very good point about being flexible here. My suggestion 
on AI regulation is to do it as close to the people it's 
impacting as possible. So the people you can learn from what 
not to do here would be Europe, right, so with the----
    Ms. Jackson Lee. Would be?
    Mr. Stamos. The Europeans, in that the European Parliament 
believes that effectively every problem can be solved by 
regulating the right 5 American companies, right. The truth is 
with AI, is while it feels like 5 or 6 companies are dominating 
it, the truth is that the capabilities are actually much more 
spread out than you might tell from the press because of open 
source, like Mr. Swanson's been talking about, and just because 
of the fact that my Stanford students build generative AI 
models as upper division class projects now. That is just 
something they do in the spring to get a grade.
    So, what I would be thinking about is----
    Ms. Jackson Lee. These are students who have not yet become 
experts?
    Mr. Stamos. Right, right. But I'm saying, like, they go out 
into the workplace, and they don't necessarily work for an open 
AI or a Microsoft or a Google, they can go work for an 
insurance company, and the way that they will be building 
software for State Farm in the future is going to be based upon 
the basic skills that they've learned now, which includes a 
huge amount about AI.
    So, my suggestion is regulate the industries that have 
effect on people about the effects. The fact that it's AI or 
not, if an insurance company makes a discriminatory decision 
about somebody, it is the discriminatory decision that should 
be punished, not the fact that there's some model buried in it. 
I think that's--I think it's not going to be effective to try 
to go upstream to the fundamental models and foresee every 
possible use, but if it's misused in medical purposes, if a car 
kills somebody, if a plane crashes, we already have regulatory 
structures to focus on the actual effect on humans, not on the 
fact that AI was involved.
    Ms. Jackson Lee. So then how would you reach AI as--what 
would be Congress' reach to AI where Congress could say on 
behalf of the American people, we have our hands around this?
    Mr. Stamos. So where you could, in those cases, is--I think 
one of the things that's very confusing to people is where does 
liability accrue when something bad happens. Is it only at the 
end, or is there some liability upstream? So I think clarifying 
that is important.
    I do think, you know, like the EO said, having your hands 
around some of the really high-end models to make sure that, 
you know, they're still being developed in the United States, 
that there's appropriate protections about that intellectual 
property being protected, I think that's important. But there's 
just not--there's not a magical regulation you can pass at the 
top of the AI that's going to affect all of the possible bad 
things that happen at the bottom.
    Ms. Jackson Lee. Ms. Moore, let me quickly get to you about 
the deep state, or the utilization of AI in gross 
misrepresentation, being someone else fraudulently such, and 
dangerously such, that impacts individual lives, but also 
national security.
    Ms. Moore. I think that as Congress looks at AI in general 
and the fact of the matter being that AI has been in place for 
a very long time already, I think that the AI Human Bill of 
Rights, that sort-of outlines some of those areas where we've 
not necessarily given due care to individuals in terms of their 
ability to move within the world without all of the 
``algorithms'' making all of the decisions about them. I think 
that fair and trustworthiness is critically important in that 
industry has to regulate itself in that it really needs to 
explain how its models make decisions.
    I believe that the ability to prove that your AI and your 
model is not predatory is an important part of trustworthy AI. 
I think you have to start, as Alex said, where the individual 
is most impacted. There are a number of use cases. There have 
been tons of groups convened for the purpose of collecting this 
sort of data, and it shows up in that AI Bill of Rights. I 
think it's a good starting place to think about disparate 
impact. But it's not that the algorithms need to be regulated. 
It's the use cases.
    Ms. Jackson Lee. So it's if you have a height find, it's 
not the top. It's down at the ultimate impact.
    Let me thank the--I have many more questions, but let me 
thank you for this hearing and thank both the Chairman and the 
Ranking Member.
    With that, I yield back.
    I'll dig in even more. Thank you.
    Mr. Garbarino. Well, I want to thank--Mr. O'Neill, we don't 
know what those--that buzzing means either.
    I want to thank you all for the valuable testimony, and I 
want to thank the Members for their great questions. This has 
been the longest hearing that we've had this year, and it's 
because of your expertise on the panel, to the witnesses. So 
thank you all for being here.
    Before we end, I just want to take a point of personal 
privilege to thank Cara Mumford on my team here. This is her 
last--this is her last hearing as a member of the committee. 
She is--I don't think it's greener pastures, but she's moving 
on to a much nicer position and we will miss her dearly. This 
committee would not have been as successful this year without 
her, and I would not--I would not look like I would know what I 
am doing without her.
    So if we could all give her a round of applause.
    All right. So the Members of the subcommittee may have some 
additional questions for the witnesses, and we would ask the 
witnesses to respond to those in writing. Pursuant to the 
committee rule VII(D), the hearing record will be held open for 
10 days.
    Without objection, this subcommittee stands adjourned.
    [Whereupon, at 12:02 p.m., the subcommittee was adjourned.]



                            A P P E N D I X

                              ----------                              

      Questions From Chairman Andrew R. Garbarino for Ian Swanson
    Question 1a. I understand the AI EO placed a heavy emphasis on the 
usage of the NIST AI Risk Management Framework, including a requirement 
for DHS to incorporate the Framework into relevant guidelines for 
critical infrastructure owners and operators.
    In your opinion, is this Framework sufficient to ensure the secure 
usage of AI?
    Answer. Response was not received at the time of publication.
    Question 1b. How should the Federal Government work with the 
private sector to ensure any such frameworks evolve at pace with 
innovation of the technology?
    Answer. Response was not received at the time of publication.
    Question 2. I understand you work with CISA and other Government 
agencies already. Do you think the EO strikes the right balance of the 
breakdown of responsibility between DHS and CISA?
    Answer. Response was not received at the time of publication.
    Question 3. When we discuss the broad umbrella term AI, much of the 
conversation is focused on large language models. But AI encompasses 
much more than just large models, including small models and open-
source models, among other applications. In your opinion, how should 
CISA support critical infrastructure to protect against malicious use 
of small and open-source models? Do you think CISA has the personnel 
and resources necessary to address risks associated with the use of 
small and open-source models?
    Answer. Response was not received at the time of publication.
  Questions From Chairman Andrew R. Garbarino for Debbie Taylor Moore
    Question 1a. CISA's AI Roadmap briefly mentioned plans to develop 
an AI testbed, but I understand several other Federal departments/labs 
may have similar capabilities.
    What can CISA offer that is unique?
    Answer. As mentioned in my testimony, AI security needs to be 
embedded in all of the agency's work and we are pleased to see this 
reflected in CISA's Roadmap for AI.\1\ CISA is uniquely positioned to 
deliver on development of an AI testbed in three ways: First, it can 
leverage its critical information-sharing role within and among private 
industry, researchers, governments, emergency responders, intelligence, 
defense, and other communities to focus AI testbeds on emerging risks. 
Second, CISA seeks to have comprehensive observability and visibility 
into Federal departments' AI-enabled applications in order to 
prioritize, build, fund, and train testbed processes and programs for 
stakeholders. Third, it is the agency in the best position to 
understand use-case risks to critical infrastructure. Accordingly, CISA 
can offer actionable, comprehensive AI red-teaming, table-top 
exercises, workshops, and threat assessments to stakeholders. In order 
to be most effective, AI testbeds should have team members from 
relevant sectors. CISA teams and testbeds can focus on identifying 
potential harms or undesirable outcomes from AI models for the specific 
context and use case of AI in Government application and in critical 
infrastructure, and the development of AI tools that are effective at 
mitigating those risks.
---------------------------------------------------------------------------
    \1\ 2023-12-12-CIP-HRG-Testimony.pdf (house.gov).
---------------------------------------------------------------------------
    Question 1b. How can CISA leverage innovation from the private 
sector in utilizing AI for both offensive and defensive cyber 
capabilities?
    Answer. AI will enable cybersecurity defenders to better do their 
job. AI systems are bolstering existing security best practices 
regardless of critical infrastructure designation.
    AI can help to:
   Improve speed and efficiency that are fundamental to the 
        success of resource-constrained security teams. When AI is 
        built into security tools, cybersecurity professionals can 
        identify and address, at an accelerated rate, increasingly 
        sophisticated threats. For example, AI can be used to train 
        computers in the ``language of security'' to help augment 
        Security Operation Centers.
   Collect information, provide context, deliver insights, etc. 
        to help security analysts prioritize demands. For example, AI 
        for anomaly detection can help to identify patterns or actions 
        that look atypical and deserve further investigation.
   Automate functions to enhance the constrained workforce to 
        operate more efficiently. AI is helping to iron out and remove 
        security system complexity. For example, IBM's managed security 
        services team used AI to automate 70 percent of alert closures 
        and speed up their threat management time line by more than 50 
        percent within the first year of operation.
   Speed up recovery time and reduce costs of incidents. For 
        example, IBM's Cost of a Data Breach 2022 report found that 
        using AI was the single most effective tool for lowering the 
        cost of a data breach.
    CISA should also leverage the NIST AI Risk Management Framework,\2\ 
OWASP ``Top 10'' set of assets for security risks (e.g. web application 
security risks, LLMs),\3\ and MITRE ATLAS guidance \4\ to bolster 
existing security programs' use of vulnerability testing and threat 
intelligence capabilities especially as it pertains to AI. In recent 
years, many of the existing providers of security tools and platforms 
currently in use by Federal agencies have further developed and refined 
their capabilities to include leveraging AI/ML (Machine Learning). ML, 
a subset of AI, which identifies patterns, and can streamline high 
volumes of alerts, reducing complexity and prioritizing the most 
meaningful responses to events. Robust AI-generated remediation 
recommendations and summaries can assist in making it possible for even 
junior analysts to effectively respond. Many of these capabilities are 
available to agencies and may include tools and platforms in the 
following categories:
---------------------------------------------------------------------------
    \2\ AI Risk Management Framework/NIST.
    \3\ OWASP Top Ten/OWASP Foundation, OWASP-Top-10-for-LLMs-2023-
v05.pdf.
    \4\ MITRE/ATLAS.
---------------------------------------------------------------------------
   EDR (Endpoint Detection and Response)
   SIEM (Security Information and Event Management)
   TIP (Threat Intelligence platforms) to include customized 
        services derived from custom SOCs (Security Operation Centers) 
        and MSSP (Managed Security Service Providers).
    Unfortunately, in many instances, these newer AI-powered features 
and capabilities may not be ``turned on'', deployed, or implemented in 
organizations due to lack of awareness, training, or education.
    Additionally, CISA can work with private-sector and critical 
infrastructure traditional cybersecurity red teams on upskilling the 
participants to recognize the presence of adversarial AI and its 
associated threat actors.
    It is recommended that CISA strongly promotes the departmental and 
agency use of these existing ML capabilities mentioned above. Machine 
Learning (again, a subset of AI) can rapidly recognize patterns of 
attack, indicators of compromise, and facilitate organizations' sharing 
of threat intelligence broadly, while offering robust recommendations 
for response. The other benefit derived from ML-driven, automated 
security platforms is that they learn over time, and increase the 
recognition of adversarial trade craft and incrementally improve the 
overall program's response to known and zero-day attacks.
    Question 2. DHS is tasked with strengthening international 
partnerships on AI in the EO. As part of this effort, the Secretary is 
directed to develop a plan to prevent, respond to, and recover from 
potential cross-border critical infrastructure disruptions from AI use.
    What are some of the main themes you hope to see in that plan?
    Answer. With the advent of AI, governments must seize the 
opportunity to assess and build upon what is working, what needs to be 
improved, and what needs to be harmonized in order to provide clarity 
for organizations to comply and yet remain nimble to prepare and react 
to threat environment. DHS, and CISA specifically, can be that 
international convener to socialize concepts, operational plans, and 
get stakeholder agreement on outcomes.
    We have seen this in practice through various planning exercises 
and real-time engagements with CISA's Joint Cyber Defense Collaborative 
(JCDC) as well as CISA's capability to commit our international cyber 
partners on important guidance publications, like Secure by Design and 
Guidelines for Secure AI System Development. We would encourage CISA to 
build upon its international collaborations to enable a harmonized 
approach to things like incident reporting principles in the same 
manner as Secure by Design to spur a responsible approach nationally 
and globally.
    There will always be sectors of critical infrastructure that share 
more actionable intelligence than others. We would encourage DHS and 
CISA to set a goal of achieving international AI threat intelligence 
sharing which is comparable to the speed, intensity, and sophistication 
of global financial institution's sharing and dissemination of market 
intelligence today. Global Financial markets tolerate zero down time 
and are resilient when challenged or disrupted. The ability to fully 
recover operations post-attack, should also be the aspirational goal 
for global cooperation around resilience and threat intelligence 
sharing.
      Questions From Chairman Andrew R. Garbarino for Tim O'Neill
    Question 1a. How does AI for OT and AI for IT differ in your 
sector?
    How should CISA account for these differences in their Risk 
Assessments for your sector?
    Question 1b. How does the risk of AI usage in OT systems differ 
from AI usage in IT systems?
    Answer. AI for IT (Information Technology) and AI for OT 
(Operational Technology) refer to two distinct domains in the realm of 
technology, each with its own focus and applications.
    AI for IT has a primary focus on enhancing the performance, 
efficiency, security, and automation of IT-related tasks, 
infrastructure, and operations. Examples would be using an AI algorithm 
to predict potential failures or issues in servers, networks, or 
hardware components before they occur (predictive maintenance); 
implementing an AI-driven cybersecurity tool to detect and respond to 
cyber threats and breaches in real-time (security and threat 
detection); applying AI to streamline help desk operations, automate 
ticketing systems, and improve the overall efficiency of IT support (IT 
service management); or using AI algorithms to analyze network traffic 
patterns and optimize data flow for better performance reliability 
(network optimization).
    OT, also known also as ``cyber-physical,'' is technology and 
automation to control devices that have an effect on the physical 
world, including manufacturing machinery; energy generation, 
transmission, and distribution; transportation (including autonomous 
vehicles, traffic management, etc.); industrial and household building 
systems; and many others. The primary requirement for automation in 
these systems is the safety of operators and the public, and most 
industries have instituted a well-developed system of safety and 
quality complete with national and international standards and best 
practices. AI for OT has great promise to provide not only enhanced 
safety but also improved efficiency, uptime, and economics. Hitachi 
believes it is essential that AI follow the tradition in these 
industries of putting safety and quality first, observing good practice 
where it exists, and collaborating with stakeholders to create new 
practices where needed.
    While AI for both IT and OT aims to leverage artificial 
intelligence to enhance different technological domains, they have 
distinct focuses and applications tailored to the specific needs and 
challenges of their respective domains. Integration of AI in both 
domains, however, can lead to more holistic and efficient technological 
solutions in various industries.
    In both domains, the approach to risk assessment involves 
identifying potential threats, vulnerabilities, and impacts associated 
with AI implementation. However, the focus areas and the nature of 
risks differ due to the distinct characteristics of IT and OT 
environments. Evaluating these risks comprehensively and implementing 
appropriate mitigation strategies is crucial to ensure the safe and 
effective deployment of AI in both IT and OT contexts.
    When considering risk assessments in each area, the following AI 
applications are applicable:
AI for IT Risk Assessment
   Data Security and Privacy.--AI applications might handle 
        sensitive information, and there is a risk of data breaches or 
        unauthorized access if proper security measures are not in 
        place.
   Cybersecurity Threats.--Assessing risks related to cyber 
        threats, including malware, phishing attacks, ransomware, and 
        vulnerabilities in AI-powered systems. AI may also be used to 
        detect and mitigate these threats.
   System Downtime and Reliability.--Risk assessment involves 
        evaluating the potential impact of AI failures or system down 
        time on IT infrastructure, services, and operations. Predictive 
        maintenance and monitoring help mitigate these risks.
   Regulatory Compliance.--Assessing risks related to non-
        compliance with data protection regulations (like GDPR, HIPAA) 
        or industry standards. Ensuring AI applications adhere to legal 
        and ethical guidelines is crucial.
AI for OT Risk Assessment
   Safety and Physical Hazards.--In OT, risk assessment often 
        focuses on the physical safety of workers and the environment. 
        AI deployment in industrial settings can pose risks if not 
        managed properly, such as machinery malfunctions or accidents.
   Operational Disruptions.--Evaluating risks associated with 
        disruptions in critical operations due to AI failures or 
        misconfigurations. Ensuring continuous operation and resilience 
        of AI-driven systems in industrial processes is essential.
   Supply Chain Risks.--Assessing risks related to the supply 
        chain, including potential vulnerabilities in AI-powered 
        systems used in production, transportation, or logistics.
   Regulatory and Compliance Challenges.--Similar to IT, OT 
        also faces regulatory challenges specific to industrial 
        standards and safety regulations. Risk assessment involves 
        ensuring compliance and adherence to these standards while 
        deploying AI.
    CISA, relying on NIST's AI Risk Management Framework, is in a 
perfect place to highlight where critical manufacturing needs to have 
more focus on possible cyber intrusions. This is important when CISA 
educates and works with critical manufacturers to help them understand 
the life cycle of AI systems and the need to validate the systems to 
avoid mission drift over time. For instance, making sure a manufacturer 
installing robotic systems to assist human production would be aided by 
CISA highlighting AI risk assessment needs related to safety of workers 
in proximity to the machines and possible vulnerabilities within the 
supply chain that might warrant higher scrutiny of an operating system 
for the robotic machine. CISA, as the agency leading the identification 
of possible vulnerabilities, can issue guidance and warnings when cyber 
vulnerabilities may be present and help manufacturers walk through 
mitigation steps to assist in quickly identifying any potential attacks 
on the systems. They can also encourage manufacturers to take steps 
that create redundancy in their operating systems so ransomware attacks 
are less impactful. One mitigation technique is system data backups 
stored in cloud applications, so if a ransomware attack occurs, the 
manufacturer can access this backup data instantly.
    To account for regulatory challenges, CISA must work collectively 
across the Federal Government. Understanding what various agencies are 
doing will allow CISA to augment those actions when appropriate, or be 
more prescriptive if a gap exists that CISA needs to fill with its own 
framework, testing, or guidance requirement.
    Question 2. You mentioned the Secure Software Development 
Framework: a key premise of that framework is to ensure a secure supply 
chain for software developed by industry. What should the Government 
consider as it applies this framework to AI applications specifically?
    Answer. The life cycle of OT and IT products is very important for 
the Government to understand so it can subsequently educate companies 
on the matter. OT AI applications have longer life cycles--thus, long-
term management of Software Bills of Materials (SBoMs) will need more 
updates for on-going maintenance. The main issue for long-term OT AI 
products is the potential for shortages or unavailable maintenance 
materials, and thus SBoMs are very important to help address on-going 
maintenance needs.
    In addition, understanding data provenance and data lineage 
tracking is important when it comes to the origin of a data asset and 
how it has been modified and manipulated over time, thus protecting 
against the introduction of bias into a system or manipulation by a bad 
actor. Similarly, AI models and the training/test/validation data used 
for hyperparameter tuning of the models and its associated lineage is 
important to track and helps understand the actors involved in the 
model development and retraining/evolution. Following the SSDF (and 
CISA working to promote its use) provides industries insight regarding 
where to assess, or monitor, possible interference by malicious actors.
    Question 3. Your testimony speaks about predictive maintenance 
solutions. Can you please elaborate how this is considered an AI 
application and how the Government should think about securing 
solutions like this?
    Answer. Predictive maintenance solutions at their very core rely on 
artificial intelligence to predict, with high levels of certainty, 
maintenance and future impacts. Predictive maintenance solutions are 
based on predictive analytics--machine learning (ML) algorithms. The ML 
algorithms are tuned to learn characteristics of the training (often 
including historical) data and formulate patterns and rules that can be 
applied to newer data and enable automation. In predictive maintenance, 
the ML algorithms learn the factors that can cause vessels, vehicles, 
and equipment to fail and monitor similar assets. Then, in the case of 
an issue, the piece of equipment, vehicle, etc. can be proactively sent 
for maintenance without impacting overall operations and associated 
efficiency.
    Predictive maintenance solutions ensure a higher level of 
availability for OT and IT technology and should not be overly 
restrictive in approaches to regulate. Security of solutions should 
follow existing security frameworks, as we've recommended for other AI 
applications, rather than exclusive regulatory frameworks targeted for 
predictive maintenance applications.
    Question 4. CISA is the SRMA for the critical manufacturing sector. 
How is CISA's relationship with the sector--what's working/not working 
and what needs to be improved with the SRMA structure as the White 
House revises PPD-21 and as AI gets added to the list of SRMA 
responsibilities?
    Answer. From Hitachi's standpoint, we appreciate CISA's on-going 
outreach campaign. As we stated in our testimony, CISA's stakeholder 
engagement is very proactive and productive. We would encourage other 
agencies to follow that same example and emulate this outreach method. 
CISA's threat identification and information sharing is of great help 
to all industries and we cannot speak highly enough of that work.
    CISA should continue to foster collaboration with the critical 
manufacturing sector via regular meetings, workshops, and joint 
exercises. This will help with the timely sharing of threat 
intelligence, vulnerabilities, and best practices. It can also allow 
CISA to educate small and medium-sized (SMEs) manufacturers on the role 
NIST's AI Risk Management Framework plays in the adoption of Industrial 
AI solutions, as well as promote good cybersecurity processes, that 
will help improve the overall identification of possible 
vulnerabilities, recognition of any type of cyber attack or potential 
AI mission drift. It also allows for better response and recovery from 
any cyber crime purported against a manufacturer.
    As AI becomes more prevalent in the manufacturing sector, it is 
crucial to update SRMA structures to address new cybersecurity 
challenges associated with AI technologies. We would encourage CISA to 
leverage social media as a core communication channel for faster, 
timely, and more effective information sharing. It is also very 
important for CISA to learn from SMEs, especially when it comes to the 
ways they need threat information conveyed to them so that they are 
receiving timely and current information, and how to mitigate 
vulnerabilities. This will help enhance their cybersecurity hygiene.
    For critical manufacturers, CISA must partner with other agencies 
to help conduct regular risk assessments that identify evolving threats 
and vulnerabilities within the critical manufacturing sector. CISA is 
the premier agency when it comes to cybersecurity; other agencies are 
well-known for identification of vulnerabilities or threats within the 
end-users of manufactured products. By working together, CISA and these 
other agencies can reduce the amount of testing and red-teaming to 
avoid duplication of these actions that might hinder implementation of 
AI technologies designed to protect or strengthen response actions.
    In some instances, CISA is already doing this, and we applaud those 
efforts.
Conclusion
    Hitachi appreciates the continued exploration of how AI will bring 
new growth opportunities for U.S. businesses, and the important role AI 
can play in cybersecurity.
      Questions From Chairman Andrew R. Garbarino for Alex Stamos
    Question 1a. I understand the AI EO placed a heavy emphasis on the 
usage of the NIST AI Risk Management Framework, including a requirement 
for DHS to incorporate the Framework into relevant guidelines for 
critical infrastructure owners and operators.
    In your opinion, is this Framework sufficient to ensure the secure 
usage of AI?
    Answer. Response was not received at the time of publication.
    Question 1b. How should the Federal Government work with the 
private sector to ensure any such frameworks evolve at pace with 
innovation of the technology?
    Answer. Response was not received at the time of publication.
    Question 2. In your written testimony, you said that we can't put 
the AI genie back in the bottle, so bad actors may be putting these 
technologies to use for nefarious purposes. How can we use AI on the 
defensive side? What are the security benefits of using AI for 
defensive purposes?
    Answer. Response was not received at the time of publication.
    Question 3. How can AI be useful in dealing with the severe cyber 
workforce shortage we're contending with? Can you explain how these 
tools may supplement the cybersecurity workforce?
    Answer. Response was not received at the time of publication.

                                 [all]