The Artificial Intelligence Safety and Security Board identified how each layer of the AI supply chain can ensure that AI is deployed safely and securely.
The US Department of Homeland Security (DHS) has released a set of recommendations for the safe and secure development and deployment of artificial intelligence (AI) in critical infrastructure.
The Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure is a first-of-its kind resource developed for each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators as well as the civil society and public sector entities.
The Artificial Intelligence Safety and Security Board, a public-private advisory committee established by DHS Secretary Alejandro Mayorkas, identified the need for clear guidance on how each layer of the AI supply chain can do their part to ensure that AI is deployed safely and securely in US critical infrastructure.
It is the culmination of considerable dialogue and debate among the board, composed of AI leaders representing industry, academia, civil society, and the public sector. The report complements other work carried out by the Administration on AI safety, such as the guidance from the AI Safety Institute, on managing a wide range of misuse and accident risks.
“The choices organisations and individuals involved in creating AI make today will determine the impact this technology will have in our critical infrastructure tomorrow”
“AI offers a once-in-a-generation opportunity to improve the strength and resilience of US critical infrastructure, and we must seize it while minimising its potential harms. The framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and more,” said secretary Mayorkas. “The choices organisations and individuals involved in creating AI make today will determine the impact this technology will have in our critical infrastructure tomorrow.
“I urge every executive, developer, and elected official to adopt and use this framework to help build a safer future for all.”
DHS said if adopted and implemented by the stakeholders involved in the development, use, and deployment of AI in US critical infrastructure, this voluntary framework will enhance the harmonisation of and help operationalise safety and security practices, improve the delivery of critical services, enhance trust and transparency among entities, protect civil rights and civil liberties, and advance AI safety and security research that will further enable critical infrastructure to deploy emerging technology responsibly. Despite the growing importance of this technology to critical infrastructure, no comprehensive regulation currently exists.
DHS identified three primary categories of AI safety and security vulnerabilities in critical infrastructure: attacks using AI, attacks targeting AI systems, and design and implementation failures. To address these vulnerabilities, the framework recommends actions directed to each of the key stakeholders supporting the development and deployment of AI in US critical infrastructure as follows:
Cloud and compute infrastructure providers play an important role in securing the environments used to develop and deploy AI in critical infrastructure, from vetting hardware and software suppliers to instituting strong access management and protecting the physical security of data centers powering AI systems. The framework encourages them to support customers and processes further downstream of AI development by monitoring for anomalous activity and establishing clear pathways to report suspicious and harmful activities.
AI developers develop, train, and/or enable critical infrastructure to access AI models, often through software tools or specific applications. The framework recommends that AI developers adopt a secure-by-design approach, evaluate dangerous capabilities of AI models, and ensure model alignment with human-centric values. The Framework further encourages AI developers to implement strong privacy practices; conduct evaluations that test for possible biases, failure modes, and vulnerabilities; and support independent assessments for models that present heightened risks to critical infrastructure systems and their consumers.
Critical infrastructure owners and operators manage the secure operations and maintenance of key systems, which increasingly rely on AI to reduce costs, improve reliability and boost efficiency. They are looking to procure, configure, and deploy AI in a manner that protects the safety and security of their systems. The framework recommends a number of practices focused on the deployment-level of AI systems, to include maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data when fine-tuning AI products, and providing meaningful transparency regarding their use of AI to provide goods, services, or benefits to the public. It encourages critical infrastructure entities to play an active role in monitoring the performance of these AI systems and share results with AI developers and researchers to help them better understand the relationship between model behaviour and real-world outcomes.
Civil society, including universities, research institutions, and consumer advocates engaged on issues of AI safety and security, are critical to measuring and improving the impact of AI on individuals and communities. The framework encourages civil society’s continued engagement on standards development alongside government and industry, as well as research on AI evaluations that considers critical infrastructure use cases. It envisions an active role for civil society in informing the values and safeguards that will shape AI system development and deployment in essential services.
Public sector entities, including federal, state, local, tribal, and territorial governments, are essential to the responsible adoption of AI in critical infrastructure, from supporting the use of this technology to improve public services to advancing standards of practice for AI safety and security through statutory and regulatory action. The US is a world leader in AI; accordingly, the framework encourages continued cooperation between the federal government and international partners to protect all global citizens, as well as collaboration across all levels of government to fund and support efforts to advance foundational research on AI safety and security.
The framework is designed to help address these concerns and complements and advances existing guidance and analysis from the White House, the AI Safety Institute, the Cybersecurity and Infrastructure Security Agency, and other federal partners.
Mayor of Seattle Bruce Harrell was invited to participate in the board as part of his longstanding leadership in the field, including serving as chair of the US Conference of Mayors Standing Committee on Technology and Innovation.
“Artificial intelligence has incredible potential to create efficiencies and innovations, and this framework takes a thoughtful approach to balancing those opportunities with the risks and challenges it creates,” said Harrell.
“Partnership between the public and private sectors will be critical as we work to incorporate these advances into infrastructure and services while also taking steps to mitigate potential harm. This framework represents an important step towards fostering accountability, safety, and security while embracing this technology and the future.”
Seattle released its own generative artificial intelligence policy for responsible use AI in November 2023. The City’s policy aligns with the DHS framework in defining principles for AI use that emphasise innovation, accountability, and transparency.
Why not try these links to see what our SmartCitiesWorld AI can tell you.
(Please note this is an experimental service)
How can cloud providers enhance AI security in critical infrastructure?What secure-by-design practices should AI developers implement for safety?How can critical infrastructure operators monitor AI system performance effectively?In what ways can civil society influence AI safety standards development?How can public sector collaboration improve AI safety research and regulation?