Apple Joins White House AI Security Commitment
The company has committed to AI safety and security measures and to industry collaboration.
Teralyn Whipple
July 26, 2024 – The White House announced Friday that Apple has signed onto a set of voluntary commitments intended to ensure America leads in both the innovation and the risk management of artificial intelligence.
Apple’s commitment comes a year after the White House secured commitments from seven leading AI companies including Microsoft, Google, Meta and OpenAI. In September, eight more firms, including Adobe, IBM and Nvidia, signed on. The commitments seek to uphold key principles that the White House believes are “fundamental to the future of AI,” namely safety, security and trust.
The companies committed to ensuring their products are safe before introducing them to the public by putting AI systems through internal and external security testing prior to release. That testing, carried out in part by independent experts, is meant to guard against the most significant AI risks, such as those to biosecurity and cybersecurity. The commitment also includes sharing information on best practices for AI safety, attempts to circumvent safeguards, and technical collaboration across the industry and with governments and academia.
Furthermore, the companies committed to putting security first by investing in cybersecurity safeguards and facilitating third-party discovery and reporting of vulnerabilities in AI systems.
Finally, the companies committed to earning the public’s trust by developing robust technical mechanisms that let users know when content is AI-generated, reducing the risk of fraud and deception. The companies will also publicly report their AI systems’ capabilities, limitations, and areas of appropriate use, including how the systems address bias and fairness. They further pledged to prioritize research on the societal risks AI systems can pose, and to develop and deploy advanced AI systems to help address society’s greatest challenges.
Apple’s commitment follows an executive order signed by President Joe Biden in October 2023 that established new standards for AI safety and security, protections for Americans’ privacy, and measures to promote innovation and competition. The order requires developers of the most powerful AI systems to share safety test results and other critical information with the federal government, and it instructs the National Institute of Standards and Technology to set “rigorous standards for extensive red-team testing to ensure safety before public release.”
The executive order also launched a government-wide AI talent surge that is bringing hundreds of AI and AI-enabling professionals into government, the White House said. Those professionals are working to increase AI capacity across federal agencies for both national security and non-national security missions.
"Federal agencies reported that they completed all of the 270-day actions in the Executive Order on schedule, following their on-time completion of every other task required to date," reported the White House Friday.