The Ministry of Defense (MoD) has unveiled its defense artificial intelligence strategy outlining how the UK will work closely with the private sector to prioritize research, development and experimentation in artificial intelligence (AI) to “revolutionize the capabilities of our armed forces”.
Released on June 15, 2022 at London Tech Week’s AI Summit, the strategy aims to make the MoD “the most effective, efficient, trusted and influential defense organization for our size” in AI.
The four main objectives of the strategy are: to transform the MoD into an AI-ready organisation; adopt and leverage AI at pace and scale for defense advantage; strengthen the UK defense and security AI ecosystem; and shape global AI developments to promote security, stability and democratic values.
A policy document, Ambitious, safe and responsible: our approach to the delivery of AI-enabled capability in defence, developed in partnership with the government’s Centre for Data Ethics and Innovation (CDEI), was released alongside the strategy, setting out five principles to promote the ethical development and deployment of AI by the military.
These principles are human-centricity, responsibility, understanding, bias and harm mitigation, and reliability.
The MoD previously released a Data Strategy for Defence on September 27, 2021, which outlined how the organization would ensure data is treated as a “strategic asset, second only to people,” as well as how it would enable this to happen at pace and scale.
“We intend to fully leverage AI to revolutionize all aspects of MoD operations, from improved precision-guided munitions and multi-domain command and control to machine-speed intelligence analysis, logistics and resource management,” said MoD Second Permanent Secretary Laurence Lee in a blog released ahead of the AI Summit, adding that the UK government intends to work closely with the private sector to secure investment and drive innovation.
“For the MoD to maintain our technological edge over potential adversaries, we must partner with industry and accelerate the rate at which AI solutions can be adopted and deployed across defence.
“To realize these partnerships, the MoD will establish a new AI Network for Defense and National Security, clearly communicating our requirements, intentions and expectations and enabling engagement at all levels. We will establish an industry engagement team in the Defense AI Center [DAIC] to enable better understanding of, and better responsiveness to, the AI sector. It will also foster the best and brightest talent and the exchange of expertise between defense and industry.”
According to the strategy, overall strategic coherence will be managed by the Defense AI and Autonomy Unit (DAU) and DAIC, which will set policy frameworks and act as the focal point for AI research and development (R&D).
It added that the MoD will also create a Head of AI Profession role, which will sit within the DAIC and be responsible for developing a competency framework, as well as recruitment and retention offers.
The DAIC will also lead the delivery of an engagement and exchange function to “encourage seamless exchange between the MoD, academia, and the technology sector”.
It added that, through secondments and placements, the MoD will bring in “talented private sector AI leaders tasked with driving high-risk innovation and driving cultural change; create opportunities for external experts to support policy development; and develop programs that enable defense leaders to gain experience in the technology sector”.
UK Defense Secretary Ben Wallace, writing in the foreword to the strategy, claimed that artificial intelligence technologies were key to modernizing defence, and further described various concepts that the MoD will explore through its R&D efforts and engagement with industry.
“Imagine a soldier on the front line, trained in highly developed synthetic environments, guided by wearable command and control devices analyzing and recommending different courses of action, powered by a database capturing and processing the latest information from hundreds of small drones capturing thousands of hours of footage,” he said.
“Imagine autonomous resupply systems and combat vehicles, delivering supplies and effects more efficiently without putting our people at risk. Imagine the latest directed energy weapons using lightning-fast target detection algorithms to protect our ships, and the digital backbone that supports it all using AI to identify and defend against cyber threats.
Wallace added that he also recognizes the “deep issues” raised by the use of AI by a military organization: “We take them very seriously – but think for a moment how many AI-enabled devices you have at home and ask yourself if we shouldn’t use the same technology to defend ourselves and our values.
“We must be ambitious in our pursuit of strategic and operational advantage through AI, while upholding the norms, values and standards of the society we serve, and demonstrating reliability.”
Lethal Autonomous Weapons Systems
Regarding the use of Lethal Autonomous Weapons Systems (LAWS), the strategy affirmed that the UK is “deeply committed to multilateralism” and will therefore continue to engage with the UN Convention on Certain Conventional Weapons (CCW).
“CCW discussions will remain at the heart of our efforts to shape international norms and standards, as will our support for wider government in forums such as the Global Partnership for Artificial Intelligence and the Council of Europe,” it said.
“Our immediate challenge, working closely with allies and partners, is to ensure that ethical issues, related trust issues and the associated apparatus of policies, processes and doctrine do not impede our legitimate, responsible and ethical development of AI, as well as our collaboration and interoperability efforts.”
This was the only explicit mention of LAWS in the entire 72-page strategy document.
In a Lords debate in November 2021, defense minister Annabel Goldie declined to rule out the use of LAWS, but said the UK would not deploy such systems without human oversight.
Asked about the government’s position on the CCW discussions at the time, Goldie added that there was no consensus on regulating LAWS: “The UK and our partners are not convinced by the calls for a new binding instrument. International humanitarian law provides a strong, principled framework for regulating the deployment and use of weapons.”
In response, Liberal Democrat digital spokesman Timothy Clement-Jones said the stance put the UK “at odds with almost 70 countries and thousands of scientists in its reluctance to rule out lethal autonomous weapons”.
The Campaign to Stop Killer Robots, a global civil society coalition of more than 180 organizations, has called for legally binding instruments to ban or restrict LAWS since its launch in 2013, and argues that the use of force must remain fully under human control.
“Killer robots change the relationship between people and technology by handing life or death decision-making to machines. They challenge human control over the use of force, and when they target people, they dehumanize us – reducing us to data points,” the coalition says on its website.
“But technologies are designed and created by people. We have a responsibility to draw boundaries between what is acceptable and what is not. We have the ability to do this, to protect our humanity, and to ensure that the society we live in, that we continue to build, is one in which human life is valued – not quantified.”
NATO strategy on AI
In October 2021, the North Atlantic Treaty Organization (NATO) released its own AI strategy, which outlined how the military alliance, of which the UK is a founding member, will approach the development and deployment of AI technologies.
Speaking at the AI Summit on June 16, 2022 about transforming the data-driven organization, NATO AI and data policy manager Nikos Loutas said that the four main objectives of the strategy were to promote the responsible use of AI; accelerate and generalize its use; protect and monitor the use of AI, as well as NATO’s innovation capacity; and to identify and guard against the use of malicious AI by state and non-state actors.
“What we also see is that artificial intelligence and data are also going to provide the foundation for a number of other emerging technologies within the alliance, including autonomy, quantum computing, biotechnology, you name it – so there’s also an element of building the foundation for others to work on,” Loutas said.
He added that NATO has already identified a range of use cases at different levels of maturity and is actively working with “industry, allies and partner nations” to develop them further.
“Some are purely experimental, some are for capability development, it’s all there, but what’s important is that all of these use cases meet the specific needs and specific operational priorities of the alliance,” he said.