
A UK Drone Lawyer's Perspective

By admin

Jan 8, 2025


On the 7th of January 2025, The Guardian published an article highlighting the British AI consultancy Faculty AI’s involvement in the development of drone technology for defence clients, prompting renewed questions about where legal, ethical, and regulatory boundaries should lie for AI-driven military applications.

Faculty AI, already prominent for its work with various UK government departments (including the NHS and the Department for Education) and advisory services for the AI Safety Institute (AISI), has reportedly developed and deployed AI models on unmanned aerial vehicles (UAVs) for military purposes. Although it remains unclear whether these drones are intended for lethal operations, the revelations have amplified concerns about how best to regulate or restrict the use of AI in weapon systems.

Below, I explore the key legal issues and examine how the recently adopted EU AI Act, as well as the evolving UK regulatory framework, may shape the future of this sector.

1. Faculty AI's Defence Work: A Brief Overview

1.1 Government and Public Sector Ties

Faculty AI, known for its work with the Vote Leave campaign in 2016, was later engaged by Dominic Cummings to provide data analytics during the pandemic. Since then, it has won multiple government contracts worth at least £26.6m, extending its work into healthcare (via the NHS), education, and policy consulting with the AISI on frontier AI safety.

1.2 UAV Development

The Guardian reports that Faculty AI has experience in deploying AI models on UAVs. Its partner firm, Hadean, indicated that the two companies collaborated on subject identification, tracking moving objects, and exploring swarm deployment. While Faculty states that it aims to create "safer, more robust solutions", it remains undisclosed whether these drones might be capable of lethal autonomous targeting.

2. The EU AI Act: A New Regulatory Milestone

2.1 Status of the EU AI Act

Proposed by the European Commission in 2021, the EU AI Act completed the EU's legislative process in 2024 and is now in force as a binding regulation (Regulation (EU) 2024/1689) designed to harmonise AI rules across all Member States, with its obligations phasing in over several years. Although the UK is no longer part of the EU, any UK-based company offering AI products or services within the EU must ensure compliance with the regulation's requirements.

2.2 Risk-Tiered Framework

The EU AI Act operates on a tiered risk basis:

• Unacceptable risk: Certain AI applications (e.g., social scoring) are banned outright.
• High risk: This category includes AI used in critical infrastructure, healthcare, and, potentially, defence-adjacent or dual-use systems that could significantly affect people's safety or fundamental rights. Such systems must meet strict transparency, oversight, and data governance requirements.
• Limited or minimal risk: These uses are subject to fewer obligations, generally focused on transparency (e.g., disclosing AI usage to end users).

For high-risk AI, the Act demands robust human oversight, thorough documentation, and strict compliance obligations, particularly around accountability and the prevention of harm.

2.3 Potential Impact on Military Drones

The Act expressly excludes AI systems used exclusively for military purposes from its scope, and national security and defence remain largely the prerogative of individual EU Member States. Even so, the Act's principles can still influence how companies and governments approach the development of autonomous or semi-autonomous drones. Key considerations include:

• Transparent Data and Design: Documenting data sets, development processes, and operational parameters.
• Human in the Loop: Ensuring a human operator is always able to override or intervene in the AI's decision-making (a minimal sketch of this pattern follows the list). Related models, such as "human on the loop" (supervisory oversight) and "human out of the loop" (full autonomy), also feature in this debate.
• Liability and Penalties: Breaches can incur hefty fines, up to €35m or 7% of global annual turnover for the most serious infringements, acting as a significant deterrent against unethical or unlawful AI deployment.
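To make the "human in the loop" requirement concrete, here is a minimal, purely illustrative Python sketch of an override gate keyed to the Act's risk tiers. Nothing here reflects Faculty AI's actual systems or the legal text of the Act; every name (RiskTier, human_in_the_loop_gate, and so on) is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class RiskTier(Enum):
    UNACCEPTABLE = auto()  # banned outright under the Act (e.g. social scoring)
    HIGH = auto()          # strict oversight and documentation duties
    LIMITED = auto()       # mainly transparency obligations
    MINIMAL = auto()       # few obligations

@dataclass
class EngagementDecision:
    target_id: str
    confidence: float      # the model's confidence in its classification
    rationale: str         # recorded to satisfy documentation duties

def log_for_audit(decision: EngagementDecision) -> None:
    # Stand-in for the record-keeping a high-risk system would need.
    print(f"AUDIT {decision.target_id}: confidence={decision.confidence:.2f} "
          f"({decision.rationale})")

def human_in_the_loop_gate(
    decision: EngagementDecision,
    tier: RiskTier,
    operator_confirms: Callable[[EngagementDecision], bool],
) -> bool:
    """Return True only if the proposed action may proceed."""
    if tier is RiskTier.UNACCEPTABLE:
        raise PermissionError("This use case is prohibited outright.")
    if tier is RiskTier.HIGH:
        # Human in the loop: the system recommends; a person decides.
        log_for_audit(decision)
        return operator_confirms(decision)
    return True  # lower tiers proceed, subject to transparency duties

# Example: the gate refuses to act unless the operator explicitly approves.
approved = human_in_the_loop_gate(
    EngagementDecision("track-042", 0.91, "matched mission criteria"),
    RiskTier.HIGH,
    operator_confirms=lambda d: False,  # a real system would prompt a human
)
print("approved:", approved)
```

The design point, under these assumptions, is that a high-risk system can only recommend an action: the default answer is "no", and the human confirmation, not the model's confidence score, is what authorises anything to happen.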
3. The UK's Approach to AI Regulation and Military Drones

3.1 Divergence from the EU?

Post-Brexit, the UK has chosen a "pro-innovation" approach to AI regulation. Rather than adopting a single, all-encompassing statute akin to the EU AI Act, the UK is implementing a sector-by-sector, risk-based strategy guided by existing regulators such as the Information Commissioner's Office and the Competition and Markets Authority.

3.2 AI Safety Institute (AISI)

Established under former Prime Minister Rishi Sunak in 2023, the AISI focuses on frontier AI safety research. Faculty AI's role in testing large language models and advising the AISI on threats such as disinformation and system security places the company in a key position to influence UK policy. Critics argue that this creates potential conflicts of interest if the same organisation is also developing AI for military use.

3.3 House of Lords Recommendations

In 2023, a House of Lords committee urged the UK Government to clarify how International Humanitarian Law (IHL) applies to lethal drone strikes and to work towards an international agreement limiting or banning fully autonomous weapons systems. The Government's response acknowledged the importance of maintaining "human control" in critical decisions but did not enact binding legislation banning lethal autonomous drones outright.

4. Legal and Ethical Concerns for AI-Enabled Drones

4.1 International Humanitarian Law (IHL)

IHL principles of distinction (separating combatants from civilians) and proportionality (limiting harm relative to military objectives) are central to discussions of AI-driven drones. Fully autonomous UAVs, capable of selecting and engaging targets without human intervention, raise profound legal questions about accountability, particularly if biases or system errors result in wrongful casualties.

4.2 Allocation of Liability

Traditionally, accountability in military operations lies with commanders and operators. With increasingly autonomous systems, however, liability could extend to technology developers, programmers, or even the purchaser of the system. Clarifying how legal responsibilities are distributed may become a focal point for future litigation and regulatory reform.

4.3 Export Controls

Companies like Faculty AI must also comply with arms-export rules when providing AI targeting systems or related software to foreign entities. In the UK, export licences for military-grade technology are subject to domestic legislation and international protocols, such as the Wassenaar Arrangement on dual-use goods and technologies.

5. Looking Ahead: Balancing Innovation, Safety, and Accountability

5.1 Stronger National Frameworks

Although the UK favours a pro-innovation stance, there is growing pressure from Parliament and civil society for more rigorous, enforceable rules on potentially lethal AI applications. The EU AI Act may serve as a reference point if the UK considers stricter domestic regulation.

5.2 International Collaboration

Calls for global agreements, whether treaties or non-binding accords, to prohibit fully autonomous weapons continue to gain momentum. The House of Lords committee specifically recommended international engagement to ensure that lethal force remains under human control.

5.3 Corporate Accountability

Organisations operating at the intersection of commercial defence contracts and government policy, such as Faculty AI, need transparent internal processes and robust ethics boards to mitigate conflicts of interest. Demonstrating genuine corporate responsibility will be vital for maintaining public trust.

5.4 Ethical and Safety Audits

As AI becomes more embedded in defence, mandatory ethical and safety audits may become standard practice. These would scrutinise algorithmic fairness, training data, and how effectively systems identify and mitigate unintended harms (a schematic example follows below).
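As a schematic illustration of what such an audit gate might check, consider the following Python sketch. The thresholds, metric names, and pass/fail logic are invented for the example; a real audit regime would be defined by regulation and expert review, not a script.

```python
from dataclasses import dataclass

@dataclass
class AuditThresholds:
    max_false_positive_rate: float  # e.g. wrongly flagging a civilian object
    min_recall: float               # failing to detect what it should
    max_subgroup_gap: float         # fairness: worst-to-best group disparity

def audit_model(metrics: dict, limits: AuditThresholds) -> list:
    """Return a list of audit failures; an empty list means the gate passes."""
    failures = []
    if metrics["false_positive_rate"] > limits.max_false_positive_rate:
        failures.append("false-positive rate exceeds the permitted ceiling")
    if metrics["recall"] < limits.min_recall:
        failures.append("recall falls below the required floor")
    if metrics["subgroup_gap"] > limits.max_subgroup_gap:
        failures.append("performance gap across subgroups is too wide")
    return failures

# Hypothetical evaluation results from a held-out test set.
findings = audit_model(
    {"false_positive_rate": 0.04, "recall": 0.92, "subgroup_gap": 0.08},
    AuditThresholds(max_false_positive_rate=0.01, min_recall=0.95,
                    max_subgroup_gap=0.05),
)
for finding in findings:
    print("FAIL:", finding)
```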
6. Conclusion

Faculty AI's role in developing AI for military drones underscores how high the stakes are when cutting-edge technology meets defence applications. With the EU AI Act now in force as a binding regulation, Europe has provided a blueprint for tighter control over "high-risk" AI systems. In contrast, the UK's approach still offers substantial flexibility for companies, potentially raising both legal and ethical concerns around autonomy, accountability, and conflicts of interest.

From an IHL standpoint, keeping a human responsible for any life-and-death decision is imperative. As a UK drone lawyer, I urge policymakers, regulators, and industry stakeholders to keep asking: where do we draw the line between legitimate defensive innovation and an unacceptable risk to civilians? Only by establishing clear, enforceable legal standards, anchored in international law and ethical scrutiny, can we ensure AI-powered drones serve to protect rather than endanger fundamental human values.

Bio – Richard Ryan, UK Drone Lawyer

Richard Ryan is a UK-based drone lawyer specialising in the regulatory, ethical, and commercial aspects of unmanned aerial vehicles (UAVs) and artificial intelligence (AI). In a series of blog posts, Richard Ryan has explored critical issues such as the EU AI Act, the UK's evolving "pro-innovation" regulatory landscape, and the legal considerations surrounding military drones and lethal autonomous weapons systems.

Drawing on extensive experience advising government bodies, technology companies, and public institutions, Richard Ryan brings a deep understanding of how international humanitarian law (IHL), export controls, and data protection obligations intersect in modern drone operations. His writing emphasises the importance of maintaining human oversight in AI-driven systems, championing ethical development and transparent accountability mechanisms.

A trusted voice in the field, Richard Ryan regularly comments on emerging case law, parliamentary recommendations, and global discussions around frontier AI safety. His mission is to help stakeholders, from hobbyist drone operators to established aerospace firms, navigate the complexities of regulation, risk management, and innovation.
