The military departments and their contractors must address legal and ethical concerns early in development, from concept through implementation, given the wide range of potential applications and the challenges associated with them.
Developing system requirements
At an early stage of development, it is crucial to conduct a comprehensive analysis of the legal and ethical ramifications of AI-enabled autonomous systems. To analyze a system's strengths and weaknesses, the designer must understand its function, its technical components, and the relevant legal and ethical issues.
Ethical considerations must be added to technical specifications for components such as computer vision, image recognition, decision logic, and vehicle autonomy, in order to account for the laws of war, international humanitarian law, and pertinent regulations. International humanitarian law, for instance, distinguishes between military personnel and civilians and mandates that unidentified individuals be treated as civilians. In practice, this means that an AI system that has detected a person should query the operator to determine the person's status before acting. Even for this one case, designers must weigh a wide range of trade-offs that may arise during AI operation.
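As an illustration only, the civilian-by-default rule described above could be sketched as a simple decision function. All names and statuses here are hypothetical, not part of any real system or standard:

```python
from enum import Enum

class Status(Enum):
    UNKNOWN = "unknown"
    CIVILIAN = "civilian"
    COMBATANT = "combatant"

def classify_detection(detected_status, ask_operator):
    """Apply the civilian-by-default rule: an unidentified person is
    treated as a civilian unless a human operator confirms otherwise."""
    if detected_status is not Status.UNKNOWN:
        return detected_status
    # Defer the decision to a human operator; fall back to CIVILIAN
    # if no confirmation is available.
    confirmed = ask_operator()
    return confirmed if confirmed is not None else Status.CIVILIAN
```

The key design choice, keeping with the legal requirement, is that the machine never resolves ambiguity on its own: an unknown status either goes to a human or defaults to the protected category.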
Throughout the development process, developers will need to consult with specialists, or even multidisciplinary teams. These specialists will not only highlight legal restrictions but also contribute other crucial elements of ethical analysis, among them system transparency and clarity.
Emphasizing system explainability and ethical documentation
Ethical documentation requirements provide a simple method of capturing a system's explainability. Developers should document their systems in plain English, including critical dependencies, potential points of failure, and research gaps, so that non-technical audiences understand the legal and ethical risks of new AI-enabled systems. A thorough mission walkthrough can help developers identify system decision points and design user interfaces and other components accordingly. Developers are already required to prepare mitigation documentation to identify and resolve technical issues in new systems; comparable risk documentation ensures that ethical issues are considered transparently during the design phase.
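To make the idea concrete, such a risk record might capture the elements named above in a small structured form. This is a hypothetical sketch, not a mandated documentation format, and every field name and example value is invented:

```python
from dataclasses import dataclass, field

@dataclass
class EthicalRiskRecord:
    """One entry in a system's ethical risk documentation
    (an illustrative structure, not an official template)."""
    component: str                 # e.g. "image recognition"
    plain_language_summary: str    # readable by non-technical reviewers
    critical_dependencies: list = field(default_factory=list)
    failure_points: list = field(default_factory=list)
    research_gaps: list = field(default_factory=list)
    mitigation: str = ""

record = EthicalRiskRecord(
    component="image recognition",
    plain_language_summary="Flags objects resembling vehicles; accuracy "
                           "drops sharply in low light.",
    critical_dependencies=["third-party training imagery"],
    failure_points=["low-light conditions", "occluded targets"],
    research_gaps=["performance on unseen terrain types"],
    mitigation="Require operator confirmation below a confidence threshold.",
)
```

Keeping the summary in plain language, alongside explicit failure points and gaps, is what lets legal and ethical reviewers engage with the record without reading the underlying code.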
Addressing research gaps and biases
Regular ethical reviews can reduce AI bias and guard against the introduction of intentional or unintentional bias. Legal and ethical consultants can help examine datasets, system outputs, and human input to identify bias during the design process.
At first glance, applied legal, moral, and ethical considerations appear burdensome for military artificial intelligence developers, and they may necessitate rethinking the IT community's standard AI development processes. Early and frequent analysis, including continual testing and prototyping, will nevertheless reduce the number of ethical quandaries that would otherwise impede later stages of development and block the deployment of intelligent systems for military purposes.
Before joining the ranks of the army, soldiers learn the lessons of ethics, morality, and law in society. Why should we approach intelligent machines differently if we are asking them to take on responsibilities similar to ours?