Abstract
Would you walk on land declared safe by an unproven technology, developed by enthusiastic proponents who lack long experience in mine action? What if the system for locating hazards were tested in only one or two trials, even though the type of machine-learning system being used is known to sometimes give false but completely plausible results (so-called hallucinations¹)? Furthermore, the proposed machine-learning system would provide no audit trail for analysis if a serious error occurred, and no way of knowing with certainty how to prevent its repetition.
There is dangerously uncritical promotion on social media of unproven AI technology that is potentially hazardous, insufficiently tested, and unlikely to provide practical solutions in the field. Over twenty years ago, airborne sensors (balloon and drone), multi-sensor data fusion, thermal imaging, and many other technologies were promoted as practical solutions for mine clearance, yet uptake has been near zero. A drone with sensors linked to an AI system can currently detect a few mine types that are visible on the surface of the ground, over 95 percent of the time. To progress from this to near-perfect detection of unknown mine types, including improvised devices and buried mines, against a wide range of backgrounds, is a monumental task. Separating the different causes of failure, such as sensor limitations, incorrect AI algorithms, or inadequate training data, is a prerequisite for progress. Standardized AI training data, together with success criteria agreed upon by researchers and mine action organizations, are essential if initial trials are to be more than an opportunity to publicize different approaches in carefully prepared scenarios. The use of AI also presents novel legal and liability issues in the event of failure.
The negative consequences of the misuse of machine learning and AI go further than the danger from overlooked hazards. Inappropriate use of AI on safety critical tasks—especially tasks that humans can already perform to a very high standard—may well prevent AI from being accepted for other uses in mine action where it can make an important difference to the effectiveness and efficiency of operations, and as a result, save lives and prevent injuries. Mine action needs to set out a clear path forward based on understanding of what AI can and cannot provide.
Recommended Citation
Gasser, Russell, Ph.D. (2024) "What Can Artificial Intelligence Offer Humanitarian Mine Action?," The Journal of Conventional Weapons Destruction: Vol. 28, Iss. 2, Article 3.
Available at: https://commons.lib.jmu.edu/cisr-journal/vol28/iss2/3