DARPA tests autonomous vehicles’ ability to tell friend from foe

In contested or occupied urban areas, deciding whether a person is waiting for a bus or scoping out a potential target often comes down to intuition.

Current artificial intelligence technologies, such as machine vision, may be able to pull clues from the complexity and ambiguity of city neighborhoods, but so far they are unable to distinguish threats from non-threats based on passive observation alone.

One way to elicit an indication of a threat is to stage an interaction and analyze the response. A guard might, for example, stop and question a person loitering near an installation to assess his intent. In some situations, however, encounters between military personnel and locals escalate existing tensions and work against the overall mission.

An autonomous vehicle (AV), on the other hand, could block the loiterer’s view and note whether he moves to a new location to continue his observations or stays seated waiting for the bus. Such strategies can give commanders the direct evidence they need to make decisions.

To further that idea, the Defense Advanced Research Projects Agency is looking for ways AVs can make it easier for commanders to detect and track threats among civilians in complex urban environments without escalating tensions.

DARPA’s Non-Escalatory Engagement to reduce Dimensionality (NEED) program aims to build a library of engagements, or scenarios in which autonomous aerial or ground vehicles interact with urban residents to test whether an individual or group poses a threat.

DARPA initially wants NEED applicants to describe 10 engagements that could surface AV-detectable indicators of threats such as setting an ambush, stoking violence, smuggling or planting explosives. Descriptions should explain how each engagement could generate specific evidence of a threat.

For each engagement submitted, DARPA wants performers to spell out the action’s purpose, the type of autonomous vehicle involved, the estimated level of escalation, the expected responses and how those responses would be detected by AVs equipped with perception technologies. Those tools include cross-sensor re-identification (where a person can be re-identified across several different sensors), entity tracking, pose detection and processing of simple observed behaviors such as running or standing, as well as compliance with simple requests such as turning around.
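To make those submission requirements concrete, here is a minimal sketch of how one entry in such an engagement library might be structured. It is purely illustrative: the class, field names, enumerations and example values are assumptions made for this article, not DARPA’s actual NEED submission format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

# Hypothetical sketch only. Field names and values are illustrative assumptions,
# not DARPA's NEED submission format.

class VehicleType(Enum):
    AERIAL = "aerial"
    GROUND = "ground"

class EscalationLevel(Enum):
    NONE = 0
    LOW = 1
    MODERATE = 2

@dataclass
class Engagement:
    """One entry in a NEED-style engagement library."""
    purpose: str                    # what the interaction is meant to reveal
    vehicle: VehicleType            # autonomous aerial or ground vehicle
    escalation: EscalationLevel     # estimated level of escalation
    expected_responses: List[str]   # behaviors a threat vs. a non-threat might show
    detection_methods: List[str]    # perception tools used to observe the response

# Example: the "block the loiterer's view" scenario described above.
block_view = Engagement(
    purpose="Determine whether a loiterer repositions to keep observing an installation",
    vehicle=VehicleType.GROUND,
    escalation=EscalationLevel.LOW,
    expected_responses=["moves to regain line of sight", "remains seated at bus stop"],
    detection_methods=["cross-sensor re-identification", "entity tracking", "pose detection"],
)
```

Structuring each engagement this way would pair every expected response with the perception tool meant to observe it, which is essentially what the solicitation asks performers to describe in prose.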

The engagements will be tested at the Urban Reconnaissance through Supervised Autonomy testbed, where DARPA experiments with sensors, artificial intelligence, drones and human psychology to better protect troops with technologies that can distinguish between threats and noncombatants. There, commanders will evaluate whether engagements between perception-enabled AVs and urban residents yield improved intelligence, surveillance and reconnaissance.

Rather than set an arbitrary target threshold for accuracy, DARPA said NEED solutions will be compared against baseline conditions in which humans are either entirely unaided, or aided only by passive machine detection.

More information on the opportunity is available from DARPA, with more detailed discussion on Polyplexus.

This article first appeared on GCN, a Defense Systems partner site. 

https://defensesystems.com/articles/2020/10/27/darpa-need.aspx
