The discourse on artificial intelligence is somewhat skewed, with scary, half-baked research findings and scenarios getting disproportionate attention (which is, of course, a classic human bias). AI related to weapon systems in particular fuels people's imaginations – most recently this week with the canard that a drone had turned on its commander and attacked him. In fact, it was a kind of thought experiment. I have written here about the role such stories play in the uncanny side of AI research.

This text by my colleagues Georg Mascolo and Andrian Kreye soberly examines the topic of "AI in the military" and illustrates the problems using the Palantir software that the company presented to the reporters. The software collates data, interprets satellite images, and gives soldiers tips, but no orders. The human remains "in the loop"; only he can ultimately give the order to shoot. The military AI does not make any decisions, so it does not issue any orders to fire. The battlefield demo assumes that a tank has been spotted in the field. The AI probes the situation, considering whether one of the artillery units or a task force called "Team Omega" would be best suited to launch an attack.

The public's focus on military AI developments is justified – after all, misused autonomy or outsourced control of weapon systems can have catastrophic effects (although the objection also applies here that people in wartime situations make decisions with the most brutal consequences again and again). However, I was amazed at the ease with which the drone story was shared toward the end of the week, without the spreaders realizing that the story was unrealistic and "too good to be true". An AI that improves people's ability to scrutinize sources – now that would be something.

The piqd article brings the reader up to date on the capabilities of military and surveillance AI, is clearly written, and does not scaremonger. That's how it is supposed to be.