An anonymous reader quotes a report from Motherboard: A new report authored by a group of independent U.S. scientists advising the U.S. Department of Defense (DoD) on artificial intelligence (AI) claims that perceived existential threats to humanity posed by the technology, such as drones the public sees as killer robots, are at best “uninformed.” The scientists acknowledge that AI will be integral to most future DoD systems and platforms, but say that AI that could act like a human “is at most a small part of AI’s relevance to the DoD mission.” Instead, a key application area of AI for the DoD is augmenting human performance. The report, Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, first reported by Steven Aftergood at the Federation of American Scientists, was researched and written by scientists belonging to JASON, the historically secretive organization that counsels the U.S. government on scientific matters. Outlining the potential use cases of AI for the DoD, the JASON scientists point out that the growing public suspicion of AI is “not always based on fact,” especially when it comes to military technologies. Citing SpaceX boss Elon Musk’s opinion that AI “is our biggest existential threat” as an example, the report argues that these purported threats “do not align with the most rapidly advancing current research directions of AI as a field, but rather spring from dire predictions about one small area of research within AI, Artificial General Intelligence (AGI).” AGI, as the report describes it, is the pursuit of machines capable of long-term decision making and intent, i.e., thinking and acting like a real human. “On account of this specific goal, AGI has high visibility, disproportionate to its size or present level of success,” the researchers say.