The Pentagon’s reliance on Silicon Valley for its artificial intelligence needs has proven problematic. The limits of such technology were highlighted when the Israeli internal security service, Shin Bet, developed its own generative AI platform, similar to ChatGPT. Despite having extensive data on potential threats, the system failed to predict a devastating attack by Hamas, one carried out in plain sight, with training exercises openly reported and even posted online. The AI was unable to correctly interpret this information, resulting in a significant intelligence failure.
The incident underscores the limitations of AI in understanding human behavior and intentions. Even with access to vast amounts of data, AI systems can be misled by disinformation and false signals, particularly when they are built on rigid assumptions and cannot grasp the nuances and complexities of human conduct.
The article also discusses the growing divide between Silicon Valley and Washington, with tech companies increasingly reluctant to work with the Department of Defense. This reluctance has raised concerns that the U.S. could fall behind in the development and application of AI technologies, particularly in the field of national security.
read more > harpers.org