
Palantir, the company founded by billionaire Peter Thiel, has recently launched the Palantir Artificial Intelligence Platform (AIP), software designed to run large language models on private networks. The company pitched the software with a demonstration video showing how it could be used to fight a war: an “operator” uses a chatbot to order drone reconnaissance, generate attack plans, and organize the jamming of enemy communications.
Palantir’s scenario features a “military operator responsible for monitoring activity within eastern Europe” who receives an AIP alert that an enemy is amassing military equipment near friendly forces. The operator asks the chatbot for more detail and for its best guess at which units may be present, then asks it to get better pictures. AIP tasks a drone with taking photos, which reveal a T-80, a Soviet-era Russian tank, parked near friendly forces. Asked what to do about it, AIP offers three possible courses of action: attack the tank with an F-16, with long-range artillery, or with Javelin missiles.
Critics, however, have pointed to the dangerously abstracted vision of warfare in Palantir’s pitch: the video shows the “operator” doing little more than approving the chatbot’s suggestions. Drone warfare has already raised concerns about this kind of abstraction, and its consequences are well documented.
Palantir is not selling a military-specific AI or language model; it is offering to integrate existing systems into a controlled environment. The AIP demo shows the software supporting several open-source language models, including FLAN-T5 XL, a fine-tuned version of GPT-NeoX-20B, and Dolly-v2-12b, along with several custom plug-ins. Language models, however, still suffer from problems such as “hallucinations,” in which they confidently make things up. Recently, a man died by suicide after weeks of talking with a chatbot built on GPT-J, an open-source model released by the research group EleutherAI.
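For a sense of what running a language model “on a private network” means in practice, here is a minimal illustrative sketch (not Palantir’s code) of fully local inference with the open-source Hugging Face transformers library. The model name is a small, publicly available stand-in chosen so the example runs on modest hardware; once the weights are downloaded, no prompt or output has to leave the machine.

```python
# Illustrative sketch only: local inference with an open-source model.
# "google/flan-t5-small" is a small stand-in for larger models like FLAN-T5 XL.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL = "google/flan-t5-small"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

# The prompt is processed entirely on local hardware; no external API is called.
prompt = "Summarize: enemy armor reported massing near the border."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is the basic appeal of open-source models for classified networks: unlike a hosted service, the entire inference pipeline can run behind the organization’s own firewall.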
What Palantir is offering is the illusion of safety and control for the Pentagon as it begins to adopt AI. “LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way,” the pitch says. According to Palantir, AIP provides control over what every language model and AI in the system is allowed to do, and it generates a secure digital record of operations, which the company says is crucial for mitigating the legal, regulatory, and ethical risks of sensitive and classified settings.
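Palantir has not published how these controls are implemented. As a purely hypothetical sketch of the pattern the pitch describes, restricting what actions a model may trigger could be done with an allow-list, and the “secure digital record” could be a hash-chained log that makes tampering detectable. Every name below is invented for illustration.

```python
# Hypothetical sketch (assumed design, not Palantir's implementation):
# an allow-list over model-triggered actions plus a hash-chained audit log.
import hashlib
import json
import time

ALLOWED_ACTIONS = {"request_imagery", "summarize_intel"}  # invented allow-list

audit_log = []  # each entry links to the previous entry's hash

def record(entry: dict) -> None:
    # Chain entries together so any later edit breaks the hashes that follow.
    entry["prev_hash"] = audit_log[-1]["hash"] if audit_log else ""
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)

def execute(action: str, args: dict, operator: str) -> None:
    # Log the attempt whether or not it is permitted, then enforce the list.
    allowed = action in ALLOWED_ACTIONS
    record({"time": time.time(), "operator": operator,
            "action": action, "args": args, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"action {action!r} is outside the model's authority")
    # ... dispatch to the real system here ...

execute("summarize_intel", {"region": "eastern Europe"}, operator="analyst1")
print(json.dumps(audit_log, indent=2))
```

A scheme like this can say who approved what and when; whether that amounts to meaningful human control over the underlying decisions is exactly the question critics are raising.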
However, beyond “frameworks” and “guardrails” meant to keep military AI ethical and legal, AIP offers no solutions to the underlying problems of language models in a military context. And the consequences of warfare being reduced to actions taken at the push of a button remain unclear.