Developed in San Francisco by Foundation Future Industries, the humanoid robot Phantom MK1 combines virtual-reality telepresence, AI assistance, and a “task for movement” architecture to carry out support, surveillance, and logistics missions. With US$24 million in contracts, the company is aiming for operational testing and global-scale production by 2027.
The humanoid robot Phantom MK1, created by Foundation Future Industries, is being designed to occupy a space that until recently seemed fictional: to act as a “soldier” in support and security missions, reducing the direct exposure of military personnel to dangerous scenarios. The central proposal is to transfer risky, repetitive, and dirty tasks to machines, without putting a life at risk.
At the same time, the initiative exposes difficult dilemmas: when an AI platform enters the military field, the debate shifts from being solely technological to also political, operational, and ethical. Among contracts with Armed Forces, testing in real environments, and an aggressive scaling plan, the growing question is what changes when “presence” can be remote and the decision is assisted by algorithms.
What is the Phantom MK1 and why does it draw attention

The Phantom MK1 was publicly revealed in early 2025 as a humanoid robotics platform standing 1.80 m tall and weighing about 80 kg, designed to operate in human environments, from factories and disaster sites to defense scenarios. In terms of physical capability, the company says the humanoid robot can carry up to 20 kg and move at approximately 6 km/h, numbers that help clarify the type of mission envisioned: presence on the ground, support, and short movements under load.
Behind the mechanical body, the ambition is for the humanoid robot to perform complex tasks by integrating large language models (LLMs) with a “task for movement” architecture, along with cycloidal actuators. The promise is to turn instructions and objectives into physical actions, bringing interaction with the robot closer to the way humans coordinate teams in the field.
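The article does not describe how Foundation's architecture actually works, but the general idea of a “task for movement” pipeline can be sketched conceptually: a high-level instruction gets decomposed into an ordered sequence of motion primitives that a controller could execute. The sketch below is purely illustrative; every name, primitive, and mapping is a hypothetical stand-in, not the company's API, and a real system would use an LLM planner rather than a fixed lookup table.

```python
# Illustrative sketch only: turning a language-level task into ordered
# motion primitives. All names here are hypothetical assumptions.
from dataclasses import dataclass, field


@dataclass
class MotionPrimitive:
    """One low-level action a robot controller could execute."""
    name: str
    params: dict = field(default_factory=dict)


def plan_task(instruction: str) -> list[MotionPrimitive]:
    """Map a simple instruction to a canned primitive sequence.

    A production system would delegate this decomposition to an LLM
    planner; the fixed library below only illustrates the concept of
    converting objectives into physical actions.
    """
    library = {
        "carry crate": [
            MotionPrimitive("grasp", {"object": "crate", "max_load_kg": 20.0}),
            MotionPrimitive("walk_to", {"target": "drop_zone", "speed_kmh": 4.0}),
            MotionPrimitive("release"),
        ],
        "inspect door": [
            MotionPrimitive("walk_to", {"target": "door", "speed_kmh": 6.0}),
            MotionPrimitive("scan", {"sensor": "camera"}),
        ],
    }
    return library.get(instruction.lower(), [])


plan = plan_task("carry crate")
print([p.name for p in plan])  # ['grasp', 'walk_to', 'release']
```

The design point is the separation of concerns: language handling (and, in the reported design, the human teleoperator) sits above a small vocabulary of vetted primitives, so the machine never improvises actions outside that vocabulary.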
Million-dollar contracts and the path to becoming a military supplier
Foundation Future Industries claims it already holds research contracts worth US$24 million (R$127 million) with the United States Army, Navy, and Air Force. Among them is an SBIR Phase III award, a stage associated with commercialization that positions the company as an approved military supplier. That detail matters because, in the defense world, it usually signals a transition from prototype to applications with institutional traction.
This type of contract is not just money: it is validation of interest and opening doors within an ecosystem that demands requirements, testing, and compliance. When a company starts to be treated as a potential supplier, it enters the radar of real demands, where the conversation shifts from “what can be built” to “what works under pressure, with safety and predictability.”
How the humanoid robot is operated and what tasks it aims to assume
One point that the company emphasizes is that the Phantom is not described as an “autonomous combatant.” The operation would be done via telepresence through virtual reality, with assistance from artificial intelligence—a formulation that suggests a human commanding the platform remotely, while the AI assists in navigation, execution, and decision-making within parameters. This distinction is central because it changes the discussion about responsibility and control.
In the list of mentioned applications, the humanoid robot appears associated with functions such as surveillance, logistics, reconnaissance, bomb disposal, and operation in dangerous or contaminated environments. In other words, the declared focus is on activities where human presence is particularly vulnerable. The narrative is about “replacing risk,” not “replacing the soldier,” although, in practice, the line between support and protagonism may shift over time.
Testing in real scenarios and the message behind the “front line”
In February, according to a report from Time magazine, two Phantom robots were sent to Ukraine, initially for reconnaissance support on the front line. This detail changes the weight of the project: testing in a controlled field is one thing; placing the technology in a real scenario, where communication, terrain, interference, and unpredictability are part of the package, is another. The decision to test in a war environment communicates urgency and acceleration.
The company is also preparing to begin tests related to the Marine Corps’ “entry methods” course, training the Phantoms to handle forced-entry procedures that may involve explosives at doors, a type of operation described as aiming to increase troop safety during raids. The key point here is not the method, but the intention: pushing the humanoid robot into tasks where danger is immediate and mistakes are costly.
A technological race, a strong phrase, and an investor with a political surname
The broader context, presented by Foundation itself, is the Pentagon’s ongoing interest in militarized humanoid prototypes that operate alongside combatants in complex and high-risk environments. The company argues that, in the face of adversaries like Russia and China developing defense-focused robots, the US and allies need to keep pace. It’s the classic logic of “if I don’t do it, someone else will.”
In this framework, CEO Sankaet Pathak told Time that an arms race of humanoid soldiers “is already happening.” There is also an element that draws attention outside the laboratory: Eric Trump, son of President Donald Trump, is cited as an investor and strategic advisor. When military technology meets influence and politically recognizable names, the debate tends to heat up and attract far more public scrutiny.
Scaling up to 50,000 units: industrial ambition with military impact
Foundation states that it intends to scale production of its humanoid robots, with a goal of building up to 50,000 Phantom units by 2027, targeting both industrial and military use. It is a huge leap, because scaling depends on more than design: it involves supply chain, cost, maintenance, operator training, software updates, and reliability in large batches. Scale is where technological promises are often challenged by reality.
If this plan advances, the potential impact is not just “more robots,” but a change in operational standard: more available platforms mean more usage scenarios, more field data, more iterations, and possibly more functions delegated to the humanoid robot. At the same time, rapid growth also increases risks: failures, misuse, vulnerabilities, and incidents cease to be exceptions and become statistics. The larger the fleet, the greater the responsibility.
Ethical issues and operational risks that do not fit in the marketing of innovation
The use of “humanoid soldiers” raises recurring concerns: lowered ethical barriers to starting or prolonging conflicts, doubts about accountability for abuses, and the possibility of dehumanizing war as human physical presence recedes from the ground. Even with teleoperation, the psychological dynamics change: being far away can alter the perception of risk, urgency, and consequence. Technology, here, shapes incentives, and incentives shape decisions.
There are also operational risks mentioned in the debate: vulnerability to cyber attacks and limits of AI in assessing complex situations. In high-risk environments, “intelligent assistance” can be useful—but it can also generate dependency, misinterpretation errors, and poor decisions if systems are deceived, interrupted, or used outside the intended context. In the end, the hardest question is simple: who is accountable when the human-machine chain fails?
The Phantom MK1 enters the military debate with a powerful promise: to reduce human exposure to danger in missions that mix risk, repetition, and unpredictability. But the same idea that protects can also push technological, legal, and moral limits towards a new normal, where physical presence and operational decision increasingly separate.
With information from CNN Brasil
Now it’s your turn: is a humanoid robot in military operations a necessary advance to save lives, or a risky step that facilitates conflicts and dilutes responsibility? Comment with your opinion and, if possible, say which line you believe should not be crossed.
