NASA and Google have tested a new artificial intelligence assistant designed to make autonomous medical diagnoses during prolonged space missions, and initial trials suggest it can deliver reliable diagnoses. The project centers on the Crew Medical Officer Digital Assistant (CMO-DA), developed to support astronauts when guidance from Earth is limited or unavailable.

The CMO-DA functions as an automated clinical decision support system, helping crews diagnose and treat illnesses during long missions to the Moon and Mars when no doctor is on board or communication is lost. The assistant runs on Vertex AI, Google Cloud's machine-learning platform, which provides access to Google and partner models, and it draws on spaceflight medical literature. It supports text, image, and voice interaction. The system uses open-source language models, including Llama 3 and Mistral Small 3, and was trained on open-source data covering 250 common medical issues in space.
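As a rough illustration of what serving such a system on Vertex AI can look like, the sketch below queries an open model deployed to a Vertex AI endpoint with a triage-style prompt. The project ID, endpoint ID, instance schema, and prompt are all hypothetical; CMO-DA's actual interface has not been published.

```python
# Hypothetical sketch: querying an open medical-triage model served from a
# Vertex AI endpoint. Project, endpoint, and instance schema are assumptions,
# not CMO-DA's real configuration.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Assumes a Llama 3-class model already deployed to a Vertex AI endpoint.
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

prompt = (
    "Crew member reports right ankle pain after treadmill exercise; "
    "mild swelling, no visible deformity. List likely diagnoses and "
    "the next assessment steps."
)

# The instance schema varies by model; {"prompt": ..., "max_tokens": ...}
# is a common shape for text models served from Model Garden.
response = endpoint.predict(instances=[{"prompt": prompt, "max_tokens": 512}])
print(response.predictions[0])
```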

The assistant is designed to “assess health, provide real-time diagnostics, and guide treatment until a medical professional is available,” NASA said. “Supporting crew health through space-based medical care is becoming increasingly important as NASA missions venture deeper into space,” said Jim Kelly, vice president of federal sales for Google’s public sector arm. Kelly added that the project addresses whether remote care capabilities can deliver detailed diagnoses and treatment options when a physician is not on board or real-time communication with Earth is limited.

Early testing produced physician-rated diagnostic accuracy of 88% for ankle injuries, 74% for flank pain, and 80% for ear pain, according to NASA presentations.

During periods of delayed communication, the CMO-DA can act as an autonomous first responder, guiding crews through emergencies such as injuries and cardiovascular problems. Missions beyond the International Space Station face one-way signal delays of up to about 20 minutes (more than 40 minutes round trip at Mars distances), and a journey to Mars takes about nine months.
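That delay figure follows directly from the Earth–Mars distance and the speed of light; a quick back-of-the-envelope calculation with approximate distances reproduces it.

```python
# Back-of-the-envelope check on the one-way signal delay cited above.
# Distances are approximate Earth-Mars extremes.
SPEED_OF_LIGHT_KM_S = 299_792

for label, distance_km in [("Mars at closest approach", 54_600_000),
                           ("Mars at farthest", 401_000_000)]:
    delay_min = distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: ~{delay_min:.1f} min one-way")

# Prints roughly 3.0 and 22.3 minutes: hence "up to about 20 minutes"
# one-way, or over 40 minutes round trip.
```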

NASA plans to expand CMO-DA by ingesting data from medical devices and training the model to detect space-specific conditions, including those related to microgravity, enabling continuous assessment and timely alerts or treatment advice. The agency also intends to let the assistant operate onboard equipment, such as performing ultrasound exams or administering medications.
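To make the device-data idea concrete, here is a minimal, hypothetical sketch of the vitals-screening step such a pipeline might include. The thresholds and field names are invented for illustration and are not taken from CMO-DA.

```python
# Illustrative sketch of the planned device-data pipeline: take a vitals
# reading, flag out-of-range values, and surface alerts the assistant could
# then explain. All thresholds and field names are assumptions.
from dataclasses import dataclass

@dataclass
class VitalSigns:
    heart_rate_bpm: float
    spo2_percent: float
    systolic_bp_mmhg: float

def check_vitals(v: VitalSigns) -> list[str]:
    """Return human-readable alerts for readings outside nominal ranges."""
    alerts = []
    if not 40 <= v.heart_rate_bpm <= 120:
        alerts.append(f"Heart rate {v.heart_rate_bpm:.0f} bpm outside 40-120 range")
    if v.spo2_percent < 92:
        alerts.append(f"SpO2 {v.spo2_percent:.0f}% below 92% threshold")
    if v.systolic_bp_mmhg > 160:
        alerts.append(f"Systolic BP {v.systolic_bp_mmhg:.0f} mmHg above 160")
    return alerts

reading = VitalSigns(heart_rate_bpm=134, spo2_percent=90, systolic_bp_mmhg=128)
for alert in check_vitals(reading):
    print(alert)  # alerts like these would be handed to the model as context
```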

NASA and Google are working with physicians to refine the model for future missions. NASA holds the application’s source code and owns the model, and the agency will take part in finalizing it. Training on thousands of additional medical case studies is expected to improve the assistant’s diagnoses and recommendations. On data protection, Google said sensitive health data will be used only for the mission and stored securely. “It is important that the AI not only works technically but also incorporates the human factor,” said a NASA scientist, adding that the system must be extremely reliable while also weighing ethical questions, such as decisions about invasive procedures.

NASA plans initial tests aboard the International Space Station before deploying the system on later Mars missions.

With emergency evacuation in deep space off the table, the tool could prove vital when every second counts. “We are creating a technology that can save lives in an extreme situation,” said a Google representative. “The tool represents an important milestone for AI-assisted medical care and the continued exploration of the cosmos,” Google stated.

Researchers said the work could inform medical tools on Earth, especially for remote regions or disaster zones where access to doctors is limited. A reliable AI medical assistant could transform emergency care, telemedicine, and routine diagnostics.