AI Processing of Earth Images Can Now Run In Space
AI image processing onboard satellites has been a goal of the Earth observation industry for years. Now, it has finally been achieved. California-based Planet Labs released an image, captured by its Pelican-4 multispectral satellite, showing an airport in Alice Springs, Australia. More than a dozen aircraft are scattered across the tarmac, each highlighted in a neat green box, identified by an AI model running onboard the satellite. Planet Labs’ engineers worked for 18 months to accomplish reliable autonomous object classification from space. They hope the technology will put Earth observation on steroids, enabling autonomous tasking and real-time sharing of insights with users on Earth.

“The entire remote sensing industry has been known to put exotic sensors in space,” says Kiruthika Devaraj, vice president of engineering at Planet Labs. “We have very good eyes in space looking at everything that’s going on. But then, we collect so much data and have to wait six to 12 hours to get the information out. So, you’re essentially looking at the past.”

Planet Labs currently operates a constellation of several hundred Dove and SuperDove cubesats, each only 30 centimeters long. These low-cost space cameras scan the entire surface of Earth multiple times a day at a resolution of around 5 meters. The company is also building up a fleet of 32 larger satellites, called Pelicans, which image the planet’s surface in 30-centimeter detail. The fourth of these, deployed into orbit in 2025, ran the airplane-recognition algorithm.

All of Planet’s satellites combined generate 30 terabytes of data per day, the equivalent of 10,000 hours of high-definition video, which is beamed to the ground for processing and analysis via dozens of ground stations scattered around the world. Transferring the downloaded data into the cloud for processing and subsequent AI analysis takes hours. The resulting delays can mean, for example, that a newly ignited wildfire is only noticed once it is too large to contain quickly.

“Minutes matter in some sectors,” Devaraj said. “And real-time insights really enable us to provide answers to problems as they’re unfolding.”

The AI image-recognition algorithms developed by Devaraj and her team analyze a single Pelican image comprising 16,000 pixels in half a second, using onboard GPUs. The results can be in the hands of users within minutes of the image being taken.

So far, only the Pelican satellites are fitted with AI-capable processors: the Nvidia Jetson Orin GPU modules frequently used in autonomous drones. But Devaraj said Planet plans to augment the SuperDove constellation with a new type of satellite, called the Owl, which will not only provide daily revisits at a higher resolution of up to 1 meter but will also carry Jetson processors capable of running AI detection.

The new fleet would enable the company to begin working on what Devaraj describes as “planetary intelligence.” Working as a single intelligent satellite network, the Owls would constantly monitor the planet and autonomously flag potential problems directly to the higher-resolution Pelicans for revisits, without human intervention. “We want to put the brain, all the compute, right next to the sensors,” Devaraj said, “so that the system of satellites we build acts like a biological network that is responding to stimuli in real time.”
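Planet has not published details of its onboard software stack, so the following is only a minimal sketch of the general pattern the article describes: running a pretrained object detector over an image tile on a CUDA-capable device such as a Jetson-class GPU module. Here it is written in Python with ONNX Runtime; the model file name, input layout, and output format are illustrative assumptions, not Planet’s actual design.

    # Minimal sketch of onboard object detection; not Planet's actual pipeline.
    # Assumes a pretrained detector exported to ONNX ("aircraft_detector.onnx"
    # is a hypothetical file) whose single output holds one detection per row.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "aircraft_detector.onnx",  # hypothetical model file
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    input_name = session.get_inputs()[0].name

    def detect(tile: np.ndarray) -> np.ndarray:
        """Run the detector on one (H, W, C) uint8 image tile."""
        x = tile.astype(np.float32) / 255.0        # scale to [0, 1]
        x = np.transpose(x, (2, 0, 1))[None, ...]  # reshape to (1, C, H, W)
        (boxes,) = session.run(None, {input_name: x})
        return boxes  # assumed rows of [x1, y1, x2, y2, score, class_id]

A full satellite frame is far larger than a typical detector’s input window, so in practice the frame would be cut into tiles, each tile run through the detector, and the resulting boxes offset back into frame coordinates.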
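The data volumes quoted above are also easy to sanity-check. Spreading 30 terabytes a day over 10,000 hours of video implies a bitrate squarely in the range of high-definition streams, a quick way to confirm the comparison holds up:

    # Sanity check: 30 TB/day versus 10,000 hours of high-definition video.
    daily_bytes = 30e12    # 30 terabytes generated per day (from the article)
    hd_hours = 10_000      # stated HD-video equivalent

    bytes_per_hour = daily_bytes / hd_hours         # bytes per video-hour
    bitrate_mbps = bytes_per_hour * 8 / 3600 / 1e6  # megabits per second

    print(f"{bytes_per_hour / 1e9:.1f} GB/hour, {bitrate_mbps:.1f} Mbit/s")
    # Prints: 3.0 GB/hour, 6.7 Mbit/s -- a plausible HD streaming bitrate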
In the future, the company wants to switch to more powerful Nvidia Jetson Thor processors and eventually run large language models (LLMs) in space. “In 5 or 10 years, when we all get used to just accepting what Gemini and Claude and other LLMs give you, we may train some generic LLM on satellite imagery and just get text answers to what it sees,” said Devaraj. “You could just get a text message on your phone that says ‘Three minutes ago, I detected this ship without an AIS transmitter, so it’s an illegal ship, and these are the specific coordinates.’”

The Earth observation industry has been talking about onboard AI processing for almost a decade. But until recently, the technology wasn’t ready to run AI algorithms in space quickly and reliably enough. “We started with the early Nvidia Jetson processors, but until the Orin iteration, they didn’t have enough compute power,” said Devaraj.

To run onboard AI image analysis in space, the algorithms need to handle raw data that, unlike the data crunched by AI algorithms on Earth, has not been smoothed out and corrected. “There’s a lot of satellite-level uncertainties,” said Devaraj. “The satellite’s moving, the satellite’s wobbling, vibrating. On the ground, the processing takes hours to correct all of that.”

It took Planet engineers 18 months to achieve 80% detection reliability with the onboard AI model, Devaraj said. The team hopes the next iteration of the algorithm will increase that accuracy to over 95%. The space-based real-time AI detection service won’t be made available to customers for another six to nine months.

Devaraj thinks that when it comes to AI in space, this is only a start. Planet is collaborating with Google on the Suncatcher project, which intends to deploy a vast constellation of data-processing satellites into Earth orbit. The project is one of a plethora of recently discussed ventures that envision moving Earth-based data-crunching infrastructure off the planet. Proponents, including tech giants SpaceX and Amazon, believe that in Earth orbit, power-hungry computers will be able to run on free solar power and be cooled easily without straining water supplies. But critics question whether large-scale computing infrastructure could ever be launched cheaply enough to compete with technology on Earth. Google and Planet plan to fly two prototype satellites in 2027.
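The article leaves open how the onboard models cope with the raw, uncorrected frames Devaraj describes. One generic, lightweight approach, shown here purely as an illustration and not as Planet’s documented method, is to replace the hours of ground processing with a fast per-band percentile stretch before the detector runs:

    # Generic per-band percentile stretch: a cheap stand-in for full ground
    # processing, illustrating one way raw frames could be normalized onboard.
    import numpy as np

    def quick_normalize(raw: np.ndarray, lo_pct: float = 2.0,
                        hi_pct: float = 98.0) -> np.ndarray:
        """Map each band's 2nd-98th percentile range onto 0-255 (uint8)."""
        out = np.empty(raw.shape, dtype=np.uint8)
        for band in range(raw.shape[-1]):
            lo, hi = np.percentile(raw[..., band], [lo_pct, hi_pct])
            span = max(hi - lo, 1e-6)  # guard against a flat band
            scaled = np.clip((raw[..., band] - lo) / span, 0.0, 1.0)
            out[..., band] = (scaled * 255).astype(np.uint8)
        return out

Geometric effects such as the wobble and vibration Devaraj mentions would still have to be absorbed by the model itself, for instance by training on uncorrected imagery, which may be part of why reaching 80% reliability took 18 months.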
