After Christmas dinner 2021, our family was glued to the TV, watching the thrilling launch of NASA’s US$10 billion (A$15 billion) James Webb Space Telescope. There hasn’t been such a leap forward in telescope technology since Hubble’s launch in 1990.
On the way to full deployment, Webb had to clear 344 potential single points of failure. Fortunately, everything went better than expected, and we could finally breathe again.
Six months later, Webb’s first images were revealed, showing the most distant galaxies ever observed. But for our team in Australia, the work was only just beginning.
We would be using Webb’s highest-resolution mode, the Aperture Masking Interferometer, or AMI for short: a small, precisely machined metal plate that slots into one of the telescope’s cameras to sharpen its resolution.
Our results from carefully testing and improving AMI have now been published in two papers on the open-access archive arXiv. We can finally present its first successful observations of stars, planets, moons and even black hole jets.
Working with an instrument a million miles away
Hubble began life with blurred vision – its mirror had been ground with great precision, but to the wrong shape. By observing known stars and comparing the ideal and measured images (much as optometrists do), it was possible to derive a “prescription” for this optical error and design corrective optics to compensate.
The fix required seven astronauts to fly aboard the Space Shuttle Endeavour in 1993 to install the new optics. Hubble orbits just a few hundred kilometers above Earth’s surface, within reach of astronauts.
NASA/Chris Gunn
In contrast, Webb is about 1.5 million kilometers away – we can’t visit or repair it, so any problems have to be fixed without touching the hardware.
That’s where AMI comes in. It is Australia’s only hardware contribution on board, designed by astronomer Peter Tuthill.
It flew on Webb to diagnose and measure any blur in its images. Even nanometers of distortion across Webb’s 18 hexagonal primary mirror segments and its many internal surfaces can blur images enough to hamper studies of planets or black holes, where sensitivity and resolution are essential.
AMI filters light with a carefully structured pattern of holes in a simple metal plate, making it easier to determine if there are optical misalignments.
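To illustrate the principle (this is not the actual AMI design or pipeline): in Fourier optics, the image a telescope makes of a point source – its point-spread function – is the squared magnitude of the Fourier transform of its aperture. A minimal sketch with a hypothetical few-hole mask, where the hole positions and sizes are invented for illustration:

```python
import numpy as np

N = 256                      # simulation grid size (pixels)
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]

# Hypothetical mask: a few small circular holes punched in an
# otherwise opaque plate (positions are purely illustrative).
holes = [(-60, 0), (40, 45), (10, -70), (70, -20), (-30, 55)]
aperture = np.zeros((N, N))
for cy, cx in holes:
    aperture[(y - cy)**2 + (x - cx)**2 < 8**2] = 1.0

# Fraunhofer diffraction: the point-spread function is the squared
# modulus of the Fourier transform of the aperture.
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2
psf /= psf.max()
```

Each pair of holes contributes one interference fringe to the resulting image; optical misalignments shift those fringes, which is what makes a masked aperture such a sensitive diagnostic.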
Anand Sivaramakrishnan/STScI
Hunting for blurry pixels
We wanted to use this mode to observe the birthplaces of planets, as well as the matter sucked into black holes. But before all that, AMI showed that Webb wasn’t entirely working as hoped.
At the very finest scales – the level of individual pixels – all images were slightly blurred by an electronic effect: charge from brighter pixels leaking into their darker neighbors.
This is not a flaw or defect, but a fundamental feature of infrared cameras – and it proved surprisingly serious for Webb.
This was an obstacle to seeing faint planets thousands of times dimmer than their host stars just a few pixels away: my colleagues quickly showed that AMI’s sensitivity limits were more than ten times worse than predicted.
So we decided to fix it.
How we refined Webb’s vision
In a new paper led by Louis Desdoigts, a doctoral student at the University of Sydney, we observed stars with AMI to simultaneously learn and correct its optical and electronic distortions.
We built a computer model to simulate the optical physics of AMI, with flexibility in the shapes of the mirrors and apertures as well as the colors of the stars.
We coupled this to a machine-learning model of the detector electronics – an “effective detector model”, where we only care that it reproduces the data, not why.
After training and validating it on a handful of test stars, this setup let us calculate and remove the blur from other data, restoring AMI to full function. Nothing changes on Webb itself in space; the correction is applied to the data during processing.
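The real pipeline jointly fits a differentiable optical model and a learned detector model. As a much simpler illustration of the underlying “fit the blur, then divide it out” idea, here is a toy NumPy version in which the charge bleed is a single unknown 3×3 kernel – the kernel values, image sizes and fitting method are all invented for illustration:

```python
import numpy as np

def convolve2d_same(img, k):
    """Naive 'same'-size correlation of img with a 3x3 kernel k."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# Toy calibration scene: a 'known' star field from an optical model.
true_img = np.zeros((32, 32))
true_img[10, 12] = 1.0
true_img[20, 25] = 0.6

# Hidden detector effect: bright pixels leak charge into neighbors.
k_true = np.array([[0.00, 0.05, 0.00],
                   [0.05, 0.80, 0.05],
                   [0.00, 0.05, 0.00]])
observed = convolve2d_same(true_img, k_true)

# 'Effective detector model': each kernel entry multiplies a shifted
# copy of the true image, so the forward model is linear in the kernel
# and ordinary least squares recovers it directly.
shifts = np.stack([np.pad(true_img, 1)[i:i + 32, j:j + 32].ravel()
                   for i in range(3) for j in range(3)], axis=1)
k_fit, *_ = np.linalg.lstsq(shifts, observed.ravel(), rcond=None)
k_fit = k_fit.reshape(3, 3)

# Correct new data by dividing out the fitted kernel in Fourier space
# (the kernel is rolled so its center sits at index (0, 0)).
kpad = np.zeros_like(observed)
kpad[:3, :3] = k_fit
kpad = np.roll(kpad, (-1, -1), axis=(0, 1))
corrected = np.real(np.fft.ifft2(np.fft.fft2(observed) / np.fft.fft2(kpad)))
```

The key design point this sketch shares with the paper’s approach is that the detector model is fitted on data where the truth is known (calibration stars), then inverted on science data during processing – the hardware itself is never touched.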
It worked beautifully: the star HD 206893 hosts a faint planet and the reddest known brown dwarf (an object between a star and a planet). Both were known, but they were beyond Webb’s reach before this correction was applied. Now the two faint dots appear clearly in our newly processed images.
Desdoigts et al., 2025
This correction opens the door to using AMI to search for unknown planets at resolutions and sensitivities that were previously impossible.
It doesn’t just work on points
In a companion paper led by Max Charles, a PhD student at the University of Sydney, we applied this not just to observing points of light – even when those points are planets – but to forming complex images at the highest resolution Webb has achieved. We revisited well-studied targets that push the telescope to its limits, to test its performance.
Max Charles
With the new correction, we brought Jupiter’s moon Io into focus, clearly tracking its volcanoes as it rotated over a one-hour timelapse.
As seen by AMI, the jet launched from the black hole at the center of the NGC 1068 galaxy closely matched images from much larger telescopes.
Finally, AMI sharply resolved a ribbon of dust around a pair of stars called WR 137 – a fainter cousin of the spectacular Apep system – in good agreement with theory.
The code built for AMI serves as a demonstration for far more complex cameras on Webb and its follow-up, the Roman Space Telescope. These instruments require optical calibration so fine – a fraction of a nanometer – that it exceeds what any known material can hold stable.
Our work shows that if we can measure and correct the imperfections of the hardware we have to work with, we can still hope to find Earth-like planets elsewhere in our galaxy.