Researchers capture star images from the ground and explain how they identify them
One of the most fascinating things about stars is how, since ancient times, they have been used by travelers as a reference point for navigation. Today, space scientists and engineers also use them to help identify where our satellites are in space at a given time.
Knowing the satellite’s orientation and location is crucial for Earth Observation microsatellites like Diwata-1 (decommissioned on April 6, 2020) and Diwata-2 (in orbit since October 29, 2018). To help us find exactly where they are in the Low Earth Orbit (LEO) into which they were released, the satellites carry navigation control tools housed in their approximately 50-kilogram bodies. Among their fine estimation and control navigation tools is a Star Tracker Telescope (STT), which captures pictures of stars and beams them down to our ground receiving station at DOST-ASTI, along with the other data the satellites are tasked to capture in space (such as images of Earth).
On the ground, researchers from our Building PHL-50: Localizing the Diwata-1, 2 Bus System as the Country’s Space Heritage 50 kg Microsatellite Bus (PHL-50) and Optical Payload Technology, In-depth Knowledge Acquisition, and Localization (OPTIKAL) projects also use a star camera system that they developed for capturing and identifying stars, such as the image of a portion of the Orion constellation above. These images then give us more information about the satellites’ orientation and location in relation to the stars around them.
In these images, stars appear as tiny specks to the human eye, so how do we identify them? In this feature, we take a look at how star patterns are identified, broken down into the five steps shown below.
Step 1: Preparation of the star catalog to be used
A star catalog is an astronomical list of stars and their properties. The catalog acts as a “map” or database that star trackers use to determine the spacecraft’s orientation in space. A guide star catalog is made by removing dim stars and retaining only those bright enough to be visible to the star camera.
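As a rough Python sketch, the magnitude filtering in this step might look like the following. The mini-catalog entries and the magnitude cutoff of 6.0 are purely illustrative; the actual guide catalog is built from a full astronomical catalog and the real camera’s sensitivity limit.

```python
# Hypothetical mini-catalog entries: (star ID, right ascension in deg,
# declination in deg, visual magnitude). Lower magnitude = brighter star.
CATALOG = [
    (1, 88.79, 7.41, 0.45),    # Betelgeuse
    (2, 85.19, -1.94, 1.69),   # Alnilam
    (3, 83.00, -0.30, 2.25),   # Mintaka
    (4, 84.05, -9.67, 8.10),   # a dim star the camera cannot see
]

def build_guide_catalog(catalog, mag_limit):
    """Keep only the stars bright enough to be visible to the star camera."""
    return [star for star in catalog if star[3] <= mag_limit]

guide_catalog = build_guide_catalog(CATALOG, mag_limit=6.0)
# The dim fourth star is removed; only the three bright stars remain.
```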
Step 2: Creation of a Star Triangle Database from the star catalog
From the guide star catalog, a star triangle database is built. These star triangles are used as objects of comparison during star image matching. This method is also known as feature matching, much like how our ancestors recognized star patterns by the objects and phenomena familiar to them.
We further break down the sequence above into the following steps.
The image below shows that the sides of the triangle, or “angular distances,” serve as the features we compare during the matching process.
All measured star triangles, or “triads” of stars, are stored in the database, including all possible combinations.
A list of triangles (left side of the images) is also organized to make searching easier.
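A minimal Python sketch of building such a triad database, assuming guide catalog entries of the form (ID, RA, Dec, magnitude). The function names and the choice to sort by the shortest side are our own illustration, not the project’s actual implementation.

```python
import math
from itertools import combinations

def unit_vector(ra_deg, dec_deg):
    """Celestial coordinates (RA/Dec) -> 3D unit vector."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra), math.cos(dec) * math.sin(ra), math.sin(dec))

def angular_distance(v1, v2):
    """Angle in degrees between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2))))
    return math.degrees(math.acos(dot))

def build_triad_database(guide_catalog):
    """All 3-star combinations with their sorted side lengths (angular distances)."""
    vecs = {sid: unit_vector(ra, dec) for sid, ra, dec, _ in guide_catalog}
    triads = []
    for a, b, c in combinations(vecs, 3):
        sides = sorted([
            angular_distance(vecs[a], vecs[b]),
            angular_distance(vecs[b], vecs[c]),
            angular_distance(vecs[a], vecs[c]),
        ])
        triads.append(((a, b, c), tuple(sides)))
    # Sort by shortest side so a lookup can binary-search instead of
    # scanning the whole list.
    triads.sort(key=lambda t: t[1][0])
    return triads
```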
Once the star triad database is complete, we can start capturing star images and use the database as a reference to identify the combinations of stars captured by the imaging sensor.
Step 3: Selecting, targeting, and capturing the image
For the ground tests, we select a star pattern or constellation that can be easily located. For this image, we chose the Orion constellation as the target and pointed the camera toward it. As seen here, the imager was able to capture only a portion of the constellation. Note that the resulting image depends on the hardware configuration and can be affected by external factors such as stray light from sources other than the stars (such as city lights).
Step 4: Performing star detection on the captured image
After acquiring the star image, the next step is pre-processing, which involves noise removal and star centroid extraction, since a practical image capture circuit usually produces images that contain noise. Because the test scenario is on the ground, light pollution may also affect the quality of the captured star image; this is one of the reasons cloud-free skies are ideal for stargazing and astrophotography. In this step, we should also account for the lens’ properties, which can introduce distortions.
To identify which of these spots are stars, we perform image segmentation, separating the foreground from the background; in our case, the foreground is where the star spots belong. Aside from identifying each star spot, the spots are distinguished from one another by Connected Components Labeling (CCL).
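A simplified Python sketch of the segmentation and CCL steps, using a plain brightness threshold and a 4-connected flood fill. Real pipelines typically use adaptive thresholds and optimized labeling algorithms, so treat this as illustrative only.

```python
def label_star_spots(image, threshold):
    """Threshold the image into foreground/background, then group the
    foreground pixels into distinct spots via Connected Components Labeling."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    num_spots = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and labels[y][x] == 0:
                num_spots += 1
                stack = [(y, x)]  # flood-fill one connected spot
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and image[cy][cx] > threshold and labels[cy][cx] == 0):
                        labels[cy][cx] = num_spots
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels, num_spots
```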
After the connected components (CCs) are labeled and grouped, centroids can be computed. The goal of extracting each star’s centroid is to determine the position (coordinates) of each star visible in the image plane. Essentially, we retrieve the center of each star spot based on the pixel information contained in the image. These image coordinate pairs are then converted into a form more useful for star identification (and, ultimately, attitude determination using star sensors). In this image, the computed centroids of each detected star are plotted on top of the masked image.
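The centroid computation can be sketched as an intensity-weighted mean over each labeled spot. This assumes a 2D `labels` array as produced by a CCL step; the variable names are our own.

```python
def spot_centroids(image, labels, num_spots):
    """Intensity-weighted centroid (x, y) of every labeled star spot."""
    # Per spot: [sum of intensities, sum of intensity*x, sum of intensity*y]
    sums = {k: [0.0, 0.0, 0.0] for k in range(1, num_spots + 1)}
    for y, row in enumerate(labels):
        for x, spot in enumerate(row):
            if spot:
                i = image[y][x]
                sums[spot][0] += i
                sums[spot][1] += i * x
                sums[spot][2] += i * y
    # Weighting by brightness pulls the centroid toward the spot's true
    # center, giving sub-pixel accuracy.
    return {k: (sx / s, sy / s) for k, (s, sx, sy) in sums.items()}
```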
In the image below (Figure 4e), the computed centroids of each detected star are superimposed on the original (raw) star image to visualize their locations.
Step 5: Proceeding with star identification of detected stars
With the stars detected by centroiding, the next step is to use the previously created star triad database for matching.
Initially, the criterion for selecting the first triad is the brightness of the stars. The three brightest stars (Mintaka AB, Alnitak B, Betelgeuse) are used for the first matching process.
Using the same process, the angular distance of each star pair is calculated, and the three distances are used to form a triangle.
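To compare against the database, the pixel centroids must first be converted into viewing directions. Here is a sketch using a simple pinhole-camera model; the principal point and focal length values in the test are illustrative parameters, not the actual camera calibration.

```python
import math

def pixel_to_unit_vector(u, v, cx, cy, f):
    """Pinhole-camera model: image coordinates -> viewing direction
    in the camera frame. (cx, cy) is the principal point and f is the
    focal length, all in pixels."""
    x, y, z = u - cx, v - cy, f
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

def triad_sides(centroids, cx, cy, f):
    """Sorted angular distances (degrees) between each pair of three
    detected stars -- the measured triangle's side lengths."""
    vecs = [pixel_to_unit_vector(u, v, cx, cy, f) for u, v in centroids]
    sides = []
    for i in range(3):
        for j in range(i + 1, 3):
            dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(vecs[i], vecs[j]))))
            sides.append(math.degrees(math.acos(dot)))
    return sorted(sides)
```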
Feature matching is now initiated: the star triads measured from the image are compared to all available star triads in the database.
Typically, an efficient search and matching algorithm is used to make the matches as fast and as accurate as possible.
One way of determining whether a match is found is to measure each angular distance between the triangles and compute the differences; the triad with the least total error is the closest match for the given image.
Once a match is found, the corresponding stars of that triad are identified. The star triangle database also contains the star IDs of the triads, as well as their celestial coordinates. Once the coordinates of the identified stars are collected, we can use them to compute the current attitude of the spacecraft.
Hence the final product: the identified stars in the captured star image shared at the beginning of this article.
You can also check out the star identification simulation below.