Key Takeaways:
- The initial magnitude scale was established by Greek astronomer Hipparchus around 135 B.C.E., categorizing approximately 850 stars into six ranges from 1st (brightest) to 6th (faintest) magnitude.
- Galileo Galilei's telescopic observations in 1610 revealed objects fainter than 6th magnitude and highlighted significant brightness variations among 1st-magnitude stars, necessitating an expansion of the scale.
- The need to assign magnitudes to objects brighter than Hipparchus's original 1st magnitude, such as planets, the Moon, and the Sun, led to the adoption of negative numbers within the expanded scale.
- The modern magnitude system was calibrated by Norman R. Pogson in 1856, establishing that a difference of 5 magnitudes corresponds to a 100-fold difference in brightness, and it distinguishes between apparent magnitude (observed brightness) and absolute magnitude (intrinsic luminosity at 10 parsecs).
What is the baseline for determining the magnitude scale of celestial objects? Why do brighter objects have negative numbers?
Dean Treadway
Knoxville, Tennessee
The first observer to catalog differences in star brightness was Greek astronomer Hipparchus. Around 135 B.C.E., he compiled a catalog of roughly 850 stars divided into six brightness ranges. He called the brightest 1st magnitude and the faintest 6th magnitude. Observers used this system for more than 1,500 years.

But then came Galileo Galilei. In addition to discovering the phases of Venus, Jupiter’s large moons, and more, he noted that his telescope did not simply magnify — it revealed the invisible. In 1610, Galileo extended the scale to a value no one had used before, calling the brightest stars just below naked-eye visibility “7th magnitude.”
The telescope, therefore, demanded an expansion of Hipparchus’ magnitude system, but not only on the faint end. Observers noted that 1st-magnitude stars varied greatly in brightness. And because the scale runs backward, with lower numbers meaning brighter objects, anything outshining the brightest 1st-magnitude stars had to take a value of 0 or less. So to assign magnitudes to the brightest planets, the Moon, and especially the Sun (which shines at roughly magnitude –27), scientists had to work with negative numbers.
In 1856, English astronomer Norman R. Pogson suggested astronomers calibrate all magnitudes so that a difference of 5 magnitudes equals a brightness ratio of exactly 100. (For example, a 1st-magnitude star is 100 times brighter than a 6th-magnitude one.) Each single magnitude step therefore corresponds to a factor of the fifth root of 100, or about 2.512. We still use Pogson’s formula today.
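For readers who want to run the numbers, here is a minimal Python sketch of Pogson's rule; the function name brightness_ratio is purely illustrative and not part of the original column.

```python
def brightness_ratio(mag_faint, mag_bright):
    """Brightness ratio implied by Pogson's rule: 5 magnitudes = a factor of 100."""
    return 100 ** ((mag_faint - mag_bright) / 5)

# A 6th-magnitude star compared with a 1st-magnitude star: the classic factor of 100.
print(brightness_ratio(6, 1))    # 100.0

# Stars one magnitude apart differ by the fifth root of 100, about 2.512.
print(brightness_ratio(2, 1))    # 2.5118864...
```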
Astronomers routinely use two main divisions of magnitudes to describe the same object. “Apparent magnitude” describes how bright an object looks. Back in the day, observers measured apparent magnitudes by eye. Now ultrasensitive CCD cameras provide measurements with accuracies of 0.01 magnitude.
With “absolute magnitude,” astronomers indicate how bright an object really is (its intrinsic luminosity). Two things determine this number: apparent magnitude and distance. Absolute magnitude is defined as the brightness an object would have if it were exactly 10 parsecs (32.6 light-years) from Earth. So any object closer than 32.6 light-years has an apparent magnitude brighter than its absolute magnitude; for any object farther away, the absolute magnitude is the brighter of the two.
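To make the apparent/absolute distinction concrete, here is a short Python sketch of the standard distance-modulus relation that follows from the 10-parsec definition above; the function name and the choice of Sirius as the example are illustrative, not from the original column.

```python
import math

def absolute_magnitude(apparent_mag, distance_parsecs):
    """Absolute magnitude M from apparent magnitude m and distance d (in parsecs):
    M = m - 5 * log10(d / 10)."""
    return apparent_mag - 5 * math.log10(distance_parsecs / 10)

# Sirius appears at about magnitude -1.46 and lies roughly 2.64 parsecs away.
# Because it is closer than 10 parsecs, its apparent magnitude is brighter
# (more negative) than its absolute magnitude of about +1.4.
print(round(absolute_magnitude(-1.46, 2.64), 2))   # ~1.43
```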
Michael E. Bakich
Associate Editor
This question and answer originally appeared in the March 2012 issue.
