One of the core features of Canon’s EOS system is its fast, powerful AF (autofocus) system. In this era of mirrorless cameras, the task of maintaining reliable, high-performance AF rests on the Dual Pixel CMOS AF system, which uses an image plane phase detection technology unique to Canon. How does it work? What makes it special? Read on to find out how Dual Pixel CMOS AF achieves both highly accurate AF and excellent image quality in still photography and video.
1. What is Dual Pixel CMOS AF?
Dual Pixel CMOS AF is the autofocus (AF) system used by Canon’s EOS mirrorless cameras and newer EOS DSLR cameras during Live View shooting. It uses a form of image plane phase detection technology that is unique to Canon. Under this technology, all pixels on the image sensor can conduct both phase detection and imaging. This results in the following benefits:
- Excellent image quality with no compromise to AF performance
- Wide AF coverage: up to 100% of the image area (read more here)
- Better maximisation of light information from fast (large aperture) lenses (read more here)
- Quick, seamless focusing and tracking during still and video shooting
To fully appreciate Dual Pixel CMOS AF, it helps to first understand how phase detection and image-plane phase detection usually work. If you are already familiar with them, you can skip to 4. The Dual Pixel CMOS AF system architecture to learn more about what’s unique about this technology.
2. Technical background: What’s phase detection/image-plane phase detection?
A method that calculates how much to adjust focus by comparing the differences in light coming from two different locations
Before Dual Pixel CMOS AF existed, there were two main types of phase detection AF:
- Traditional phase detection AF
- Image-plane phase detection AF
The dynamics of both are different: Traditional phase detection AF, used on DSLRs, acquires information from a separate AF sensor, whereas image-plane phase detection uses information from phase detection pixels on the image sensor.
However, they both require parallax information: information from light that comes from two different locations (=forms two slightly different parallax images). The AF system uses this parallax information to make calculations and adjust the lens focusing elements to achieve focus.
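The idea of comparing two parallax images to work out a focus adjustment can be sketched in code. The following is a toy illustration only, not Canon’s actual algorithm: it finds the pixel shift that best aligns two one-dimensional signals, which is the essence of what a phase detection system measures.

```python
# Toy sketch of phase detection (not Canon's actual algorithm):
# find the shift that best aligns two parallax signals. The size and
# sign of that shift tell the AF system how far, and in which
# direction, to move the lens focusing elements.

def best_shift(left, right, max_shift=4):
    """Return the integer shift of `right` that best matches `left`,
    scored by mean squared difference over the overlapping region."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s])
                 for i in range(len(left))
                 if 0 <= i + s < len(right)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

# The same edge seen from two viewpoints, offset by 2 pixels
left  = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
right = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0]
print(best_shift(left, right))  # → 2 (signals align when shifted by 2)
```

When the subject is in focus, the two signals coincide and the measured shift is zero; a larger shift means a larger focus adjustment is needed.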
2.1. How traditional phase detection works (DSLR cameras)
How traditional phase detection AF works
(Image is for illustration purposes only)
A: Pentaprism/pentamirror
B: Optical viewfinder (OVF) screen
C: Main mirror
D: Secondary mirror
E: AF sensor system
F: Path of light
On DSLR cameras, the light that enters through the lens (F) is diverted in two directions, upwards and downwards, by the main and secondary mirrors (C and D).
The light that is directed upwards goes through the penta-section (A) and is projected onto the OVF screen (B) as the OVF image.
The light directed downwards goes to the AF sensor system located at the bottom of the camera (E).
The AF sensor system itself includes two micro-lenses. These divide the oncoming light again to form two parallax images on the AF sensor.
Learn more about phase detection sensors on DSLR cameras in:
What is the Difference Between a Line Sensor and Cross-type Sensor?
2.2. How conventional image plane phase detection works
How conventional image plane phase detection works
When phase detection is done on the image sensor instead of an AF sensor, it is called image plane phase detection. Traditionally, the image sensors of cameras that use this method have two kinds of pixels:
- Dedicated phase detection pixels
- Imaging pixels
The parallax information is acquired by phase detection pixels working in pairs. One pixel in the pair has a photodiode (light receptor that converts light into an electrical signal) located on the left while the other pixel has it on the right. Phase detection is conducted by analysing the difference in the information from these two pixels.
AF performance improves when there are more phase detection pixels. However, this involves a trade-off with the number of imaging pixels, which could adversely affect image quality.
3. The weaknesses of conventional image plane phase detection
The zero-sum game between reliably high image quality and AF performance
The phase detection pixels on image sensors that perform conventional image plane phase detection cannot record image information. This leaves gaps in the corresponding areas that need to be filled in by interpolation: making estimates from data captured by the surrounding imaging pixels.
Information gaps that need to be filled by interpolation
Image quality is linked to the amount of interpolation involved. Interpolation is a way of filling in missing information by using surrounding data to estimate what the missing details might be. While it might be accurate, there is still room for error compared to having information from the source itself.
The usual process of generating a colour image already involves interpolation. Image sensor pixels are “colour blind”: by themselves, they can only sense and record information on the amount of light reaching them. Colour information is captured by the RGB (red, green, and blue) colour filter (Bayer filter) in front of the sensor. With the help of the Bayer filter, information on either one of the three colours is recorded for each pixel. The information from each pixel is supplemented with that from the surrounding pixels to create a full-colour image.
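The estimate-from-neighbours idea behind demosaicing can be sketched very simply. This is a deliberate simplification (real demosaicing algorithms are far more sophisticated), shown only to make the concept of interpolation concrete: a pixel that recorded no green value borrows from the green values recorded around it.

```python
# Simplified sketch of interpolation in demosaicing (real algorithms
# are far more sophisticated). On a Bayer sensor, a pixel under a red
# filter records no green value; the camera estimates it from the
# green values recorded by its neighbours.

def interpolate_green(green, x, y):
    """Estimate the missing green value at (x, y) as the mean of the
    recorded green neighbours (None marks unrecorded positions)."""
    neighbours = [green[y - 1][x], green[y + 1][x],
                  green[y][x - 1], green[y][x + 1]]
    recorded = [v for v in neighbours if v is not None]
    return sum(recorded) / len(recorded)

# 3x3 patch: green is recorded in a checkerboard (Bayer) pattern
green = [[None, 120, None],
         [118, None, 122],
         [None, 124, None]]
print(interpolate_green(green, 1, 1))  # → 121.0
```

The estimate is usually close, but it is still an estimate, which is why the amount of interpolation matters for image quality.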
With conventional image plane phase detection, the “gaps” in image information in the areas with phase detection pixels (instead of imaging pixels) also need to be filled by interpolation using information from the surrounding imaging pixels. This increases the chances of image quality being affected.
What else could be done, and why wouldn’t it work?
Under this system, there would always be a trade-off between image quality and AF performance:
- Reducing the number and density of phase detection pixels would reduce the impact on image quality, but AF performance would also suffer.
- Configuring the phase detection pixels in continuous lines (which involves increasing their number and density), would improve AF performance. However, this would mean larger missing “gaps” in imaging information that would need to be filled via interpolation.
- Increasing the number of AF points or the size of AF areas would always involve reducing the number of imaging pixels.
Dual Pixel CMOS AF is the solution that Canon developed to improve AF performance without affecting image quality: all pixels can perform both phase detection and imaging.
4. The Dual Pixel CMOS AF system architecture
Dual Pixel CMOS AF: 2 photodiodes on every pixel; all pixels can perform both phase detection and imaging
On image sensors that are designed for Dual Pixel CMOS AF, all pixels have two photodiodes as seen in the illustration. During phase detection, the data from Photodiodes A and B are read separately and compared. During imaging, the data from both photodiodes are combined and read as one complete readout.
As Photodiodes A and B are separated, they each produce an image that has a different point of view from the other (“parallax images”). The AF system analyses the differences (amount of blur), quantifies them, and uses them to compute how to move the lens so that the images match (= the subject is in focus).
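The two readout modes can be sketched schematically. This is a conceptual illustration, not actual sensor firmware: each pixel is modelled as a pair of values (A, B), read separately for phase detection and summed for imaging.

```python
# Schematic sketch of the dual-photodiode readout idea (a conceptual
# model, not actual sensor behaviour): each pixel holds two values,
# one per photodiode.

def af_readout(pixels):
    """Phase detection: read photodiodes A and B separately,
    yielding two parallax signals to compare."""
    return [a for a, b in pixels], [b for a, b in pixels]

def image_readout(pixels):
    """Imaging: combine both photodiodes into one value per pixel,
    so no image information is lost to AF."""
    return [a + b for a, b in pixels]

row = [(10, 12), (40, 44), (90, 95)]   # (A, B) per pixel, arbitrary values
left, right = af_readout(row)
print(left, right)          # → [10, 40, 90] [12, 44, 95]
print(image_readout(row))   # → [22, 84, 185]
```

The key point is that the same pixels serve both purposes: nothing is set aside for AF alone, so there are no gaps to interpolate.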
Know this: Phase detection vs contrast detection
Phase detection is faster than contrast detection, another widely used method that detects micro-contrast along edges. Unlike phase detection, contrast detection does not acquire information on distances. Instead, it analyses the image projected onto the image sensor for differences in contrast, moving the focusing unit until the contrast is the sharpest. This is much like how our brain works when we use unaided manual focus. It’s accurate, but slower. It also involves a lot more focus hunting, which can be an issue especially for video.
In comparison, phase detection can tell if the current focus point is in front of the subject (front-focused) or behind the subject (back-focused). This enables it to swiftly compute how much to move the lens’ focusing unit, resulting in quick, accurate autofocus.
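The difference in behaviour can be illustrated with a toy model. This is purely schematic: contrast detection must repeatedly nudge the lens and re-measure sharpness (the "hunting" mentioned above), while phase detection computes the direction and size of the move up front.

```python
# Toy model of contrast-detection "hunting" (purely schematic).
# The contrast metric peaks when the lens reaches the focus point.

def sharpness(lens_pos, in_focus_at=7):
    return -abs(lens_pos - in_focus_at)  # stand-in contrast metric

def contrast_af(start):
    """Hill-climb: keep nudging the lens while sharpness improves.
    Each nudge is a measure-and-move iteration."""
    pos, steps = start, 0
    for direction in (+1, -1):
        while sharpness(pos + direction) > sharpness(pos):
            pos += direction
            steps += 1
    return pos, steps

print(contrast_af(0))  # → (7, 7): seven measure-and-move iterations

# Phase detection, by contrast, knows both the sign (front- or
# back-focused) and the size of the defocus, so the lens can be
# driven to the target in a single computed move.
```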
5. Benefits of Dual Pixel CMOS AF
5.1. The best of both worlds: excellent image quality, high-performance AF
Here are the layouts of conventional image plane phase detection and Dual Pixel CMOS AF, compared. Notice how Dual Pixel CMOS AF fills in the “information gaps” in both phase detection and imaging.
Under Dual Pixel CMOS AF, all pixels on the image sensor can perform both phase detection and imaging. It paves the way for fast, precise, and flexible AF over a wide area of the image frame.
Know this: What came before Dual Pixel CMOS AF?
Before Dual Pixel CMOS AF, mirrorless cameras (including compact cameras and camcorders) used either contrast detection, traditional phase detection, or a mix of both. DSLR cameras also used contrast detection during video recording as phase detection was not possible with the mirror locked up.
Canon developed Dual Pixel CMOS AF as it foresaw growing demand for videography and that mirrorless cameras would become mainstream. It debuted in 2013 on the EOS 70D, bringing the speed and precision of phase detection AF to both still photography and video.
5.2. 100% AF coverage
When all image sensor pixels can perform phase detection, AF can be conducted over a larger area of the image. However, it cannot be done by one individual pixel alone. Subjects are detected when multiple pixels perform phase detection on image information within a given AF area. For this reason, every camera comes with a variety of AF area modes to cater to different situations.
Examples of AF area modes
For example,
- 1-point AF provides a small AF area that offers better precision over the subjects that you establish focus on, giving you greater control over your composition.
- The Expand AF area modes improve tracking performance when photographing moving subjects, as the system also uses phase detection information from the areas surrounding the 1-point AF area.
- Spot AF offers an AF area that is smaller than 1-point AF, ideal for scenes that require very precise focusing.
- Whole area AF divides the entire image area into dense AF frame zones for AF. For example, the EOS R6 Mark II has 1053 zones that cover approximately 90%×100% (horizontal × vertical) of the image frame.
Dual Pixel CMOS AF also works in tandem with the EOS iTR AF subject detection and tracking system. When a subject is detected, AF is possible over up to 100% of the image area (might vary depending on the camera model).
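The zone arithmetic in the Whole area AF example above can be sketched as follows. The grid dimensions here are illustrative only (39 × 27 is simply one factoring that yields 1053 zones, not necessarily Canon’s actual layout), and the mapping function is a hypothetical helper for the sake of the example.

```python
# Hypothetical sketch of dividing an image frame into AF zones.
# The 39x27 grid is illustrative: it is one way to get 1053 zones,
# not necessarily Canon's actual layout.

def zone_of(x, y, width, height, cols, rows):
    """Return the (col, row) AF zone containing pixel (x, y)."""
    return (min(x * cols // width, cols - 1),
            min(y * rows // height, rows - 1))

print(39 * 27)                                  # → 1053 zones
print(zone_of(3000, 2000, 6000, 4000, 39, 27))  # → (19, 13)
```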
Also see:
Is Composition Easier on a Mirrorless Camera?
5.3. Take full advantage of fast lenses
Let’s recap: Of the two parallax images used to calculate AF in the phase detection method, one is from the left photodiode and the other from the right photodiode. The images are different because the light that forms them took different paths to reach the left and right respectively.
Lenses with a large maximum aperture (“fast lenses”) have a larger aperture diaphragm (lens opening), so the gap between the path travelled to the left and right photodiodes is even bigger. This results in a greater difference between the parallax images, which contributes to more precise AF calculations.
Phase detection sensors on DSLR cameras may be described as f/2.8-sensitive, f/5.6-sensitive, and so on. This is because they are designed around light beams with specific baseline lengths, and function only with a lens whose maximum aperture is the stated f-number or faster. The smaller the f-number in “f/number-sensitive”, the more accurate the sensor. An f/5.6-sensitive sensor can therefore work with an f/2.8 lens, but it won’t be as accurate as an f/2.8-sensitive sensor. Moreover, even if you are using a lens that is faster than f/2.8 (e.g., f/1.8), an f/2.8-sensitive sensor would still conduct phase detection at the f/2.8 threshold.
There are no such restrictions with Dual Pixel CMOS AF, which can use all the necessary information gathered by the image sensor pixels. The ample light information that fast lenses allow into the camera is used to its fullest advantage, contributing to faster, more accurate AF.
Also see:
Can A Fast Lens Really Make It Easier To See Through The Viewfinder?
6. How is Dual Pixel CMOS AF II different?
Dual Pixel CMOS AF II is the version of Dual Pixel CMOS AF that supports EOS iTR AF X, Canon’s subject detection and tracking system that utilises deep learning technology.
As mirrorless cameras increasingly replace DSLR cameras as the interchangeable lens camera of choice and more people shoot both photos and video, Dual Pixel CMOS AF is one of the key technologies necessary for improving a camera’s capabilities.
7. Summing up: Key features of Dual Pixel CMOS AF
- Provides wide AF coverage, seamless focusing, and fast, reliable tracking during still and video shooting.
- Focuses seamlessly as it uses only phase detection all the way without switching to and from contrast detection.
- All pixels on the image sensor can conduct both phase detection and imaging. This achieves high-performance, high coverage AF alongside excellent image quality, with no need to compromise one for the other.
Canon Dual Pixel CMOS AF (YouTube)
Learn more about other Canon technologies in:
Canon Technology Explainer: What is DIGIC?
Nano USM: Fast and Smooth Focusing At Your Fingertips