Super-Resolution - Or Not

 

I am not alone in believing that super-resolution is possible in the optical domain. Joseph W. Goodman, in his book Introduction to Fourier Optics (1968), states:

"Until rather recently it had been widely accepted that diffraction effects represent the fundamental limits to optical system performance. To resolve beyond the classical diffraction limit was believed to be a hopeless task. Even an infinitely large lens would be limited by the evanescent-wave phenomenon to resolutions of the order of an optical wavelength. More recent work has shown, however, that for certain types of objects, resolution beyond the classical limit may indeed be feasible. In fact, as we shall show in this section, for the class of spatially bounded objects, it is in principle possible to resolve with infinite precision."

When the above was written, limited super-resolution had been demonstrated with acoustic and microwave technology, but not yet in the optical domain. Unfortunately, I have not kept up with optical achievements and cannot say with authority what has or has not been achieved in the years since. I can, however, pass along that in acoustics and radar there is usually a penalty in terms of increased noise and reduced signal level. I thus expect that a super-resolution optical lens, if it could be built, would also have, in effect, a built-in neutral density filter as well as possibly a higher flare level than conventional lenses. I would expect realistic resolution improvements on the order of a factor of two or three to be practical. Whether it is possible, or worth the expense, remains to be seen.

There are at least three other methods that are practical for enhancing resolution. Whether or not these methods constitute super-resolution is a matter of interpretation. In order of practicality and ease of implementation, these are apodization, spatial filtering and de-convolution. These techniques are not mutually exclusive; it is quite possible to use one, two or all three of these methods in concert. I personally believe that the combination of apodization and spatial filtering holds significant promise.

A photographer might be forgiven for thinking that "apodization" is some means to make a lens apochromatic and thus eliminate colour fringing. That is not what it means. The word derives from the Greek and means "without foot". The term is used to describe any means of suppressing the level of the rings of the Airy function, thereby reducing lens flare and improving image contrast. It is accomplished quite simply by replacing the standard iris diaphragm (or other form of lens stop) with a suitably graduated neutral density filter. This filter is usually clear in the center but increases in density towards the edges. The consequence of this form of filtering is that the shape of the Airy function is changed: there is a broadening of the central portion, but a suppression of the surrounding rings. The suppression can be quite dramatic: a factor of 100 is quite possible. The normal consequence is that the conventional limit of resolution is slightly reduced, but for detail within that limit, contrast is vastly improved. It is absolutely routine in the radar and acoustic domains to use this process, although it is generally called "shading" or "windowing" in those fields. So far as I am aware there is only one commercially produced lens using this technique, although it is used to improve "bokeh" (pleasant out-of-focus images) rather than to improve resolution or contrast. That one lens is the Minolta/Sony 135/2.8 STF.
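The effect of apodization on the diffraction pattern is easy to demonstrate numerically. The sketch below is a one-dimensional analogue using an assumed Hann ("raised cosine") taper, not the density profile of any real lens; it compares the far-field pattern of a uniform aperture with that of the apodized one. The first sidelobe (the "foot") drops from roughly -13 dB to roughly -31 dB, at the cost of a wider central lobe:

```python
import numpy as np

N = 1024                      # samples across the aperture
pad = 16 * N                  # zero-padding for a smooth far-field pattern

# Uniform (conventional) aperture vs. a Hann-apodized one
uniform = np.ones(N)
hann = np.hanning(N)

def far_field_db(aperture):
    """Far-field intensity pattern (dB, peak-normalized) via FFT."""
    field = np.fft.fftshift(np.fft.fft(aperture, pad))
    intensity = np.abs(field) ** 2
    return 10 * np.log10(intensity / intensity.max())

def peak_sidelobe_db(pattern_db):
    """Highest sidelobe: largest value beyond the main lobe's first null."""
    i = np.argmax(pattern_db)
    # walk outward from the peak until the pattern first rises again
    while i + 1 < len(pattern_db) and pattern_db[i + 1] < pattern_db[i]:
        i += 1
    return pattern_db[i:].max()

print(peak_sidelobe_db(far_field_db(uniform)))  # about -13 dB
print(peak_sidelobe_db(far_field_db(hann)))     # about -31 dB
```

The 18 dB improvement here is illustrative only; stronger tapers suppress the rings further, at the price of a still broader central lobe and more light loss.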

As an aside, specially shaped apertures have been used for years. Back in the days when "screening a photograph" meant using a wire screen, special lens stops were used to make the process work better. And, many of us are familiar with the "sink-strainer" stops used in some soft-focus lenses. The extra holes not only help to soften the image, they also increase the flare level, reducing overall contrast.

In the context of resolution enhancement, apodization should make it possible to design a set of lens stops that retains the resolution of the optimal lens opening at smaller openings, while at the same time improving contrast, depth-of-field, and bokeh relative to using the lens at that optimal aperture. The gain in depth of field is not as great as for stopping down the lens conventionally, but since there is little or no penalty for stopping down, we can simply stop down further, suffering light loss in so doing, of course. There will also be trade-offs among resolution retention, contrast improvement and the quality of the bokeh. There is an important point here: the fact that diffraction blurring gets worse as we stop down is not a necessary consequence of stopping down a lens. Rather, it is a consequence of the way in which we do it. It is a phenomenon driven by the state of technology as much as by basic physics.

Spatial filtering generally means manipulating the images we already have in such a way as to improve them. In some ways we all do this, and have been doing it for decades. A simple example is the retouching of dust marks: dust marks are often too sharp or too small to have been produced by our lens, so we know they don't belong to the real image, and so we spot them out. More sophisticated filtering operations are used to remove half-tone patterns and the like. If we can define what should or should not be allowed in the structure of an image, it is probably possible to filter the unwanted details out and enhance the detail we do want. If detail has been suppressed by our lens, and if we know precisely how the lens did this, it may well be possible to restore the image using a suitably designed "inverse filter". The nature of the Airy disk is that for spatial frequencies (lines per millimeter) above some threshold, there is a progressive reduction of image contrast for finer and finer-scale (higher spatial frequency) detail. We can analyse our image mathematically to determine its frequency content, design an inverse filter to restore the higher-frequency components to their "proper" level, and then re-create the image. At some spatial frequency the process will start to degrade, because the high-frequency "noise" in our image gets amplified and this added noise will spoil the image. Many of us have noticed this when we use "sharpening" in Photoshop (or another image editor) too much or too often. The optimal resolution enhancement filter will "know" when to stop enhancing the higher spatial frequencies. Spatial filtering also has potential pitfalls. The spatial filter inherent in the lens can have "zeros" within its passband. That is, there may be some specific spatial frequency or frequencies that the lens simply blocks. At these frequencies our image will contain only noise, and any attempt to "restore" them will spoil the image.
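The inverse-filter idea can be sketched in a few lines. The "lens" response, noise level, and Wiener-style restoring filter below are all illustrative assumptions, not a model of any real system; the point is that the restoring filter divides out the lens response where the signal dominates, but stops boosting where noise would take over. Two closely spaced features that the blur nearly merges are separated again:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
# Two closely spaced features that the "lens" will almost merge together
scene = np.exp(-((t - 150) / 6.0) ** 2) + 0.7 * np.exp(-((t - 170) / 6.0) ** 2)

freqs = np.fft.fftfreq(n)
H = np.exp(-(freqs / 0.05) ** 2)           # assumed transfer function of the "lens"
blurred = np.fft.ifft(np.fft.fft(scene) * H).real
noisy = blurred + 1e-3 * rng.standard_normal(n)

# A naive inverse filter divides by H and explodes where H is tiny.
# A Wiener filter adds a noise-dependent floor, so it "knows when to stop":
nsr = 1e-4                                 # assumed noise-to-signal power ratio
wiener = H / (H ** 2 + nsr)                # H is real and even in this sketch
restored = np.fft.ifft(np.fft.fft(noisy) * wiener).real

print(np.abs(noisy - scene).max())         # blur error: large
print(np.abs(restored - scene).max())      # restoration error: much smaller
```

The `nsr` floor is exactly the "knowing when to stop" in the text: where the lens response falls toward the noise level, the filter's gain rolls off instead of growing without bound.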

Diffraction softening is a gradual, well-behaved process (it usually contains no zeros), and so it should be possible to undo degradation due to diffraction with spatial filtering, so long as noise has not masked any of the spatial frequencies used in the image. Digital images are noticeably 'cleaner' (freer of noise) than typical film images. Thus it should be easier to remedy diffraction-softened digital images than it was to correct film images. Using the unsharp mask tool is one way to increase the high spatial-frequency components in an image. It thus can be used to recover from at least some of the effects of diffraction. It is not necessarily the optimal filter, but it does appear to do at least part of the job.
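The frequency behaviour of unsharp masking can be made explicit. Writing the operation as output = input + amount × (input − blurred input), its gain is 1 + amount × (1 − G(f)), where G(f) is the frequency response of the blur used to make the mask. A short sketch, with an assumed Gaussian blur and purely illustrative settings:

```python
import numpy as np

n = 256
k = 1.5                                   # unsharp-mask "amount" (illustrative)
sigma = 3.0                               # radius of the blur used for the mask

# Gain of unsharp masking: out = in + k * (in - blurred_in),
# i.e. gain(f) = 1 + k * (1 - G(f)), with G(f) the Gaussian blur response.
f = np.fft.rfftfreq(n)
G = np.exp(-2 * (np.pi * sigma * f) ** 2)
gain = 1 + k * (1 - G)

print(gain[0])        # 1.0 -> low frequencies untouched
print(gain[-1])       # 2.5 -> highest frequencies boosted by (1 + k)
```

The gain rises smoothly with spatial frequency, which is why the tool can partially counteract diffraction's smooth roll-off; but it rises without regard to the noise level, which is why overdoing it amplifies grain and sensor noise.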

It should be noted that "noise" derives from many sources: electrical noise in our digital image sensors and related electronics, incoherent light in the scene being photographed, defects in our lens and image sensor, dust on the sensor or in the lens, subject or camera movement etc.

De-convolution is another form of spatial filter that looks for prescribed shapes in an image and replaces them with some other prescribed shape. The filter might look for Airy disks, for example, and replace them with white dots of equivalent total brightness. If successful, this process should sharpen an image. A significant difficulty is that a two-dimensional image of a three-dimensional scene is not unique. Different three-dimensional scenes can produce identical two-dimensional images. When we look at a TV newscaster standing in front of the White House, we often cannot tell whether she really is standing in front of the real White House, an image of the White House, or even a model of it. Similarly, if we ask a de-convolution filter to restore an out-of-focus picture of a man holding an unsharp photograph, the filter may try to restore that unsharp photograph as well as the image of the man. We get a sharp image, but it is not the correct image. We will get better results if we can tell the filter precisely what we want it to do, though even then there may be more than one possible interpretation of an image. De-convolution generally works best at tasks like removing the effects of camera movement, where the original degradation was applied uniformly over the whole image. I personally do not hold great expectations for de-convolution in general, although there are relatively simple cases where it does appear to work well.
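The camera-movement case, and the "zeros" pitfall raised earlier, can both be seen in one small sketch. A uniform motion blur is a box-shaped convolution whose transfer function has exact zeros (at multiples of one over the blur length), so a naive inverse filter is impossible; a regularized, Wiener-style inverse, with illustrative parameters assumed below, still recovers most of the image:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
L = 8                                     # assumed blur length in pixels
t = np.arange(n)
scene = np.exp(-((t - 90) / 5.0) ** 2) + 0.7 * np.exp(-((t - 110) / 5.0) ** 2)

psf = np.zeros(n)
psf[:L] = 1.0 / L                         # uniform "camera shake" along one axis
H = np.fft.fft(psf)

blurred = np.fft.ifft(np.fft.fft(scene) * H).real \
          + 1e-3 * rng.standard_normal(n)

# The box-shaped blur has exact spectral zeros: at those frequencies the
# blurred image carries no information, and 1/H would divide by zero.
print(np.abs(H).min())                    # effectively 0

# A regularized inverse suppresses those frequencies instead of boosting them:
restored = np.fft.ifft(np.fft.fft(blurred) * np.conj(H)
                       / (np.abs(H) ** 2 + 1e-3)).real

print(np.abs(blurred - scene).max())      # blur error: large
print(np.abs(restored - scene).max())     # restoration error: small
```

This works well here because the degradation is uniform and exactly known; the hard part of real de-convolution is that neither is usually true.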

Joseph Goodman, in the book cited previously, notes that spatial filtering has been used since about 1950 to "de-blur" photographs degraded by motion blur and focus errors. This is a minor subject for the book, however.

My best hope for image enhancement in the near future is a combination of apodization (of the right type) and spatial filtering. My proposal would be to use lens stops that emphasize high spatial frequencies in a controlled way and then, after the photograph is taken, use a spatial filter to restore the proper frequency balance in the image. This is in some ways analogous to the "Dolby" noise reduction scheme for tape recorders. The Dolby scheme dynamically enhances high-frequency content in a sound signal. It does this in a way that a) does not distort the signal to the extent that it is disruptive to a potential listener, and b) can be completely undone dynamically by the right sort of signal processor. My proposal would similarly use apodization to produce an image that is useable as is, but then apply a spatial filter that makes the image more faithful to the original scene.
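The principle behind that analogy can be sketched numerically: boost the high frequencies before the noise enters, undo the boost exactly afterwards, and the noise is attenuated along with the de-emphasis while the signal comes back unchanged. The linear boost profile and noise levels below are arbitrary illustrations, not Dolby's actual curves:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
f = np.fft.rfftfreq(n)
boost = 1 + 9 * f / f.max()        # pre-emphasis: up to 10x at high frequencies

signal = rng.standard_normal(n)    # stand-in for the recorded signal

def add_noise_then_deemphasize(pre_emphasized):
    noisy = pre_emphasized + 0.1 * rng.standard_normal(n)  # noise enters here
    spec = np.fft.rfft(noisy) / boost                      # exact de-emphasis
    return np.fft.irfft(spec, n)

plain = signal + 0.1 * rng.standard_normal(n)              # no pre-emphasis
emphasized = np.fft.irfft(np.fft.rfft(signal) * boost, n)
recovered = add_noise_then_deemphasize(emphasized)

print(np.std(plain - signal))      # noise level without the scheme
print(np.std(recovered - signal))  # noise level with it: clearly lower
```

The catch, for both tape and lenses, is that the noise must enter *after* the boost; noise already present in the scene gains nothing.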

Now let's reconsider whether we are surpassing the Rayleigh limit or not. I argue that the Rayleigh limit is a moving target: we can use technology to move the limit, but there is always a limit determined by the "noise" in our imaging system and the technology that can be applied to minimize its impact. The Rayleigh diffraction resolution limit relates the resolution that can be achieved to the shape of the image of a point source and the level of contrast needed to detect the presence of a second, equal-intensity point source. The amount of noise in our imaging system, as well as our ability to measure light intensity accurately, sets the minimum level of contrast needed to permit the detection and location of that second point source. The contrast required, together with the shape of the image of a single point source, sets the shape and useful range of the frequency response of our imaging system, and hence what resolution can be achieved. Technology determines, within some ultimate limit set by quantum mechanics, what can be achieved. The "Rayleigh limit" is not an absolute limit in a physical sense. Rather, it tells us where the limit is given the state of our technology. I'm guessing here, but I expect quantum mechanics will in turn set the limits in terms of the amount of light energy available and the amount of time available to make the image.
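For reference, the classical criterion this moving target starts from is simple to compute: the Rayleigh two-point separation at the image plane is about 1.22 times the wavelength times the f-number. A quick calculation for green light, at a few common apertures, shows the classical limit tightening as we stop down:

```python
# Rayleigh two-point separation at the image plane for a diffraction-limited
# lens: d = 1.22 * wavelength * N, where N is the f-number.
wavelength_mm = 550e-6        # green light, 550 nm, in millimetres

for N in (2.8, 8, 22):
    d_mm = 1.22 * wavelength_mm * N
    print(f"f/{N}: {d_mm * 1000:.1f} um  (~{1 / d_mm:.0f} line pairs/mm)")
```

These figures assume the conventional contrast threshold built into Rayleigh's criterion; the argument above is precisely that with lower noise, or cleverer filtering, that threshold, and hence the limit, moves.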

 

References

Joseph W. Goodman, Introduction to Fourier Optics, McGraw-Hill, New York, 1968.

Athanasios Papoulis, Systems and Transforms with Applications in Optics, McGraw-Hill, New York, 1968.

 
