Introduction
Digital image processing is a field characterized by the need for extensive experimental work
to establish the viability of advanced solutions to a given problem. In this chapter, we outline
how a theoretical foundation and state-of-the-art software can be integrated into a prototyping
environment whose objective is to provide a set of well-supported tools for the solution of a
broad class of problems in digital image processing.
The theoretical foundations of the material in the following chapters are based on the
leading textbook in the field: Digital Image Processing, by Gonzalez and Woods. The software code and supporting tools are based on the leading software in the field: MATLAB® and the Image Processing Toolbox™ from The MathWorks, Inc. Familiarity with MATLAB and the Image Processing Toolbox is assumed. Because MATLAB is a matrix-oriented language, basic knowledge of matrix analysis is helpful.
What is Digital Image Processing?
An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term used most widely to denote the elements of a digital image.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can also operate on images generated by sources that humans do not customarily associate with images, including ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, begin. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image would not be considered an image processing operation. On the other hand, there are fields, such as computer vision, whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI), whose objective is to emulate human intelligence. The field of AI is in its infancy in terms of practical developments, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.
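As a minimal sketch of this idea (assuming MATLAB with the Image Processing Toolbox; 'pout.tif' is a grayscale demo image that ships with the toolbox), a digital image is simply a matrix of finite, discrete intensity values:

    f = imread('pout.tif');   % read a grayscale demo image into a matrix
    [M, N] = size(f)          % number of rows and columns (spatial extent)
    v = f(25, 50)             % gray level of the pixel at row 25, column 50
    class(f)                  % uint8: each pixel is a discrete 8-bit value
    imshow(f)                 % display the digital image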
The term gray level is often used to refer to the intensity of monochrome images. Colour images are formed by a combination of individual images. For example, in the RGB colour system, a colour image consists of three individual monochrome images, referred to as the red (R), green (G), and blue (B) primary (or component) images. For this reason, many of the techniques developed for monochrome images can be extended to colour images by processing the three component images individually.
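The sketch below (again assuming the Image Processing Toolbox; 'peppers.png' is a demo image included with MATLAB) extracts the three component images and applies a monochrome technique to each one individually:

    rgb = imread('peppers.png');   % colour demo image included with MATLAB
    R = rgb(:, :, 1);              % red component image
    G = rgb(:, :, 2);              % green component image
    B = rgb(:, :, 3);              % blue component image
    % apply a monochrome technique (histogram equalization) per channel:
    out = cat(3, histeq(R), histeq(G), histeq(B));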
Origin of Digital Image Processing
One of the first applications of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York. In the 1920s, the Bartlane cable picture transmission system reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Specialized printing equipment coded pictures for cable transmission and then reconstructed them at the receiving end. The early Bartlane systems were capable of coding images in five distinct levels of gray; this capability was increased to 15 levels in 1929. The history of digital image processing is intimately tied to the development of the digital computer. In fact, digital images require so much storage and computational power that progress in the field has been dependent on the development of digital computers and of supporting technologies, including digital storage, display, and transmission.
Applications & Fields that use Digital Image Processing
Today, there is almost no area of technical endeavour that is not impacted in some way by digital image processing. We can cover only a few of these applications in the context and space of the current discussion. However, limited as it is, the material presented in this section will leave no doubt in the reader's mind regarding the breadth and importance of digital image processing. We show in this section numerous areas of application, each of which routinely utilizes the digital image processing techniques developed in the following chapters. Many of the images shown in this section are used later in one or more of the examples given in the book. All images shown are digital.
The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g. visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modelling and visualization, are generated by the computer. In this section, we discuss briefly how images are generated in these various categories and the areas in which they are applied. Methods for converting images into digital form are discussed in the next chapter.
Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each travelling in a wavelike pattern and moving at the speed of light.
A few of the applications are as follows:
1. Gamma Ray Imaging: Major uses of imaging based on gamma rays include nuclear medicine and astronomical observations. In nuclear medicine, the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions collected by gamma ray detectors.
2. X-Ray Imaging: X-rays are among the oldest sources of EM radiation used for imaging. The best-known use of X-rays is medical diagnostics, but they are also used extensively in industry and other areas, such as astronomy. X-rays for medical and industrial imaging are generated using an X-ray tube, which is a vacuum tube with a cathode and an anode. In digital radiography, digital images are obtained by one of two methods: (1) by digitizing X-ray films, or (2) by having the X-rays that pass through the patient fall directly onto devices (such as a phosphor screen) that convert X-rays to light. The light signal, in turn, is captured by a light-sensitive digitizing system. Angiography is another major application, in an area called contrast-enhancement radiography. This procedure is used to obtain images of blood vessels.
3. Imaging in the Ultraviolet Band: Ultraviolet light is used in fluorescence microscopy, one of the fastest-growing areas of microscopy. Fluorescence microscopy is an excellent method for studying materials that can be made to fluoresce, either in their natural form or when treated with chemicals capable of fluorescing.
4. Imaging in the Visible and Infrared Bands: Because the visual band of the electromagnetic spectrum is the most familiar in all our activities, it is not surprising that imaging in this band outweighs all the others by far in terms of scope of application. The infrared band is often used in combination with visual imaging, so we have grouped the visible and infrared bands together.
5. Imaging in the Microwave Band: The dominant application in the microwave band is radar. The unique feature of imaging radar is its ability to collect data over virtually any region at any time, regardless of weather or ambient lighting conditions.
6. Imaging in the Radio Band: Radio waves are used in magnetic resonance imaging (MRI).
Fundamental Steps in Digital Image Processing
The organization of the fundamental steps in digital image processing is summarized in the diagram below. The diagram does not imply that every process is applied to an image. Rather, the intention is to convey an idea of all the methodologies that can be applied to images for different purposes and possibly with different objectives.
Image acquisition is the first process shown in the figure. Image acquisition is the process of converting an image into digital form. However, acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves pre-processing, such as scaling.
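In the simplest case, acquisition amounts to reading an already-digitized image from a file, optionally followed by scaling (a minimal sketch; 'myimage.jpg' is a hypothetical file name):

    f = imread('myimage.jpg');   % acquire an image that is already in digital form
    f = imresize(f, 0.5);        % typical pre-processing: scale to half the original size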
Image Enhancement is among the simplest and most appealing areas of digital image processing. The idea behind enhancement techniques is to bring out details that are obscured or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because it looks better.
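A contrast-stretching sketch using toolbox routines (assuming the Image Processing Toolbox and its low-contrast demo image 'pout.tif'):

    f = imread('pout.tif');               % low-contrast demo image
    g = imadjust(f, stretchlim(f), []);   % stretch intensities to the full display range
    h = histeq(f);                        % alternative: histogram equalization
    imshowpair(f, g, 'montage')           % compare original and enhanced images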
Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a good enhancement result.
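One illustration of this model-based character (a sketch assuming the Image Processing Toolbox) simulates a known degradation and then restores the image with an adaptive Wiener filter, which is derived from a statistical noise model:

    f = im2double(imread('cameraman.tif'));  % grayscale demo image
    g = imnoise(f, 'gaussian', 0, 0.01);     % model the degradation: additive Gaussian noise
    r = wiener2(g, [5 5]);                   % restore with a local adaptive Wiener filter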
Colour image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet.
Wavelets are the foundations for representing images in various degrees of resolution.
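For instance, a one-level 2-D discrete wavelet transform splits an image into a half-resolution approximation plus detail images (a sketch assuming MATLAB's Wavelet Toolbox is installed):

    f = im2double(imread('cameraman.tif'));
    [cA, cH, cV, cD] = dwt2(f, 'haar');   % one-level 2-D discrete wavelet transform
    % cA: half-resolution approximation; cH, cV, cD: horizontal, vertical,
    % and diagonal detail coefficients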
Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. Image compression is familiar to most users in the form of image file extensions, such as .jpg, which indicates the JPEG (Joint Photographic Experts Group) image compression standard.
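The effect is easy to observe by rewriting a demo image as a low-quality JPEG and comparing file sizes (a sketch; the output file name is arbitrary):

    src = which('peppers.png');                     % locate the demo image on the MATLAB path
    f = imread(src);
    imwrite(f, 'peppers_q25.jpg', 'Quality', 25);   % lossy JPEG at a low quality setting
    orig = dir(src); comp = dir('peppers_q25.jpg');
    fprintf('original: %d bytes, JPEG: %d bytes\n', orig.bytes, comp.bytes)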
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape.
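Erosion and dilation, the two basic morphological operations, look like this in toolbox terms ('circles.png' is a binary demo image that ships with the toolbox):

    bw = imread('circles.png');   % binary demo image
    se = strel('disk', 10);       % disk-shaped structuring element
    er = imerode(bw, se);         % erosion shrinks the foreground components
    di = imdilate(bw, se);        % dilation grows them
    edge_bw = bw & ~er;           % simple boundary extraction via morphology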
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing.
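A classic example is global thresholding with Otsu's method (assuming the toolbox's 'coins.png' demo image):

    f = imread('coins.png');    % grayscale image of coins on a dark background
    t = graythresh(f);          % Otsu's method selects a global threshold
    bw = imbinarize(f, t);      % partition into foreground (coins) and background
    bw = imfill(bw, 'holes');   % fill holes inside the segmented coins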
Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e. the set of pixels separating one image region from another) or all the points in the region itself.
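Continuing the segmentation sketch above, boundaries and simple regional descriptors can be extracted as follows:

    f = imread('coins.png');
    bw = imfill(imbinarize(f, graythresh(f)), 'holes');        % segmented binary image (as above)
    B = bwboundaries(bw);                                      % boundary of each region
    stats = regionprops(bw, 'Area', 'Perimeter', 'Centroid');  % regional descriptors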
Recognition is the process that assigns a label (e.g. "vehicle") to an object based on its descriptors.
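A toy recognition step might assign a label to each segmented region based on one descriptor (the area threshold of 2000 pixels is an arbitrary, hypothetical rule for the coins image above):

    f = imread('coins.png');
    bw = imfill(imbinarize(f, graythresh(f)), 'holes');   % segmented coins (as above)
    stats = regionprops(bw, 'Area');                      % one descriptor per object
    labels = strings(numel(stats), 1);
    for k = 1:numel(stats)
        if stats(k).Area > 2000                           % hypothetical decision rule
            labels(k) = "large coin";
        else
            labels(k) = "small coin";
        end
    end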
Concepts of an Image Processing System
Until the mid-1980s, numerous models of image processing systems being sold throughout the world were rather substantial peripheral devices that attached to an equally substantial host computer. Late in the 1980s and early in the 1990s, the market shifted to image processing hardware in the form of single boards designed to be compatible with industry-standard buses and to fit into engineering workstation cabinets and personal computers.
The figure below shows the basic components of a typical general-purpose system used for digital image processing. The function of each component is discussed below, starting with image sensing.
With reference to Image Sensing, two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to capture as an image. The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form. For instance, in a digital video camera, the sensors produce an electrical output proportional to light intensity. The digitizer converts these outputs to digital data.
The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer.
Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules.
Mass storage capability is a must in image processing applications. An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed (1024 x 1024 pixels x 1 byte per pixel = 2^20 bytes = 1 MB). When dealing with thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge.
Image displays in use today are mainly colour TV monitors. In some cases, it is necessary to have stereo displays and these are implemented in the form of headgear containing two small displays embedded in goggles worn by the user.
Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.
Elements of Visual Perception
Although the digital image processing field is built on a foundation of mathematical and probabilistic formulations, human intuition and analysis play a central role in the choice of one technique versus another, and this choice often is made based on subjective, visual judgements. Hence, developing a basic understanding of human visual perception as the first step in our journey through this book is appropriate. Given the complexity and breadth of this topic, we can only aspire to cover the most rudimentary aspects of human vision. In particular, our interest lies in the mechanics and parameters related to how images are formed in the eye. We are interested in learning the physical limitations of human vision in terms of factors that also are used in our work with digital images. Thus, factors such as how human and electronic imaging compare in terms of resolution and the ability to adapt to changes in illumination are not only interesting, they are also important from a practical point of view.
Structure of the Human Eye
The diagram below shows a simplified horizontal cross-section of the human eye. The eye is nearly a sphere, with an average diameter of approximately 20 mm. Three membranes enclose the eye: the cornea and sclera outer cover; the choroid; and the retina. The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.
The choroid lies directly below the sclera. This membrane contains a network of blood vessels that serve as the major source of nutrition to the eye. Even superficial injury to the choroid, often not deemed serious, can lead to severe eye damage as a result of inflammation that restricts blood flow. The choroid coat is heavily pigmented and hence helps to reduce the amount of extraneous light entering the eye and the backscatter within the optical globe. At its anterior extreme, the choroid is divided into the ciliary body and the iris diaphragm. The latter contracts or expands to control the amount of light that enters the eye. The central opening of the iris (the pupil) varies in diameter from approximately 2 to 8 mm. The front of the iris contains the visible pigment of the eye, whereas the back contains a black pigment.
The lens is made up of concentric layers of fibrous cells and is suspended by fibres that are attached to the ciliary body. It contains 60% to 70% water, about 6% fat and more protein than any other tissue in the eye. The lens is coloured by a slightly yellow pigmentation that increases with age. In extreme cases, excessive clouding of the lens, caused by the affliction commonly referred to as cataracts, can lead to poor colour discrimination and loss of clear vision. The innermost membrane of the eye is the retina, which lines the inside of the wall's entire posterior portion. There are two classes of receptors: cones and rods. The cones in each eye number between 6 and 7 million. The number of rods is much larger: Some 75 to 150 million are distributed over the retinal surface.
Cones are most dense in the center of the retina (in the central area, called the fovea), whereas rods increase in density from the center out to approximately 20 degrees off axis and then decrease in density out to the extreme periphery of the retina.
Image Formation in the Eye
The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible. The distance between the center of the lens and the retina (called the focal length) varies from approximately 17 mm to about 14 mm, as the refractive power of the lens increases from its minimum to its maximum. When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power. When the eye focuses on a nearby object, the lens is most strongly refractive. This information makes it easy to calculate the size of the retinal image of any object.
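For example, suppose an observer looks at a tree 15 m high at a distance of 100 m. Since the object is farther away than 3 m, the focal length is approximately 17 mm, and by similar triangles the height h of the retinal image satisfies 15/100 = h/17, giving h = 15 x 17/100, or approximately 2.55 mm.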