Lecture Notes
Why studying the human eye is important:
- Mimicking visual processes
- Image interpretation and design
- Quality
- Color Processing
- Avoiding Artifacts
- Human-Computer Interaction
Key Components of the Eye:
- Lens and Muscles - Focuses light onto the retina
- Retina - Contains photoreceptor cells (rods and cones). It is a light-sensitive layer at the back of the eye. Rods and cones are essentially the ‘sensors’ that receive the light.
NOTE: The iris adjusts how much light enters the eye through the pupil; color vision itself comes from the cones.
- Cones - responsible for color vision.
- Rods - aid vision in dark/dim environments (scotopic vision). They do not perceive color.
- Fovea - focuses on the main object; the surrounding environment of the object is blurred out.
Photopic Vision - aids vision in very bright environments.
What is needed for Image Acquisition?
- Illumination (light) Source
- A scene
- Imaging System
- Projection of the scene onto the image plane
- A digitized image
Process of Image Sensing
- Sensors capture the scene’s visual information
- Sensor absorbs light energy. Sensor then creates electrical charges.
- Sensor then converts the charges into a voltage signal.
- Voltage is then converted into digital data through an Analog-to-Digital Converter (ADC)
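The ADC step above can be sketched in Python. This is a toy illustration of quantizing a voltage waveform into digital codes, not a real device driver; the function name `adc` and its reference-voltage/bit-depth parameters are assumptions made for the example.

```python
import numpy as np

def adc(voltages, v_ref=1.0, bits=8):
    """Quantize an analog voltage waveform into digital codes.

    v_ref is the converter's full-scale input voltage; bits is the
    resolution, giving 2**bits discrete output levels.
    """
    levels = 2 ** bits
    # Clamp to the converter's input range, then map [0, v_ref] -> [0, levels-1]
    clipped = np.clip(voltages, 0.0, v_ref)
    codes = np.round(clipped / v_ref * (levels - 1)).astype(int)
    return codes

# A sensor voltage ramp from 0 V to 1 V becomes codes in 0..255 at 8 bits.
ramp = np.linspace(0.0, 1.0, 5)
codes = adc(ramp)
```

Note that a higher `bits` value gives finer amplitude resolution, which connects directly to the quantization step discussed later.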
NOTE: IMPORTANT TO REMEMBER!!!
Components of Image Formation Function
- Illumination - represents the amount of light on the scene
- Reflection - represents the illumination that objects reflect.
Formula: $f(x,y) = i(x,y) * r(x,y)$
Converting Analog to Digital: Analog Image -> Sampling(x,y) -> Quantization(f) -> Digital Image
NOTE: Sampling converts a continuous image into a discrete one. Quantization maps continuous pixel values to a finite set of values (these finite values are the gray/intensity levels).
As the name suggests, in the sampling stage several samples are taken from the image to get their intensities. These samples are then mapped/represented on a grid to convert the image into a digital one.
NOTE: The more samples taken from an image, the better the quality of the resulting digital image.
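The effect of the number of samples can be sketched with NumPy, treating a finely gridded array as a stand-in for the continuous image; `sample_image` and its `step` parameter are illustrative names, not from the notes.

```python
import numpy as np

def sample_image(continuous, step):
    """Sample a 'continuous' image (here, a finely gridded array)
    by keeping every `step`-th value along x and y."""
    return continuous[::step, ::step]

# A fine 512x512 horizontal gradient stands in for the continuous scene.
fine = np.tile(np.linspace(0.0, 1.0, 512), (512, 1))
coarse = sample_image(fine, 8)   # fewer samples -> 64x64 grid
finer = sample_image(fine, 2)    # more samples  -> 256x256 grid
```

The `finer` result keeps far more spatial detail than `coarse`, matching the note that more samples yield a better digital image.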
Reading
The front of the iris contains the eye's visible pigment; the back contains a black pigment
Whenever the eye fixes on an object, light from that object is imaged on the retina.
A retina has receptors, and there are two types of receptors:
- cones
- rods
fovea - the central portion of the retina; it is sensitive to color
Photopic/bright-light vision - aids vision in bright environment (is called cone vision)
Scotopic/dim-light vision - aids vision in dim/dark environments. only the rods are stimulated, which is why objects appear colorless.
Perceived brightness is not a simple function of intensity; two phenomena demonstrate this:
- The visual system undershoots or overshoots around the boundaries of regions with different intensity levels (Mach bands).
- Simultaneous contrast: a region's perceived brightness does NOT depend only on its intensity; it also depends on the intensity of the surrounding areas.
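Simultaneous contrast can be demonstrated by constructing the classic test image: the same gray square placed on two different backgrounds. A minimal NumPy sketch; the function and parameter names are made up for illustration.

```python
import numpy as np

def contrast_stimulus(bg, square=0.5, size=64, inner=16):
    """Build a simultaneous-contrast test image: a mid-gray inner
    square centered on a background of intensity `bg` (range 0..1)."""
    img = np.full((size, size), bg, dtype=float)
    lo = (size - inner) // 2
    img[lo:lo + inner, lo:lo + inner] = square
    return img

on_dark = contrast_stimulus(bg=0.1)
on_light = contrast_stimulus(bg=0.9)
# Both center squares have identical intensity (0.5), yet the one on the
# dark background is typically perceived as brighter by a human viewer.
```

Displaying the two arrays side by side (e.g. with any image viewer) makes the perceptual difference visible even though the pixel data at the centers is identical.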
The colors perceived in an object are determined by how the object reflects light.
Monochromatic (achromatic) light - light that is void of color; its only attribute is intensity. Shades include black, white, and gray.
Chromatic light - spans the electromagnetic energy spectrum. Three quantities describe chromatic light:
- radiance - total amount of energy that flows from the light source
- luminance - measure of amount of energy an observer perceives from a light source
- brightness - descriptor of light perception. embodies the achromatic notion of intensity
The term ‘gray level’ denotes the monochromatic intensity.
Image Sensing Process:
- Incoming energy is transformed into a voltage by a combination of input electrical power and a sensor material that is responsive to the type of energy being detected.
- The output voltage waveform is the response of the sensor; a digital quantity is obtained by digitizing that response
Image Formation Model
Images are denoted by two-dimensional functions of the form f(x,y)
The value of f(x,y) at spatial coordinates (x,y) is a scalar quantity. Its physical meaning is determined by the source of the image, and its values are proportional to the energy radiated by a physical source.
Function f(x,y) is characterized by two components:
- illumination - amount of source illumination denoted by i(x,y)
- reflectance - amount of illumination reflected by the objects in the scene denoted by r(x,y)
The two components combine as a product to form f(x,y).
$f(x,y) = i(x,y)r(x,y)$ where $0 \leq i(x,y) < \infty$ and $0 \leq r(x,y) \leq 1$
NOTE: Reflectance is bounded by 0 (total absorption) and 1 (total reflectance)
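The formation model can be checked numerically. This is a small sketch assuming NumPy arrays, with the bounds on $i$ and $r$ enforced as assertions; the function name `form_image` and the sample values are illustrative.

```python
import numpy as np

def form_image(i, r):
    """f(x,y) = i(x,y) * r(x,y), enforcing the model's physical bounds:
    0 <= i < infinity and 0 <= r <= 1."""
    i = np.asarray(i, dtype=float)
    r = np.asarray(r, dtype=float)
    assert (i >= 0).all(), "illumination must be non-negative"
    assert ((r >= 0) & (r <= 1)).all(), "reflectance lies in [0, 1]"
    return i * r

# Example: uniform illumination of 100 units over a scene whose
# reflectance runs from near-total absorption (0.05) to near-total
# reflection (0.95).
f = form_image(np.full((2, 2), 100.0), [[0.05, 0.5], [0.9, 0.95]])
```

Because reflectance is bounded by 1, the formed image `f` can never exceed the illumination `i` at any point.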
Image Sampling and Quantization
To create a digital image, we need to convert the continuous sensed data into a digital format, which requires two processes:
- sampling
- quantization
Before digitizing, an image may be continuous w/ respect to the x and y coordinates, as well as in amplitude.
To digitize an image, the function $f$ must be sampled in both coordinates as well as in amplitude.
NOTE: Digitizing the coordinate values is called sampling. Digitizing the amplitude is called quantization
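Quantization on its own can be sketched as mapping continuous amplitudes in [0, 1] onto a finite set of gray levels; `quantize` and the chosen level counts are assumptions made for the example.

```python
import numpy as np

def quantize(f, levels):
    """Map continuous amplitudes in [0, 1] to `levels` discrete gray levels."""
    return np.round(np.clip(f, 0.0, 1.0) * (levels - 1)).astype(int)

values = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
two_bit = quantize(values, 4)      # gray levels 0..3   (coarse)
eight_bit = quantize(values, 256)  # gray levels 0..255 (fine)
```

More levels mean smaller quantization error; with only 4 levels, distinct input amplitudes can collapse onto the same gray level.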
Summary
Components of the eye:
- Lens and Muscles - Focuses light onto the retina
- Retina - Light-sensitive layer that is at the back of the eye, and it has two types of receptors: cones and rods
NOTE: rods and cones are the retina's photoreceptor cells
- Cones - located in the fovea; responsible for color vision and aid vision in bright environments (photopic vision).
- Rods - aid vision in dark/dim environments (scotopic vision). DO NOT perceive color. very sensitive to light
- Fovea - focuses on the main object–surrounding environment is blurred
How the eye perceives objects (steps)
- object reflects light–light enters the cornea (transparent front layer) and passes through the lens.
- lens bends the light to focus on the object. ciliary muscles adjust the lens' curvature for focus. the lens' convex shape flips the image upside down and reverses it left to right as it is projected onto the retina
- retina reads the inverted image; cones capture the color and detail of the object; rods capture the shape of the object
- photoreceptors convert light into electrical signals. signals are sent to the brain through the optic nerve.
- brain reads and reorients the image, perceiving the object right side up. processing in various brain regions leads to recognition and interpretation of the object's features
Process of Image Sensing
- Sensors are exposed to light to capture the scene’s visual info.
- Sensor elements absorb light, creating electrical charges
- Sensor converts charges into a voltage signal
- Voltage is converted into digital data via an Analog-to-Digital Converter (ADC)
Image Formation Model in DIP
There are two components to the image formation function:
- Illumination (i) - amount of source illumination; ranges from 0 (no light) to inf (very bright light)
- Reflection (r) - amount of illumination reflected by objects in scene; ranges from 0 (complete absorption) to 1 (full reflection)
The two components form the image formation equation: $f(x,y) = i(x,y) * r(x,y)$
Digitizing requires two processes:
- sampling
- quantization
NOTE: before digitizing an image, an image is continuous in x, y, and amplitude values.
Digitizing an image requires the function $f$ to be sampled in both coordinates, as well as in amplitude. TL;DR: basically mapping continuous values to discrete values
Analog Image -> Sampling -> Quantization -> Digital Image
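The whole pipeline in the line above can be sketched as one small function. This is a toy illustration, assuming a finely gridded NumPy array stands in for the "analog" image; the name `digitize` and the default `step`/`levels` values are made up for the example.

```python
import numpy as np

def digitize(analog, step=4, levels=8):
    """Analog Image -> Sampling -> Quantization -> Digital Image.

    `step` controls spatial sampling along x and y; `levels` is the
    number of gray levels used by quantization.
    """
    sampled = analog[::step, ::step]                     # sampling (x, y)
    lo, hi = sampled.min(), sampled.max()
    norm = (sampled - lo) / (hi - lo) if hi > lo else sampled * 0
    digital = np.round(norm * (levels - 1)).astype(int)  # quantization (f)
    return digital

# A 64x64 horizontal gradient as the stand-in "analog" scene.
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
img = digitize(scene)  # a 16x16 digital image with gray levels 0..7
```

Raising `step` trades away spatial resolution, while lowering `levels` trades away intensity resolution; the two knobs correspond exactly to the sampling and quantization stages in the notes.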