Lecture
Boundary Extraction - The boundary of A (the set of foreground pixels) can be obtained by eroding A by a suitable structuring element B and then taking the set difference between A and its erosion: $\beta(A) = A - (A \ominus B)$
Here $(A \ominus B)$ is the erosion of A by B, and A is the object (foreground).
To obtain a thicker boundary, use a larger structuring element.
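A minimal sketch of boundary extraction using scipy.ndimage; the toy binary image and the 3x3 structuring element are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

# A: binary foreground image; B: 3x3 structuring element (illustrative choices)
A = np.zeros((7, 7), dtype=bool)
A[2:5, 2:6] = True
B = np.ones((3, 3), dtype=bool)

# beta(A) = A - (A eroded by B): keep the pixels of A that the erosion removes
eroded = ndimage.binary_erosion(A, structure=B)
boundary = A & ~eroded

print(boundary.astype(int))
```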
Hole Filling
It is a morphological process used to identify and fill background regions that are completely surrounded by foreground regions in a binary image.
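A minimal sketch using scipy.ndimage's binary_fill_holes; the ring-shaped toy image is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage

# Binary ring: the background pixels inside it form a hole
A = np.zeros((7, 7), dtype=bool)
A[1:6, 1:6] = True
A[2:5, 2:5] = False   # carve out an interior background region

filled = ndimage.binary_fill_holes(A)
print(filled.astype(int))  # the interior hole is now foreground
```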
Thinning
Reduces binary objects or shapes in an image to strokes that are a single pixel wide.
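A sketch of thinning, assuming scikit-image is available (its morphology.thin function); the thick bar is an illustrative input:

```python
import numpy as np
from skimage.morphology import thin

# Thick horizontal bar; thinning reduces it to a one-pixel-wide stroke
A = np.zeros((9, 9), dtype=bool)
A[3:6, 1:8] = True

thinned = thin(A)
print(thinned.astype(int))
```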
Grayscale Morphology
Grayscale Erosion - replaces a pixel's value with the minimum value of all the pixels in the neighborhood defined by the structuring element.
Grayscale Dilation - replaces a pixel's value with the maximum value of all the pixels in the neighborhood defined by the structuring element.
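A minimal sketch of grayscale erosion and dilation using scipy.ndimage with a flat 3x3 structuring element; the small test image is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage

img = np.array([[10, 10, 10, 10],
                [10, 50, 80, 10],
                [10, 80, 50, 10],
                [10, 10, 10, 10]], dtype=np.uint8)

# Flat 3x3 structuring element: erosion takes the neighborhood minimum,
# dilation takes the neighborhood maximum
eroded  = ndimage.grey_erosion(img, size=(3, 3))
dilated = ndimage.grey_dilation(img, size=(3, 3))
print(eroded)
print(dilated)
```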
Image Segmentation
It is the process of partitioning a digital image into multiple segments (sets of pixels), often in order to locate objects and their boundaries and to obtain a representation that is more meaningful and easier to analyze.
Fundamentals of Image Segmentation
- Discontinuity - Focuses on identifying abrupt changes in intensity, such as isolated points, lines, and edges.
- Similarity - Partitions an image into regions that are similar according to a set of predefined criteria (e.g., thresholding, region growing).
Finite Differences
- First-order Derivatives - produce thick edges in images.
- Second-order Derivatives - have a stronger response to fine detail, such as thin lines, isolated points, and noise.
For second-order derivatives, the sign of the response indicates the direction of the intensity transition. For instance, if the response changes from + to - (a zero crossing), the image transitions from dark to light at that point (and the reverse for - to +).
NOTE: the coefficients of a Laplacian kernel must always sum to 0, so the response is zero over regions of constant intensity.
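A small numerical sketch of first- and second-order finite differences on a made-up 1D intensity profile, plus one common Laplacian kernel whose coefficients sum to 0:

```python
import numpy as np

# 1D intensity profile with a gradual ramp and a sharp step (illustrative)
f = np.array([5, 5, 5, 6, 7, 8, 8, 8, 2, 2, 2], dtype=float)

first  = np.diff(f)        # f(x+1) - f(x): broad response along the ramp
second = np.diff(f, n=2)   # f(x+1) - 2f(x) + f(x-1): thin, sign-changing response at the step
print(first)
print(second)

# A common 3x3 Laplacian kernel: its coefficients sum to 0
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])
print(laplacian.sum())  # 0
```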
Point Detection
Requires the use of the Laplacian operator. Steps (a sketch follows these steps):
- Compute the Laplacian response at every pixel.
- Take the absolute value of the Laplacian response.
- If the result is greater than a threshold, set the pixel to 1; otherwise, set it to 0.
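A minimal sketch of these steps using scipy.ndimage; the 3x3 Laplacian kernel, the toy image with a single bright point, and the threshold choice are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

img = np.full((9, 9), 20.0)
img[4, 4] = 200.0          # isolated bright point (illustrative)

laplacian = np.array([[1,  1, 1],
                      [1, -8, 1],
                      [1,  1, 1]], dtype=float)

# 1) Laplacian response at every pixel, 2) absolute value, 3) threshold
response = ndimage.convolve(img, laplacian, mode='reflect')
T = 0.9 * np.abs(response).max()          # threshold choice is an assumption
points = (np.abs(response) >= T).astype(np.uint8)
print(points)                              # only the isolated point survives
```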
Line Detection
Uses the second-order derivative. Second-order derivatives result in a stronger filter response and thinner lines in comparison to first-order derivatives. We can define filter kernels to detect lines in specified directions.
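A sketch of line detection with a directional second-derivative kernel, using scipy.ndimage; the horizontal-line kernel, toy image, and threshold are illustrative (vertical and diagonal kernels follow the same pattern):

```python
import numpy as np
from scipy import ndimage

img = np.full((7, 7), 10.0)
img[3, :] = 100.0          # one-pixel-wide horizontal line (illustrative)

# Horizontal-line kernel; its transpose detects vertical lines
horizontal = np.array([[-1, -1, -1],
                       [ 2,  2,  2],
                       [-1, -1, -1]], dtype=float)
vertical = horizontal.T

response = ndimage.convolve(img, horizontal, mode='reflect')
lines = (response > 0.5 * response.max()).astype(np.uint8)
print(lines)               # only the row containing the line is flagged
```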
Edge Detection
Is a common approach for image segmentation based on local intensity changes and discontinuities.
There are three types of edges:
- Step edge - transition between two intensity levels occurring ideally over the distance of one pixel
- Ramp edge - gradual transition between two intensity levels
- Roof edge - intensities on both sides of the roof are similar, with a different intensity in the middle.
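A minimal sketch of gradient-based edge detection on an ideal step edge, using scipy.ndimage's Sobel derivatives; the toy image and threshold are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

# Ideal step edge: left half dark, right half bright
img = np.zeros((8, 8), dtype=float)
img[:, 4:] = 100.0

# Gradient magnitude from first-order (Sobel) derivatives
gx = ndimage.sobel(img, axis=1)   # horizontal changes
gy = ndimage.sobel(img, axis=0)   # vertical changes
magnitude = np.hypot(gx, gy)

edges = (magnitude > 0.5 * magnitude.max()).astype(np.uint8)
print(edges)   # responds along the step; first-order derivatives give a two-pixel-thick edge here
```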
If the image is noisy, you must first reduce the noise (by smoothing the image); edge detection can then be applied.
NOTE: noise must be eliminated before applying edge detection, because the first and second derivatives are sensitive to noise.
You can use spatial-domain or frequency-domain filtering to reduce the amount of noise.
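A sketch of smoothing before edge detection, using a spatial-domain Gaussian filter followed by Sobel gradients; the noise level, sigma, and threshold are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = np.zeros((64, 64), dtype=float)
img[:, 32:] = 100.0
noisy = img + rng.normal(0, 20, img.shape)   # noisy step edge (illustrative)

# 1) Suppress noise with spatial-domain smoothing (Gaussian filter),
#    then 2) apply the derivative-based edge detector
smoothed = ndimage.gaussian_filter(noisy, sigma=2)
gx = ndimage.sobel(smoothed, axis=1)
gy = ndimage.sobel(smoothed, axis=0)
magnitude = np.hypot(gx, gy)
edges = magnitude > 0.5 * magnitude.max()

cols = np.where(edges.any(axis=0))[0]
print(cols)   # flagged columns cluster around the step near column 32
```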