Otsu's method

Figure: an example image thresholded using Otsu's algorithm, shown beside the original image.

In computer vision and image processing, Otsu's method, named after Nobuyuki Otsu, is used to perform automatic image thresholding. In its simplest form, the algorithm returns a single intensity threshold that separates pixels into two classes: foreground and background.

Otsu's method is a one-dimensional discrete analogue of Fisher's discriminant analysis, is related to the Jenks optimization method, and is equivalent to a globally optimal k-means performed on the intensity histogram. The extension to multi-level thresholding was described in the original paper, and computationally efficient implementations have since been proposed.

Otsu's method


Let H be the normalised histogram of the pixels in an image, so that it forms a probability distribution over pixel intensities, with L bins. The histogram is split into two classes: C_0 for background pixels and C_1 for foreground pixels. The discriminator that sorts pixels into classes is the threshold t: C_0 contains the intensities from 0 to t-1, and C_1 contains those from t to L-1.

The algorithm then performs an exhaustive search for the optimal threshold t^* such that the intra-class variance (the variance of pixel intensities within C_0 and C_1) is minimised.

Let \omega_0 denote the cumulative probability of C_0, and \omega_1 that of C_1: \begin{align} \omega_0(t) &= \sum_{i=0}^{t-1} P(i), \\ \omega_1(t) &= \sum_{i=t}^{L-1} P(i). \end{align} For classes C_0 and C_1, the conditional probability of selecting the i-th intensity within those classes is P(i | C_0) and P(i | C_1) respectively.

Now, let \mu_0(t) and \mu_1(t) be the mean (pixel intensity) of C_0 and C_1 respectively.

\begin{align} \mu_0(t) &= \sum^{t-1}_{i=0} i P(i | C_0) = \sum^{t-1}_{i=0} \frac{i P(i)}{\omega_0(t)} = \frac{\sum^{t-1}_{i=0} i P(i)}{\omega_0(t)}. \end{align}

Similarly,

\mu_1(t) = \frac{\sum^{L-1}_{i=t} iP(i)}{\omega_1(t)}.

Now, let \sigma^2_0(t) and \sigma^2_1(t) be the (pixel intensity) variance of C_0 and C_1 respectively.

\begin{align} \sigma_0^2(t) &= \sum^{t-1}_{i=0} (i - \mu_0)^2 P(i | C_0) = \sum^{t-1}_{i=0} \frac{(i - \mu_0)^2 P(i)}{\omega_0(t)} = \frac{\sum^{t-1}_{i=0} (i - \mu_0)^2 P(i)}{\omega_0(t)}. \end{align}

Similarly,

\sigma_1^2(t) = \frac{\sum^{L-1}_{i=t} (i - \mu_1)^2 P(i)}{\omega_1(t)}.

Let \sigma_b^2(t) be the inter-class (pixel intensity) variance, defined as the total variance minus the weighted sum of the variances of the aforementioned two classes.

\begin{align} \sigma^2_b(t) &= \sigma^2_T - \left[\omega_0(t) \sigma^2_0(t) + \omega_1(t) \sigma^2_1(t)\right] \\ &= \omega_0(\mu_0 - \mu_T)^2 + \omega_1 (\mu_1 - \mu_T)^2 \\ &= \omega_0\omega_1(\mu_0 - \mu_1)^2. \end{align}

Here \sigma^2_T is the variance of the total histogram and \mu_T its mean. The last equality follows from the identities \omega_0 + \omega_1 = 1 and \omega_0\mu_0 + \omega_1\mu_1 = \mu_T:

\begin{align} \sigma^2_b(t) &= \omega_0(\mu_0 - \mu_T)^2 + \omega_1 (\mu_1 - \mu_T)^2 \\ &= \omega_0\mu_0^2 + \omega_1\mu_1^2 - 2 \mu_T(\omega_0\mu_0 + \omega_1\mu_1) + \mu_T^2(\omega_0 + \omega_1) \\ &= \omega_0\mu_0^2 + \omega_1\mu_1^2 - 2 \mu_T^2 + \mu_T^2 \\ &= \omega_0\mu_0^2 + \omega_1\mu_1^2 - \mu_T^2 \\ &= \omega_0\mu_0^2 + \omega_1\mu_1^2 - (\omega_0\mu_0 + \omega_1\mu_1)^2 \\ &= \omega_0\mu_0^2 - \omega_0^2\mu_0^2 + \omega_1\mu_1^2 - \omega_1^2\mu_1^2 - 2\omega_0\omega_1\mu_0\mu_1 \\ &= \omega_0\mu_0^2 (1-\omega_0) + \omega_1\mu_1^2(1-\omega_1) - 2\omega_0\omega_1\mu_0\mu_1 \\ &= \omega_0\mu_0^2 (1-\omega_0) - \omega_0\omega_1\mu_0\mu_1 + \omega_1\mu_1^2(1-\omega_1) - \omega_0\omega_1\mu_0\mu_1 \\ &= \omega_0\omega_1\mu_0^2 - \omega_0\omega_1\mu_0\mu_1 + \omega_0\omega_1\mu_1^2 - \omega_0\omega_1\mu_0\mu_1 \\ &= \omega_0\omega_1(\mu_0^2 - \mu_0\mu_1) + \omega_0\omega_1(\mu_1^2 - \mu_0\mu_1) \\ &= \omega_0\omega_1(\mu_0^2 - 2\mu_0\mu_1 + \mu_1^2) \\ &= \omega_0\omega_1(\mu_0 - \mu_1)^2. \end{align}

The algorithm therefore maximises \sigma_b^2(t), the inter-class variance. This standpoint is motivated by the conjecture that well-thresholded classes are well separated in pixel intensity, and conversely that the threshold t^* giving the best separation of the classes in pixel intensity is the best threshold.

Formally, the problem is summarised as

t^* = \arg\max_{1 \le t \le L-1} \sigma_b^2(t).

Algorithm

  1. Compute the histogram and the probability of each intensity level.
  2. Set up initial \omega_0(0), \mu_0(0), \omega_1(0) and \mu_1(0).
  3. Step through all possible thresholds from t = 1 to the maximum intensity.
  4. Update \omega_0(t), \mu_0(t), \omega_1(t) and \mu_1(t).
  5. Compute \sigma^2_b(t).
  6. The desired threshold t^* corresponds to the maximum of \sigma^2_b(t).
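The steps above can be sketched in NumPy using cumulative sums over the normalised histogram; this is an illustrative sketch (the function name `otsu_threshold` is not from any library), equivalent to the exhaustive scan described above:

```python
import numpy as np


def otsu_threshold(hist):
    """Return t* maximising the inter-class variance sigma_b^2(t).

    `hist` is an L-bin intensity histogram; pixels with intensity
    below t* form class C_0, the rest form C_1.
    """
    p = hist / hist.sum()                  # normalised histogram P(i)
    omega_0 = np.cumsum(p)                 # omega_0(t) for t = 1..L
    mu = np.cumsum(p * np.arange(len(p)))  # running first moment
    mu_T = mu[-1]                          # global mean
    # sigma_b^2(t) = (mu_T*omega_0 - mu)^2 / (omega_0*(1 - omega_0)),
    # finite only where both classes are non-empty.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_T * omega_0 - mu) ** 2 / (omega_0 * (1 - omega_0))
    sigma_b2[~np.isfinite(sigma_b2)] = 0   # empty classes contribute nothing
    return int(np.argmax(sigma_b2)) + 1    # t*: C_0 covers [0, t*-1]
```

For a perfectly bimodal histogram with spikes at 10 and 200, this returns 11: the smallest threshold that puts the entire lower spike in C_0.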

MATLAB implementation

histogramCounts is a 256-bin histogram of a grayscale image with 256 gray levels (typical for 8-bit images). level is the threshold for the image (a double).

function level = otsu(histogramCounts)
total = sum(histogramCounts); % total number of pixels in the image 
%% OTSU automatic thresholding
top = 256;
sumB = 0;
wB = 0;
maximum = 0.0;
sum1 = dot(0:top-1, histogramCounts);
for ii = 1:top
    wF = total - wB;
    if wB > 0 && wF > 0
        mF = (sum1 - sumB) / wF;
        val = wB * wF * ((sumB / wB) - mF) * ((sumB / wB) - mF);
        if ( val >= maximum )
            level = ii;
            maximum = val;
        end
    end
    wB = wB + histogramCounts(ii);
    sumB = sumB + (ii-1) * histogramCounts(ii);
end
end

MATLAB has built-in functions graythresh() and multithresh() in the Image Processing Toolbox, which implement Otsu's method and multi-level Otsu's method respectively.

Python implementation

This implementation requires the NumPy library.

import numpy as np


def otsu_intraclass_variance(image, threshold):
    """
    Otsu's intra-class variance.
    If all pixels are above or below the threshold, this will throw a warning that can safely be ignored.
    """
    return np.nansum(
        [
            np.mean(cls) * np.var(image, where=cls)
            #   weight   ·  intra-class variance
            for cls in [image >= threshold, image < threshold]
        ]
    )
    # NaNs only arise if the class is empty, in which case the contribution should be zero, which `nansum` accomplishes.


# Random image for demonstration:
image = np.random.randint(2, 253, size=(50, 50))

otsu_threshold = min(
    range(np.min(image) + 1, np.max(image)),
    key=lambda th: otsu_intraclass_variance(image, th),
)

Python libraries dedicated to image processing such as OpenCV and Scikit-image provide built-in implementations of the algorithm.

Limitations and variations

Otsu's method performs well when the histogram has a bimodal distribution with a deep and sharp valley between the two peaks.

Like all other global thresholding methods, Otsu's method performs badly in cases of heavy noise, small object sizes, inhomogeneous lighting, and larger intra-class than inter-class variance. In those cases, local adaptations of the Otsu method have been developed.

Moreover, the mathematical grounding of Otsu's method models the histogram of the image as a mixture of two normal distributions with equal variance and equal size. However, Otsu's thresholding may yield satisfying results even when these assumptions are not met, in the same way statistical tests (to which Otsu's method is heavily connected) can perform correctly even when the working assumptions are not fully satisfied.

Several variations of Otsu's methods have been proposed to account for more severe deviations from these assumptions, such as the Kittler–Illingworth method.

A variation for noisy images

A popular local adaptation is the two-dimensional Otsu's method, which performs better for object segmentation in noisy images. Here, the intensity value of a given pixel is compared with the average intensity of its immediate neighborhood to improve segmentation results.

At each pixel, the average gray level of its neighborhood is calculated. Let the gray level of a pixel be divided into L discrete values, and the average gray level into the same L values. A pair (i, j) is then formed: the pixel gray level i and the neighborhood average j. Each pair belongs to one of the L \times L possible 2-dimensional bins. The total number of occurrences (frequency) f_{ij} of a pair (i, j), divided by the total number of pixels in the image N, defines the joint probability mass function of a 2-dimensional histogram: P_{ij} = \frac{f_{ij}}{N}, \qquad \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} P_{ij} = 1.
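This construction can be sketched in NumPy as follows; a 3×3 neighborhood with edge padding is assumed, and `two_d_histogram` is an illustrative name, not a library function:

```python
import numpy as np


def two_d_histogram(image, L=256):
    """Build the joint pmf P_ij of (pixel gray level i, 3x3 neighborhood
    average j) for a 2-D integer image."""
    img = image.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # 3x3 neighborhood mean from nine shifted views of the padded image
    neigh = sum(
        padded[di:di + h, dj:dj + w] for di in range(3) for dj in range(3)
    ) / 9.0
    j = np.clip(np.rint(neigh).astype(int), 0, L - 1)  # quantised averages
    i = image.astype(int)
    f, _, _ = np.histogram2d(
        i.ravel(), j.ravel(), bins=L, range=[[0, L], [0, L]]
    )
    return f / image.size  # P_ij, with sum_ij P_ij = 1
```

For a constant image of value v, all mass falls in the single bin (v, v), since every neighborhood average equals the pixel value.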

And the 2-dimensional Otsu's method is developed based on the 2-dimensional histogram as follows.

The probabilities of the two classes can be denoted as \begin{align} \omega_0 &= \sum_{i=0}^{s-1} \sum_{j=0}^{t-1} P_{ij}, \\ \omega_1 &= \sum_{i=s}^{L-1} \sum_{j=t}^{L-1} P_{ij}. \end{align}

The intensity mean-value vectors of the two classes and the total mean vector can be expressed as follows: \begin{align} \mu_0 &= [\mu_{0i}, \mu_{0j}]^T = \left[\sum_{i=0}^{s-1} \sum_{j=0}^{t-1} i \frac{P_{ij}}{\omega_0}, \sum_{i=0}^{s-1}\sum_{j=0}^{t-1} j \frac{P_{ij}}{\omega_0} \right]^T, \\ \mu_1 &= [\mu_{1i}, \mu_{1j}]^T = \left[\sum_{i=s}^{L-1}\sum_{j=t}^{L-1} i \frac{P_{ij}}{\omega_1}, \sum_{i=s}^{L-1}\sum_{j=t}^{L-1} j \frac{P_{ij}}{\omega_1} \right]^T, \\ \mu_T &= [\mu_{Ti}, \mu_{Tj}]^T = \left[\sum_{i=0}^{L-1} \sum_{j=0}^{L-1} i P_{ij}, \sum_{i=0}^{L-1}\sum_{j=0}^{L-1} j P_{ij}\right]^T. \end{align}

In most cases the off-diagonal probabilities are negligible, so it is easy to verify that \omega_0 + \omega_1 \cong 1 and \omega_0 \mu_0 + \omega_1 \mu_1 \cong \mu_T.

The inter-class scatter matrix is defined as S_b = \sum_{k=0}^1 \omega_k\left[(\mu_k - \mu_T)(\mu_k - \mu_T)^T\right].

The trace of the scatter matrix can be expressed as \begin{align} \operatorname{tr}(S_b) &= \omega_0\left[(\mu_{0i} - \mu_{Ti})^2 + (\mu_{0j} - \mu_{Tj})^2\right] + \omega_1\left[(\mu_{1i} - \mu_{Ti})^2 + (\mu_{1j} - \mu_{Tj})^2\right] \\ &= \frac{(\mu_{Ti} \omega_0 - \mu_i)^2 + (\mu_{Tj} \omega_0 - \mu_j)^2}{\omega_0(1 - \omega_0)}, \end{align} where \mu_i = \sum_{i=0}^{s-1} \sum_{j=0}^{t-1} i P_{ij} and \mu_j = \sum_{i=0}^{s-1} \sum_{j=0}^{t-1} j P_{ij}.

Similar to the one-dimensional Otsu's method, the optimal threshold pair (s, t) is obtained by maximizing \operatorname{tr}(S_b).

Algorithm

The thresholds s and t are obtained iteratively, similarly to the one-dimensional Otsu's method. The values of s and t are varied until the maximum of \operatorname{tr}(S_b) is obtained, that is

max, s, t = 0;

for ss: 0 to L - 1 do
    for tt: 0 to L - 1 do
        evaluate tr(S_b);
        if tr(S_b) > max
            max = tr(S_b);
            s = ss;
            t = tt;
        end if
    end for
end for

return s, t;

Notice that for evaluating \operatorname{tr}(S_b), we can use a fast recursive dynamic programming algorithm to improve time performance.

If summed area tables are used to build the 3 tables sum over P_{ij}, sum over i P_{ij}, and sum over j P_{ij} then the runtime complexity is \max\big(O(N_\text{pixels}), O(N_\text{bins}^2)\big). Note that if only coarse resolution is needed in terms of threshold, N_\text{bins} can be reduced.
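A vectorized NumPy sketch of this summed-area-table scan is shown below. It uses the approximation \omega_1 \approx 1 - \omega_0 from the trace formula above; the function name is illustrative:

```python
import numpy as np


def max_trace_thresholds(P):
    """Scan all threshold pairs (s, t) using summed-area tables over
    P_ij, i*P_ij and j*P_ij, and return the pair maximising tr(S_b)."""
    L = P.shape[0]
    i = np.arange(L)[:, None]
    j = np.arange(L)[None, :]
    # Summed-area tables: entry (s-1, t-1) holds the sum over i < s, j < t.
    W = P.cumsum(0).cumsum(1)          # omega_0(s, t)
    Mi = (i * P).cumsum(0).cumsum(1)   # mu_i(s, t)
    Mj = (j * P).cumsum(0).cumsum(1)   # mu_j(s, t)
    mu_Ti, mu_Tj = Mi[-1, -1], Mj[-1, -1]
    with np.errstate(divide="ignore", invalid="ignore"):
        tr = ((mu_Ti * W - Mi) ** 2 + (mu_Tj * W - Mj) ** 2) / (W * (1 - W))
    tr[~np.isfinite(tr)] = 0           # empty or full classes contribute 0
    s, t = np.unravel_index(np.argmax(tr), tr.shape)
    return int(s) + 1, int(t) + 1      # class C_0 occupies i < s, j < t
```

The three cumulative tables are built once in O(N_bins^2), after which every candidate (s, t) is a constant-time lookup, matching the runtime bound stated above.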

MATLAB implementation

Function inputs and output:

hists is a 256 \times 256 2D histogram of (grayscale value, neighborhood average grayscale value) pairs.
total is the number of pairs in the given image, determined by the number of bins of the 2D histogram in each direction.
threshold is the threshold obtained.

function threshold = otsu_2D(hists, total)
maximum = 0.0;
threshold = 0;
helperVec = 0:255;
mu_t0 = sum(sum(repmat(helperVec',1,256).*hists));
mu_t1 = sum(sum(repmat(helperVec,256,1).*hists));
p_0 = zeros(256);
mu_i = p_0;
mu_j = p_0;
for ii = 1:256
    for jj = 1:256
        if jj == 1
            if ii == 1
                p_0(1,1) = hists(1,1);
            else
                p_0(ii,1) = p_0(ii-1,1) + hists(ii,1);
                mu_i(ii,1) = mu_i(ii-1,1)+(ii-1)*hists(ii,1);
                mu_j(ii,1) = mu_j(ii-1,1);
            end
        elseif ii == 1
            % first row: only the running sum along jj contributes,
            % and the i-moment gains nothing since (ii-1) == 0
            p_0(1,jj) = p_0(1,jj-1) + hists(1,jj);
            mu_i(1,jj) = mu_i(1,jj-1);
            mu_j(1,jj) = mu_j(1,jj-1) + (jj-1)*hists(1,jj);
        else
            p_0(ii,jj) = p_0(ii,jj-1)+p_0(ii-1,jj)-p_0(ii-1,jj-1)+hists(ii,jj);
            mu_i(ii,jj) = mu_i(ii,jj-1)+mu_i(ii-1,jj)-mu_i(ii-1,jj-1)+(ii-1)*hists(ii,jj);
            mu_j(ii,jj) = mu_j(ii,jj-1)+mu_j(ii-1,jj)-mu_j(ii-1,jj-1)+(jj-1)*hists(ii,jj);
        end

        if (p_0(ii,jj) == 0)
            continue;
        end
        if (p_0(ii,jj) == total)
            break;
        end
        tr = ((mu_i(ii,jj)-p_0(ii,jj)*mu_t0)^2 + (mu_j(ii,jj)-p_0(ii,jj)*mu_t1)^2)/(p_0(ii,jj)*(1-p_0(ii,jj)));

        if ( tr >= maximum )
            threshold = ii;
            maximum = tr;
        end
    end
end
end

A variation for unbalanced images

When the gray levels of the classes of the image can be considered as normal distributions but with unequal size and/or unequal variances, the assumptions of the Otsu algorithm are not met. The Kittler–Illingworth algorithm (also known as "minimum-error thresholding") is a variation of Otsu's method that handles such cases. There are several ways to describe this algorithm mathematically. One of them is to consider that, for each threshold being tested, the parameters of the normal distributions in the resulting binary image are estimated by maximum likelihood estimation given the data.

While this algorithm could seem superior to Otsu's method, it introduces nuisance parameters to be estimated, and this can result in the algorithm being over-parametrized and thus unstable. In many cases where the assumptions from Otsu's method seem at least partially valid, it may be preferable to favor Otsu's method over the Kittler–Illingworth algorithm, following Occam's razor.
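The idea can be sketched as follows, using the commonly cited form of the minimum-error criterion J(t): per-class weights, means and variances are the maximum-likelihood estimates of the two truncated normals at each candidate threshold, and t minimising J is selected. This is a sketch based on the standard textbook formulation, not code from the cited paper:

```python
import numpy as np


def minimum_error_threshold(hist):
    """Minimise the Kittler-Illingworth style criterion
    J(t) = 1 + 2*(w0*ln s0 + w1*ln s1) - 2*(w0*ln w0 + w1*ln w1)
    over all thresholds t of a histogram."""
    p = hist / hist.sum()
    L = len(p)
    i = np.arange(L)
    best_t, best_J = None, np.inf
    for t in range(1, L):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        mu0 = (i[:t] * p[:t]).sum() / w0               # ML class means
        mu1 = (i[t:] * p[t:]).sum() / w1
        var0 = ((i[:t] - mu0) ** 2 * p[:t]).sum() / w0  # ML class variances
        var1 = ((i[t:] - mu1) ** 2 * p[t:]).sum() / w1
        if var0 <= 0 or var1 <= 0:
            continue  # degenerate class: skip this threshold
        J = (1 + 2 * (w0 * np.log(np.sqrt(var0)) + w1 * np.log(np.sqrt(var1)))
               - 2 * (w0 * np.log(w0) + w1 * np.log(w1)))
        if J < best_J:
            best_t, best_J = t, J
    return best_t
```

Unlike Otsu's criterion, J penalises wide classes through their fitted standard deviations, which is what allows unequal class sizes and variances to be handled.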

Triclass thresholding tentatively divides the histogram of an image into three classes, with the TBD class processed further in subsequent iterations.

Iterative triclass thresholding based on the Otsu's method

One limitation of Otsu's method is that it cannot segment weak objects, as the method searches for a single threshold to separate an image into two classes, namely foreground and background, in one shot. Because Otsu's method looks to segment an image with one threshold, it tends to bias toward the class with the larger variance. The iterative triclass thresholding algorithm is a variation of Otsu's method designed to circumvent this limitation. Given an image, at the first iteration the triclass algorithm calculates a threshold \eta_1 using Otsu's method. Based on \eta_1, the algorithm calculates the mean \mu_\text{upper}^{[1]} of pixels above \eta_1 and the mean \mu_\text{lower}^{[1]} of pixels below \eta_1. It then tentatively separates the image into three classes (hence the name triclass): pixels above the upper mean \mu_\text{upper}^{[1]} are designated as the temporary foreground class F, and pixels below the lower mean \mu_\text{lower}^{[1]} as the temporary background class B. Pixels falling between [\mu_\text{lower}^{[1]}, \mu_\text{upper}^{[1]}] are denoted as a to-be-determined (TBD) region. This completes the first iteration. For the second iteration, Otsu's method is applied to the TBD region only, yielding a new threshold \eta_2. The algorithm then calculates the mean \mu_\text{upper}^{[2]} of TBD pixels above \eta_2 and the mean \mu_\text{lower}^{[2]} of TBD pixels below \eta_2. TBD pixels greater than the upper mean \mu_\text{upper}^{[2]} are added to the temporary foreground F, and TBD pixels less than the lower mean \mu_\text{lower}^{[2]} are added to the temporary background B. A new TBD region is obtained, containing all pixels falling between [\mu_\text{lower}^{[2]}, \mu_\text{upper}^{[2]}]. This completes the second iteration.
The algorithm then proceeds to the next iteration to process the new TBD region, until the stopping criterion is met: when the difference between the Otsu thresholds computed in two consecutive iterations is less than a preset small number, the iteration stops. At the last iteration, pixels above \eta_n are assigned to the foreground class and pixels below the threshold to the background class. At the end, all the temporary foreground pixels are combined to constitute the final foreground, and all the temporary background pixels are combined to become the final background. In implementation, the algorithm involves no parameter except for the stopping criterion. By iteratively applying Otsu's method and gradually shrinking the TBD region, the algorithm can obtain a result that preserves weak objects better than the standard Otsu's method does.
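The iteration above can be sketched in NumPy as follows. The helper `otsu_1d` and the function `triclass_segment` are illustrative names, and the stopping tolerance `eps` stands in for the unspecified "small number" in the text:

```python
import numpy as np


def otsu_1d(values):
    """Standard 1-D Otsu threshold for a flat array of non-negative ints."""
    p = np.bincount(values, minlength=256) / values.size
    w = np.cumsum(p)                      # omega_0(t)
    m = np.cumsum(p * np.arange(p.size))  # running first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        s = (m[-1] * w - m) ** 2 / (w * (1 - w))
    s[~np.isfinite(s)] = 0
    return int(np.argmax(s)) + 1          # C_0 covers [0, t-1]


def triclass_segment(image, eps=1.0, max_iter=50):
    """Iterative triclass sketch: shrink the TBD region until two
    consecutive Otsu thresholds differ by less than `eps`."""
    flat = image.ravel().astype(np.int64)
    fg = np.zeros(flat.size, dtype=bool)   # accumulated foreground F
    tbd = np.ones(flat.size, dtype=bool)   # current TBD region
    prev_eta = None
    for _ in range(max_iter):
        vals = flat[tbd]
        if vals.size == 0 or vals.min() == vals.max():
            break                          # nothing left to split
        eta = otsu_1d(vals)
        if prev_eta is not None and abs(eta - prev_eta) < eps:
            fg |= tbd & (flat >= eta)      # final split of the TBD region
            break
        prev_eta = eta
        mu_upper = vals[vals >= eta].mean()
        mu_lower = vals[vals < eta].mean()
        fg |= tbd & (flat > mu_upper)      # confident foreground pixels
        tbd &= (flat >= mu_lower) & (flat <= mu_upper)  # shrink TBD
    return fg.reshape(image.shape)
```

On a clean two-level image the TBD region collapses immediately and the result coincides with a single Otsu threshold; the benefit appears on images with weak objects, where later iterations rethreshold only the ambiguous middle band.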

References


  1. (2009). "2009 Ninth International Conference on Hybrid Intelligent Systems".
  2. Liao, Ping-Sung (2001). "A fast algorithm for multilevel thresholding". J. Inf. Sci. Eng.
  3. Huang, Deng-Yuan (2009). "Optimal multi-level thresholding using a two-stage Otsu optimization approach". Pattern Recognition Letters.
  4. (September 1985). "On threshold selection using clustering criteria". IEEE Transactions on Systems, Man, and Cybernetics.
  5. Lee, Sang Uk; Chung, Seok Yoon; Park, Rae Hong (1990). "A comparative performance study of several global thresholding techniques for segmentation". Computer Vision, Graphics, and Image Processing.
  6. (October 1992). "Maximum likelihood thresholding based on population mixture models". Pattern Recognition.
  7. (August 2011). "t-Tests, F-Tests and Otsu's Methods for Image Thresholding". IEEE Transactions on Image Processing.
  8. (1986). "Minimum error thresholding". Pattern Recognition.

This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.
