Image segmentation and image edge detection

The connection between edge detection and image segmentation:

Edge detection uses the image gradient to find where the gray level changes sharply, i.e., the edge information. Image segmentation aims at the target object and separates it from the rest of the image. Edge detection is one spatial-domain approach to image segmentation, so the two stand in an inclusion relationship.

The image produced by edge detection is a binary image. Morphological operations can be applied to such a binary image to segment the target, so edge detection can serve as a preprocessing step for image segmentation; segmentation, however, does not have to use edge detection.
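For instance, here is a minimal sketch of this idea using SciPy's binary morphology (the function name and the 3×3 structuring element are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def segment_from_edges(edges: np.ndarray) -> np.ndarray:
    """Turn a binary edge map into filled object regions.

    edges: boolean array, True where an edge pixel was detected.
    Returns a boolean mask of the segmented objects.
    """
    # Close small gaps in the detected contours so regions become watertight.
    # NOTE: the 3x3 structuring element is an illustrative choice.
    closed = ndimage.binary_closing(edges, structure=np.ones((3, 3)))
    # Fill the interior of each closed contour to obtain solid regions.
    return ndimage.binary_fill_holes(closed)
```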

Image segmentation:

Concept:

Image segmentation is the process of dividing an image into a number of mutually disjoint small regions, where a small region is a connected set of pixels that share a common attribute in some sense.

From a set-theoretic point of view: let the set R represent the whole image region; segmentation divides R into N non-empty subsets R1, R2, ..., RN with the following properties:
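In a commonly used formulation (following standard textbooks; P denotes the predicate expressing the common attribute):

(1) R1 ∪ R2 ∪ ... ∪ RN = R;

(2) each Ri is a connected region;

(3) Ri ∩ Rj = ∅ for all i ≠ j;

(4) P(Ri) = TRUE for i = 1, ..., N;

(5) P(Ri ∪ Rj) = FALSE for any pair of adjacent regions Ri and Rj.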

Purpose:

Whether the task is image processing, analysis, understanding, or recognition, the underlying work is generally based on image segmentation;

Extracting meaningful features from the image, or the feature information required by the application;

The final result of image segmentation is a decomposition of the image into units with certain characteristics, called the primitives of the image;

These primitives are easier and faster to process than the entire image.

Principle of image segmentation

Image segmentation has received a great deal of attention for many years, and many types of segmentation algorithms have been proposed. Pal divides image segmentation algorithms into six categories: threshold segmentation, pixel classification, depth-image segmentation, color-image segmentation, edge detection, and fuzzy-set-based methods. However, the categories in this scheme overlap. To cover newly emerging methods, other researchers divide image segmentation algorithms into the following six categories: parallel boundary segmentation, serial boundary segmentation, parallel region segmentation, serial region segmentation, segmentation based on specific theoretical tools, and special image segmentation techniques.

Image segmentation features:

The segmented regions are homogeneous with respect to certain properties such as gray level and texture, and the interior of each region is connected, without excessive small holes.

Region boundaries are clear.

Adjacent regions differ significantly with respect to the property used for segmentation.

Image segmentation method:

1. Segmentation based on pixel gray values: the threshold method;

2. Region-based segmentation methods: the boundary method realizes segmentation by directly determining the boundaries between regions;

3. Edge-based segmentation techniques: edge pixels are detected first and then linked into boundaries, which form the segmentation.

Image segmentation covers the following topics:

Edge detection

Edge tracking:

Starting from an edge point in the image, the next edge point is searched for according to some discriminant criterion, and so on, thereby tracking the target boundary.
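A minimal sketch of such a tracker, assuming a boolean edge map and using "any unvisited 8-neighbour edge pixel" as the discriminant criterion (real trackers use stronger criteria, e.g. gradient continuity):

```python
import numpy as np

def track_edge(edge_map: np.ndarray, start: tuple) -> list:
    """Greedy edge tracking: from a starting edge point, repeatedly move to
    an unvisited 8-neighbour that is also an edge pixel.

    edge_map: boolean array, True at edge pixels.
    start:    (row, col) of a known edge point.
    Returns the list of visited edge points, in tracking order.
    """
    h, w = edge_map.shape
    visited = {start}
    path = [start]
    r, c = start
    while True:
        # Discriminant criterion here is simply "an unvisited edge pixel".
        candidates = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)]
        nxt = next(((rr, cc) for rr, cc in candidates
                    if 0 <= rr < h and 0 <= cc < w
                    and edge_map[rr, cc] and (rr, cc) not in visited), None)
        if nxt is None:
            break
        visited.add(nxt)
        path.append(nxt)
        r, c = nxt
    return path
```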

Threshold segmentation:

Original image: f(x, y)

Gray threshold: T

Thresholding yields a binary image g(x, y): g(x, y) = 1 if f(x, y) ≥ T, and g(x, y) = 0 otherwise.
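A minimal NumPy sketch of this rule (the fixed threshold T is supplied by hand here; in practice it is often chosen automatically, e.g. by Otsu's method):

```python
import numpy as np

def threshold_segment(f: np.ndarray, T: int) -> np.ndarray:
    """Global thresholding: pixels at or above T map to 1, the rest to 0."""
    return (f >= T).astype(np.uint8)
```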

Region segmentation:

Threshold segmentation takes little or no account of spatial relationships, which limits the usefulness of multi-threshold selection.

Region-based segmentation methods make up for this deficiency by exploiting the spatial nature of the image: pixels belonging to the same region should have similar properties. The idea is quite intuitive.

The traditional region segmentation algorithms are the region growing method and the region splitting method. Such methods can segment complex scenes, or natural scenes for which no a priori knowledge is available, and still perform well; however, their space and time overhead is relatively large.


Region growing:

Region growing mainly considers the relationship between a pixel and its spatial neighbors.

One or more pixels are first chosen as seeds; the region then grows according to some similarity criterion, gradually producing a spatially uniform region: adjacent pixels or regions with similar properties are merged into the growing region, until no further points or small regions can be merged.

The similarity measure for pixels within a region may be based on information such as average gray value, texture, or color.

The main steps (a minimal sketch follows this list):

Choose suitable seed points

Determine the similarity criterion (growth criterion)

Determine the stopping condition for growth
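A minimal sketch of region growing on a grayscale image, assuming a single seed and using "gray value within tol of the running region mean" as the similarity criterion:

```python
import numpy as np
from collections import deque

def region_grow(img: np.ndarray, seed: tuple, tol: float = 10.0) -> np.ndarray:
    """Grow a region from a seed pixel.

    Similarity criterion (an assumption for this sketch): a neighbour joins
    the region if its gray value differs from the running region mean by
    at most `tol`. Growth stops when no neighbour satisfies the criterion.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(img[rr, cc]) - total / count) <= tol:
                    mask[rr, cc] = True
                    total += float(img[rr, cc])
                    count += 1
                    frontier.append((rr, cc))
    return mask
```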

Region splitting:

Condition: a region is split if some of its characteristics fail the consistency criterion.

Start: begin with the largest region of the image, usually the entire image.

Notes:

Determine the splitting criterion (the consistency criterion).

Determine the splitting method, i.e., how to subdivide a region so that the characteristics of the sub-regions satisfy the consistency criterion as far as possible (see the sketch after this list).
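A minimal sketch of recursive (quadtree) splitting, assuming "standard deviation of gray values below max_std" as the consistency criterion; the thresholds are illustrative:

```python
import numpy as np

def split_region(img: np.ndarray, r0: int, c0: int, r1: int, c1: int,
                 max_std: float = 8.0, min_size: int = 4) -> list:
    """Recursive quadtree splitting.

    Consistency criterion (an assumption for this sketch): a block is
    homogeneous if the standard deviation of its gray values is <= max_std.
    Returns a list of (r0, c0, r1, c1) blocks satisfying the criterion.
    """
    block = img[r0:r1, c0:c1]
    if block.std() <= max_std or min(r1 - r0, c1 - c0) <= min_size:
        return [(r0, c0, r1, c1)]
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2  # split into four quadrants
    return (split_region(img, r0, c0, rm, cm, max_std, min_size)
            + split_region(img, r0, cm, rm, c1, max_std, min_size)
            + split_region(img, rm, c0, r1, cm, max_std, min_size)
            + split_region(img, rm, cm, r1, c1, max_std, min_size))

# Usage: blocks = split_region(img, 0, 0, img.shape[0], img.shape[1])
```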

Edge detection:

In the theoretical framework of computational vision, extracting basic features such as edges, corners, and texture from the two-dimensional image is the first step of the overall system framework. The map consisting of these features is called the primal sketch.

Under certain conditions, the edge points at different "scales" contain all of the information of the original image.

Definition:

• At present the edge has only a descriptive definition: the boundary between two uniform image regions of different gray levels, i.e., the boundary reflects a local change in gray level.

• A local edge is a small area in which the local gray level changes very rapidly in a simple (i.e., monotonic) way. This local change can be detected by an edge detection operator over a window of a certain size.

Description of the edge:

1) Edge normal direction - the direction in which the gray level changes most sharply at a certain point, perpendicular to the edge direction;

2) Edge direction - perpendicular to the edge normal direction, is the tangent direction of the target boundary;

3) Edge Strength - A measure of the intensity of the local variation of the image along the normal direction of the edge.
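In terms of the image gradient, with components Gx = ∂f/∂x and Gy = ∂f/∂y, these quantities can be written as:

edge strength (gradient magnitude): |∇f| = √(Gx² + Gy²);

edge normal direction: θ = arctan(Gy / Gx), the direction of ∇f;

edge direction: θ ± 90°, the tangent direction of the boundary.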

The basic idea of edge detection is to determine whether a pixel lies on the boundary of an object by examining the state of the pixel and its neighborhood. If a pixel is located on the boundary of an object, the gray values of its neighboring pixels change noticeably. If an algorithm can detect this change and quantify it, the boundary of the object can be determined.

The edge detection algorithm has the following four steps:

Filtering: The edge detection algorithm is mainly based on the first and second derivatives of the image intensity, but the calculation of the derivative is very sensitive to noise, so filters must be used to improve the performance of the noise-related edge detector. It should be pointed out that most filters also reduce the edge intensity while reducing the noise. Therefore, there is a trade-off between enhancing the edge and reducing the noise.

Enhancement: the basis for enhancing edges is determining the change in intensity in the neighborhood of each image point. Enhancement algorithms highlight points whose neighborhood (or local) intensity values change significantly. Edge enhancement is generally done by computing the gradient magnitude.

Detection: There are many points in the image with large gradient amplitudes, and these points are not all edges in specific application areas, so some method should be used to determine which points are edge points. The simplest edge detection criterion is the gradient magnitude threshold criterion.

Localization: if an application requires the edge position to be determined, it can be estimated at sub-pixel resolution, and the orientation of the edge can also be estimated.
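A minimal sketch of the first three steps (filtering, enhancement, detection), using Gaussian smoothing, Sobel gradients, and a magnitude threshold; sigma and thresh are illustrative parameters:

```python
import numpy as np
from scipy import ndimage

def detect_edges(img: np.ndarray, sigma: float = 1.0, thresh: float = 20.0):
    """Filtering -> enhancement -> detection, as described above.

    sigma:  Gaussian smoothing strength (filtering step).
    thresh: gradient-magnitude threshold (detection step).
    """
    smoothed = ndimage.gaussian_filter(img.astype(float), sigma)  # filtering
    gx = ndimage.sobel(smoothed, axis=1)  # horizontal derivative
    gy = ndimage.sobel(smoothed, axis=0)  # vertical derivative
    magnitude = np.hypot(gx, gy)          # enhancement: gradient magnitude
    return magnitude > thresh             # detection: threshold criterion
```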

In most edge detection algorithms the first three steps are very common, because in most cases it is enough for the edge detector to indicate that an edge appears near a pixel of the image; the exact position or orientation of the edge is not required. Edge detection error usually refers to misclassification error: a false edge is classified as an edge and retained, or a true edge is classified as a false edge and removed. Edge estimation error describes the error in edge position and orientation with a probabilistic statistical model. Edge detection error and edge estimation error are distinguished because their calculation methods and error models are completely different.

Three common criteria for edge detection:

• Good detection: the probability of missing real edge points and the probability of falsely marking non-edge points should both be as low as possible; that is, edges present in the image should not be missed, and no false edges should be reported;

• Good localization: the marked edge position should be as close as possible to the center of the true edge in the image;

• A minimal number of responses to a single edge, i.e., the detection response should ideally be one pixel wide.

Several commonly used edge detection operators are the Roberts operator, the Sobel operator, the Prewitt operator, the Kirsch compass operator, and the Laplacian of Gaussian (LoG) operator; their kernels are sketched below.
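For reference, the horizontal-gradient kernels of these operators can be written as NumPy arrays (the vertical kernels are the corresponding transposes/rotations; only one of the eight Kirsch compass kernels is shown):

```python
import numpy as np

ROBERTS = np.array([[1, 0],
                    [0, -1]])           # diagonal difference (2x2)
PREWITT = np.array([[-1, 0, 1],
                    [-1, 0, 1],
                    [-1, 0, 1]])
SOBEL = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]])          # center row weighted more heavily
KIRSCH_N = np.array([[5, 5, 5],
                     [-3, 0, -3],
                     [-3, -3, -3]])     # one of eight Kirsch compass kernels
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]])       # LoG applies this after Gaussian smoothing
```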

Image features:

• Image features are attributes of an image that can serve as distinguishing marks. They can be divided into statistical features and visual features.

• The statistical characteristics of an image refer to some artificially defined features that can be obtained by transformation, such as the histogram, moment, spectrum, etc. of the image;

• The visual characteristics of an image are natural features that can be directly perceived by human vision, such as the brightness, texture or contour of a region.

Contour extraction:

The contour extraction algorithm for a binary image is very simple: hollow out the interior points. If a pixel in the original image is black and all 8 of its neighbors are black, the pixel is an interior point and is deleted (set to white, pixel value 255). Performing this operation on every pixel in the image completes the extraction of the image contour.
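A direct, unoptimized sketch of this algorithm (black = 0, white = 255; border pixels are left unchanged):

```python
import numpy as np

def extract_contour(binary: np.ndarray) -> np.ndarray:
    """Hollow out interior points of a binary image.

    binary: uint8 array where object pixels are 0 (black) and the
            background is 255 (white), matching the description above.
    A black pixel whose 8 neighbours are all black is an interior point
    and is set to white; only the outline remains black.
    """
    out = binary.copy()
    h, w = binary.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if binary[r, c] == 0 and np.all(binary[r-1:r+2, c-1:c+2] == 0):
                out[r, c] = 255  # interior point: delete (set to white)
    return out
```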

Template matching:

Template matching compares a template with a source image to determine whether the source image contains a region that is identical or similar to the template. If such a region exists, its position can be determined and the region extracted.
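A minimal sketch of exhaustive template matching using the sum of squared differences (SSD) as the similarity measure; optimized variants of this search exist, e.g. OpenCV's matchTemplate:

```python
import numpy as np

def match_template_ssd(src: np.ndarray, templ: np.ndarray) -> tuple:
    """Exhaustive template matching by sum of squared differences (SSD).

    Slides `templ` over `src` and returns the (row, col) of the window
    with the smallest SSD, i.e. the best match.
    """
    sh, sw = src.shape
    th, tw = templ.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            window = src[r:r+th, c:c+tw].astype(float)
            ssd = np.sum((window - templ) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```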

Shape matching:

Shape is also an important feature for describing image content. Three issues must be considered when matching by shape. First, shapes are often associated with targets, so shape features can be viewed as higher-level image features relative to color; obtaining the shape parameters of a target usually requires segmenting the image first, so shape features are affected by the quality of the segmentation. Second, describing the shape of a target is a very complex problem, and no exact mathematical definition of image shape that is consistent with human perception has been found. Finally, the shape of a target seen from different viewpoints may vary greatly; to match shapes accurately, invariance to translation, scale, and rotation must be addressed.
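One standard way to obtain such invariance is to compare shapes through their Hu moments; below is a sketch using OpenCV, where the masks are assumed to be single-channel uint8 arrays and the log-scale comparison is one common convention:

```python
import cv2
import numpy as np

def shape_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Compare two binary shape masks via their Hu moments, which are
    invariant to translation, scale, and rotation."""
    hu_a = cv2.HuMoments(cv2.moments(mask_a, binaryImage=True)).ravel()
    hu_b = cv2.HuMoments(cv2.moments(mask_b, binaryImage=True)).ravel()
    # Compare on a log scale, as raw moments span many orders of magnitude.
    log_a = -np.sign(hu_a) * np.log10(np.abs(hu_a) + 1e-30)
    log_b = -np.sign(hu_b) * np.log10(np.abs(hu_b) + 1e-30)
    return float(np.sum(np.abs(log_a - log_b)))
```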

The shape of a target can often be represented by its outline, which is composed of a series of boundary points. It is generally believed that at large scales true boundary points can be detected more reliably and false detections suppressed, but localization of the boundary at large scales is not accurate. Conversely, at smaller scales true boundary points are localized accurately, but the proportion of false detections rises. One can therefore detect the true boundary points at a large scale and then localize them precisely at a smaller scale. As a multi-scale, multi-channel analysis tool, the wavelet transform is well suited to multi-scale boundary detection in images.
