Tạp chí Nghiên cứu khoa học - Đại học Sao Đỏ, ISSN 1859-4190 Số 3(62).2018
THE IMAGE STITCHING METHOD FOR OVERLAPPING
PAPER COUNTING
PHƯƠNG PHÁP ĐẾM TẬP GIẤY XẾP CHỒNG NHIỀU LỚP
BẰNG CÁCH GHÉP HÌNH ẢNH
Pham Thi Dieu Thuy1,2, Ha Minh Tuan1,2, Nguyen Trong Cac1, Changyan Xiao2
Email: dieuthuy303@gmail.com
1Sao Do University, Viet Nam
2Hunan University, China
Date received: 28/5/2018
Date received after correction: 20/9/2018
Release date: 28/9/2018
Abstract
Stacked paper counting has a huge industrial demand in the field of printing and packaging. To meet the real-time and high-precision requirements of a multi-camera system, a new method of image stitching between overlapping paper images is presented. The solution deals with the rotation angle deviation, the scale differences, and the weak image features in the actual images of the counting instrument. Firstly, the ridge images of the two acquired images are extracted. Next, the rotation angle is corrected by an improved Hough transform method. Afterward, the angle-corrected image is analyzed in the frequency domain to eliminate the effects of scale differences. Finally, Zero-mean Normalized Cross-Correlation (ZNCC) is applied to the corrected images to obtain the correlation of local signals between images and locate the overlapping region. In industrial production lines, the images of the overlapping papers are very similar, so general algorithms cannot be applied, while adding features with external labels introduces counting errors. In our algorithm, traditional image fusion is replaced by locating the overlapping papers, which not only improves the stitching precision without the help of external tags, but also guarantees the real-time performance of the algorithm by performing stitching and line detection at the same time. Experiments verify that the algorithm fully meets the precision, universality, and timeliness requirements for multiple types of paper.
Keywords: Image stitching; overlapping paper location; Hough transform; frequency domain analysis;
computer vision.
Tóm tắt
Đếm giấy xếp chồng là một yêu cầu lớn trong công đoạn in ấn và đóng gói trong công nghiệp. Nhằm
đáp ứng yêu cầu về thời gian thực và độ chính xác cao của hệ thống nhiều camera, bài báo này đề xuất
phương pháp ghép ảnh giấy xếp chồng mới. Các giải pháp được đưa ra giải quyết ba thách thức chính
là độ chênh góc nghiêng, sai lệch tỷ lệ và những đường nét hiển thị không rõ nét trong ảnh thu được của
thiết bị đếm. Trước tiên, xác định sườn của hai ảnh của tập giấy được thu bởi hai camera. Sau đó, sử
dụng phương pháp điều chỉnh góc nghiêng dựa theo biến đổi Hough cải tiến. Tiếp theo, phân tích góc
nghiêng đã được hiệu chỉnh trong môi trường tần số và khử ảnh hưởng của sai lệch tỷ lệ. Cuối cùng,
áp dụng phép đối chiếu chéo trung bình 0 (ZNCC) trên ảnh đã hiệu chỉnh để tìm ra những điểm tương
đồng giữa hai ảnh dùng cho việc xác định vùng ảnh chồng chéo. Ảnh giấy xếp chồng trong công nghiệp
rất tương đồng, các phương pháp cũ không giải quyết được sai số đếm vì sự hiện diện của đường nét
không mong muốn. Do vậy, phương pháp xác định vùng chồng chéo trong ảnh giấy xếp chồng tốt hơn
phương pháp ghép ảnh truyền thống vì không những cải thiện độ chính xác ghép nối mà không cần
thêm nhãn phụ mà còn đảm bảo hiệu năng thời gian thực bằng cách đồng thời ghép ảnh và tìm đường
thẳng trong ảnh. Thực nghiệm đã cho thấy phương pháp được đề xuất đáp ứng đầy đủ yêu cầu độ
chính xác, tính vạn năng và thời gian thực thi trên nhiều loại giấy khác nhau.
Từ khóa: Ghép ảnh tập giấy; xác định vùng ảnh chồng chéo; biến đổi Hough cải tiến; phân tích ảnh
trong miền tần số; thị giác máy tính.
Người phản biện: 1. PGS.TSKH. Trần Hoài Linh
2. TS. Đặng Thúy Hằng
1. INTRODUCTION
In the packaging and printing industry, the
counting of paper products is a very important and
indispensable task. If the count is not accurate, it
will cause direct economic losses to the company.
Similar applications also include the measurement
of the number of thin-film products such as solar
wafers [1], PCB boards [2], and cardboards.
However, traditional physical measurement methods such as thickness measurement and weighing have defects such as large counting errors and low efficiency, while mechanical paper counting based on pneumatic suckers may cause paper damage [3]. With the rapid development of computer technology, machine vision methods have been widely used in production practice. For the task of counting super-laminated and ultra-thin papers, since the field of view and the resolution of a single camera are mutually constrained, counting can only be accomplished from a plurality of images. Therefore, the
visual algorithm for ultra-high stacking paper count
is mainly divided into two parts: stitching and line
detection [4]. At the same time, the performance
of industrial cameras, lenses, and light sources
has increased dramatically in recent years, and
prices have continued to decrease. In order to
ensure the stability, real-time, and accuracy of
the instruments, a counting instrument based on
camera arrays has been designed.
The traditional image stitching technique
combines the image sequences of overlapping
regions with each other to form a complete image.
Common image stitching algorithms are divided
into three major categories: (1) region-based
stitching algorithms; (2) feature-based stitching
algorithms; and (3) integrated stitching algorithms.
Region-based image stitching algorithms mainly
match by calculating the correlation between local
image blocks, such as Zero mean normalized
cross-correlation (ZNCC) [5]. This method can
eliminate the effects of background drift and
has high real-time performance, but it is easily
affected by problems such as image rotation and
scale changes. The feature-based image stitching method has high robustness and stability, but its large amount of calculation results in very poor real-time performance, and it requires the images to have sufficient feature points. For example, the
SIFT operator [6], the SURF operator [7], and the
latest ORB operator [8] all have strong robustness
to translation, rotation, and other transformations,
but the amount of calculation is large. Even though Lan Hong et al. [9] used joint information projection entropy to optimize the SIFT operator, and Zhu Lin et al. [10] used the Relief-F algorithm to reduce the SURF descriptor dimension, the algorithm complexity is still too high. In addition, the feature-based
stitching algorithm requires that the image has
a sufficient number of feature points with large
differences. However, the actual paper image
feature points are not obvious, the number is
small and they are all very similar. Therefore, this
type of algorithm is not suitable for the counting
instrument in this paper. The third kind of stitching
algorithm is an integrated algorithm designed for
the characteristics of the actual application image.
The general stitching algorithm has high accuracy
and good real-time performance, but it is too
targeted and is not suitable for other scenarios.
For example, Estrada et al. [11] designed a stitching algorithm for eye videos collected by a head-mounted detection camera, and the assembled panoramas can assist doctors in diagnosis. Wang et al. [12] completed the mosaic task in the Hough space, which has a good mosaic effect on workpiece images with straight edges.
Aiming at the stitching problems among actual stacked paper images, such as angle deviation, scale difference, and illumination difference, this paper proposes a high-precision, real-time algorithm that locates the overlapping line between paper images from the perspective of the Hough domain, the frequency domain, and correlation.
2. INSTRUMENT AND STITCHING PROBLEMS
2.1. Instrument Introduction
For the task of counting thousands of ultra-thin sheets (thinnest 0.08 mm), each sheet must occupy at least 8 pixels in the image to ensure counting accuracy, so a 5-megapixel (2592×1944) industrial camera can count at most 324 sheets and its field of view is approximately 26 mm × 19 mm. Obviously, a single camera cannot complete the task independently because of the mutual constraint between its field of view and resolution. Moreover, as shown in Figure 1(a), the instrument uses the smallest MINI industrial cameras, with a cross-sectional dimension of 29 mm × 29 mm parallel to the field of view. With the field of view and resolution that guarantee the measurement accuracy, even if two cameras are placed side by side as closely as possible, their captured images still have no overlapping area.
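The per-camera limit quoted above follows directly from the sensor geometry. The short C++ sketch below reproduces the arithmetic; the resolution, the 8-pixel-per-sheet requirement, and the 26 mm field of view come from the text, while the derived pixel pitch is our own back-of-the-envelope figure.

#include <cstdio>

int main() {
    // Figures quoted in the text: 2592x1944 sensor, >= 8 pixels per sheet,
    // roughly 26 mm field of view along the stacking direction.
    const double width_px         = 2592.0;
    const double min_px_per_sheet = 8.0;
    const double fov_mm           = 26.0;

    const int    max_sheets   = static_cast<int>(width_px / min_px_per_sheet); // 2592 / 8 = 324
    const double mm_per_px    = fov_mm / width_px;                             // ~0.010 mm per pixel
    const double min_sheet_mm = min_px_per_sheet * mm_per_px;                  // ~0.08 mm

    std::printf("max sheets per camera: %d\n", max_sheets);
    std::printf("pixel pitch: %.4f mm, thinnest countable sheet: %.3f mm\n",
                mm_per_px, min_sheet_mm);
    return 0;
}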
Therefore, this article first designs a staggered camera array and applies it to the paper counting instrument. This arrangement ensures that there is enough overlap between the fields of view of adjacent cameras. The instrument structure is shown in Fig. 1(b):
the industrial cameras, the lenses, and the camera stand together constitute the camera array 1; the laminated paper 5 is stacked on the base in front of the camera array 1, while the light source 4 provides illumination. When the camera array 1 is installed, sufficient overlapping areas are reserved between the cameras, and the industrial computer 2 then completes the image collection by sending acquisition signals to the camera array 1.
Figure 1. Schematic diagram: (a) Schematic view
of the overlapping view area: (1) paper;
(2) mini industrial camera; (b) Instrument internal
structure diagram: (1) camera array; (2) industrial
computer; (3) power supply; (4) light source;
(5) paper stack.
Compared with general image stitching tasks, the image acquisition environment of the instrument is simpler and more stable, but the accuracy requirements are more demanding. As far as the acquisition environment is concerned, a general stitching task has to process images collected by visual devices of different models or even with aging hardware, so the interference problems in the images are more serious. The laminated paper counting machine in this paper, however, has a good image acquisition environment, and image distortion and other strong interference problems need not be considered (see the high-quality image in Figure 2(a)). In terms of accuracy requirements, a general stitching task can tolerate errors of a dozen pixels, after which image fusion [11] can ensure that the stitched image is visually acceptable. However, the ultimate purpose of this instrument is paper counting, and the paper peaks are separated by only about seven pixels. Therefore, the stitching error must be controlled at about 3 pixels, and image fusion is not suitable.
The images of the paper collected by the instrument are shown in Fig. 2(a). The similarity between the images in each area is very high, and there is no significant difference between the overlapping area and the non-overlapping area. To better illustrate the problem, the feature-based SURF operator [7] is used to stitch the images in (a). The result is shown in Figure 2(b): the matching feature points (indicated by circles) in the two images are connected by straight lines (only the groups of feature points with the highest matching degree are displayed). Feature point matching with the SURF operator must be performed several times, and the computational cost is too large to meet the requirements of real-time detection in industry.
To deal with the problem of weak image features, the auxiliary point method (such as the label at A in Figure 2(c)) is often used in industry to add image features. This method is very effective for image stitching of thicker laminated paper (such as hard cigarette box board with a thickness of 0.23 mm). However, this counting instrument is intended to measure paper with a thickness of 0.08 mm or more, and the labeling method does not meet this requirement, mainly because of the thickness of the label itself. Even if the label is attached completely flush with the paper, after imaging (see the label edge B and the label shadow C in Figure 2(d)) there is still a relatively large gap between the label and the paper end surface. Since the position of each camera and the incident angle of the light are not identical, a label-based stitching algorithm will therefore produce errors of one or more sheets. At the same time, external labels are susceptible to disturbances caused by long-term operation, such as stains, positional deviations, and even falling off.
Figure 2. Problem illustration: (a) Two images to
be stitched; (b) SURF operator matching results;
(c) Image with label; (d) Partially enlarged image.
Figure 3. The image stitching based on improved Hough transform algorithm
2.2. Stacked paper image stitching algorithm
Considering the particularity of industrial paper images and the requirements for stitching accuracy, we propose a high-precision, real-time stitching method that locates the overlapping paper instead of performing image fusion. This method uses the improved Hough transform to solve the angle deviation problem, frequency-domain analysis to solve the scale difference between the images, and the correlation between the corrected images to exploit the hidden image features, and finally determines the precise position of the overlapping paper between the images. The complete algorithm flow chart is shown in Figure 3; it is mainly divided into the following three steps:
I. Rotation angle correction: Firstly, line detection [4] is applied to the original 2D images to obtain the corresponding ridge images. Then the improved Hough transform is used to obtain the reference angle of each ridge image, and the rotational difference angle is derived from them. Finally, an affine transformation through this angle is performed to obtain the angle-corrected image.
II. Scaling correction: A 1D profile signal is extracted from the angle-corrected image and from the original 2D image respectively, and the fundamental frequency of each signal is obtained by a Fast Fourier Transform (FFT). Then the scale ratio between the two images is calculated from the fundamental frequencies, and the angle-corrected image is scaled by an affine transformation to obtain the finally corrected image (a frequency-domain sketch in code is given after step III).
III. Positioning overlapping papers: A 1D profile signal is selected from the original 2D image, the ZNCC operator is then used to calculate its correlation over the corresponding approximate overlapping region of the corrected image, and finally the correlation results are fused to determine the position of the overlapping straight line in the baseline image.
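As a rough illustration of step II, the following C++ sketch estimates the fundamental spatial frequency of a 1D profile signal with a discrete Fourier transform and forms the scale ratio between the two images. It is only a sketch: OpenCV is assumed (the paper does not name it), and the function names are ours.

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Estimate the dominant (fundamental) frequency of a 1D profile signal via the DFT.
static double fundamentalFrequency(const std::vector<float>& profile) {
    cv::Mat sig(profile);                       // N x 1, CV_32F view of the profile
    cv::Mat spectrum;
    cv::dft(sig, spectrum, cv::DFT_COMPLEX_OUTPUT);
    double bestMag = 0.0;
    int bestBin = 1;
    // Skip bin 0 (DC component); search the first half of the spectrum.
    for (int k = 1; k < static_cast<int>(profile.size()) / 2; ++k) {
        cv::Vec2f c = spectrum.at<cv::Vec2f>(k);
        double mag = std::hypot(c[0], c[1]);
        if (mag > bestMag) { bestMag = mag; bestBin = k; }
    }
    return static_cast<double>(bestBin) / profile.size();   // cycles per pixel
}

// Scale ratio between the two images, taken as the ratio of the fundamental
// frequencies of their profile signals (one paper period per cycle). Which image
// is rescaled, and in which direction, follows the correction step described above.
double scaleRatio(const std::vector<float>& profileI, const std::vector<float>& profileII) {
    return fundamentalFrequency(profileI) / fundamentalFrequency(profileII);
}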
Since this article mainly addresses stitching for counting laminated paper, the stitching of a pair of images is considered complete once the line overlapping with image II has been detected in image I. When counting, once the overlapping line of image I is reached, counting automatically jumps to image II and continues. Compared with traditional stacked paper counting algorithms, our method does not need to mark the images to be stitched in order to increase their feature points; it realizes marker-free stitching, which improves the real-time performance of the algorithm.
Rotation angle correction is divided into three steps: the first is to extract the straight lines that carry the paper orientation information; the second is to use the improved Hough transform algorithm to obtain the angle difference between the images to be stitched; and the third is to perform an affine transformation on the image to be stitched.
The two images collected by the paper counting instrument are converted to grayscale. The results are shown in Figure 4.
Figure 4. The original image to be stitched:
(a) Original image I; (b) Original image II.
As can be seen from the two figures, there is an angle difference between original image I and original image II, so a rotation angle correction must be performed first.
Figure 5. Extraction of the straight lines used to determine the paper orientation information: (a) original image I; (b) the line-detected ridge image corresponding to the original image; (c) the Hough space diagram of the ridge image; (d) the straight lines extracted directly by thresholding the Hough transform; (e) the straight lines after optimization and purification.
As shown in Fig. 4(a), the sheets in the edge image of the laminated paper are stacked closely and arranged neatly, and the paper end surface has obvious and abundant directional characteristics, so it can serve as a reliable source of reference direction information. In order to accurately extract the reference direction of the image, line detection [4] is performed to obtain the ridge line diagram shown in (b).
In actual measurement, due to the quality of the paper and the limitations of the line detection algorithm, the resulting ridgeline diagram often suffers from disturbances such as breakage and misdetection, so it needs to be further processed. In this paper, an improved Hough transform is used to obtain accurate direction information, and the interference in the ridge diagram can be well eliminated. The traditional Hough transform formula is as follows:
ρ = x·cos(θ) + y·sin(θ)        (1)
In the formula, the angle θ is the abscissa of the Hough domain and the distance ρ is the ordinate. The basic principle is to use the point-line correspondence between the image domain and the Hough domain: the point in the Hough domain where the most curves intersect determines a straight line in the image domain.
According to the actual situation of the laminated paper counting instrument, the Hough transform is first improved, namely by restricting the angle search range and intercepting only the necessary transform area to speed up the algorithm. This is possible because the camera positions in the paper counting instrument are fixed, so that the angle difference and the overlapping area between the images are constrained within a certain range. Therefore, the angle is constrained to a narrow range and the partial images are appropriately cropped before the Hough transform is applied. At the same time, according to the accuracy requirement, the step lengths of the two Hough parameters are set to 0.5 and 0.1. The Hough domain obtained for Figure 5(b) is shown in Figure 5(c), where each line in the ridge image corresponds to a bright spot in the Hough domain.
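The angle and step restrictions described above can be expressed with the standard Hough transform offered by OpenCV, which accepts an angular search interval and step lengths. The following C++ sketch is only an illustration: OpenCV itself, the ±5° window, the vote threshold, and the assignment of the 0.5 and 0.1 step values to the angle and distance parameters are our assumptions rather than details taken from the paper.

#include <opencv2/imgproc.hpp>
#include <vector>

// Hough line search restricted to a narrow angular window around the expected
// paper direction, applied to the binary ridge image produced by line detection.
// expectedTheta - window must stay within [0, CV_PI] for cv::HoughLines.
std::vector<cv::Vec2f> restrictedHough(const cv::Mat& ridgeBinary,
                                       double expectedTheta,                    // radians, e.g. CV_PI / 2
                                       double window = 5.0 * CV_PI / 180.0) {
    std::vector<cv::Vec2f> lines;                 // each element is (rho, theta)
    const double rhoStep   = 0.1;                 // distance step (0.1, as in the paper)
    const double thetaStep = 0.5 * CV_PI / 180.0; // angle step (0.5 degrees, as in the paper)
    const int    votes     = 100;                 // accumulator threshold (placeholder)
    cv::HoughLines(ridgeBinary, lines, rhoStep, thetaStep, votes,
                   0, 0,                          // srn, stn: standard (non-multiscale) transform
                   expectedTheta - window,        // min_theta
                   expectedTheta + window);       // max_theta
    return lines;
}

// Reference direction of the image: average theta of the retained lines
// (the purification steps described in the text are omitted in this sketch).
double referenceAngle(const std::vector<cv::Vec2f>& lines) {
    double sum = 0.0;
    for (const auto& l : lines) sum += l[1];
    return lines.empty() ? 0.0 : sum / lines.size();
}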
Because the step accuracy is high, the differences between bright spot areas in the Hough domain (Figure 5(c)) are small. If the lines are determined directly by a threshold value, the result is as shown in (d), and the real straight lines cannot be located reliably: after the Hough transform there are many straight lines with only small differences in angle. Inspired by the optimization of Hough detection results by Wang et al. [12], this paper uses special judgment conditions to optimize and purify the Hough transform results and extract the straight lines used to determine the direction information. The judgment conditions are as follows:
(1) Among nearest-neighbor lines, keep only the line with the largest accumulator value in the Hough transform and delete the others.
(2) To prevent interference, remove the lines with the three largest and the three smallest angles among the remaining straight lines.
After optimizing and purifying, several sets of
straight lines are obtained as shown in (e), and
then the average angle of the remaining straight
lines is taken as the reference direction of
the image.
Using the above improved Hough transform algorithm, the reference angles a1 and a2 of the two images to be stitched are obtained respectively, and the rotational difference angle is calculated from them. Finally, the rotation angle is corrected by the following affine transformation:
x' = x·cos(a) − y·sin(a)
y' = x·sin(a) + y·cos(a)        (2)
where
a = a1 − a2, and a1 and a2 are the rotation deviations between the ridges and the x-axis in ridge images I and II respectively. This affine transformation eliminates the angle difference between the two images.
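A minimal C++ sketch of the resulting angle correction (optionally combined with the scale correction of step II) using an affine warp; OpenCV and the function name correctImage are our assumptions, not part of the paper.

#include <opencv2/imgproc.hpp>

// Rotate the image to be corrected by the difference angle a = a1 - a2 (in degrees)
// about its centre, and optionally apply the scale ratio s from the frequency-domain
// analysis in the same affine transformation.
cv::Mat correctImage(const cv::Mat& image, double diffAngleDeg, double scale = 1.0) {
    cv::Point2f centre(image.cols / 2.0f, image.rows / 2.0f);
    // 2x3 affine matrix combining rotation by diffAngleDeg and isotropic scaling by scale.
    cv::Mat M = cv::getRotationMatrix2D(centre, diffAngleDeg, scale);
    cv::Mat corrected;
    cv::warpAffine(image, corrected, M, image.size(),
                   cv::INTER_LINEAR, cv::BORDER_CONSTANT);
    return corrected;
}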
2.2.2. Positioning overlapping paper
As shown in Fig. 5(a), the 2D image contains very few feature points, but the 1D profile signal (Fig. 5(b)) still shows fairly obvious features. Therefore, the correlation of 1D profile signals between images can be used to exploit these implicit image features.
Figure 6: Positional alignment of coincident lines:
(a) Original ridges to be stitched I; (b) Corrected
ridges I; (c) Original ridges to be stitched II;
(d) Correlation metrics; (e) Image stitching of
alignment lines I; (f) Image stitching of
alignment lines II.
Figures 6(a) and (c) are the line-detected ridge images to be stitched; Figure 6(b) is obtained from (a) after rotation and scale correction. A 1D signal is intercepted from the first half of Figure 6(c) (with the initial coordinate placed at a valley between two sheets), and the ZNCC operator [5] is then used to calculate its correlation over the latter half of Figure 6(b), since these two halves are the preliminary approximate overlapping areas. For ease of understanding, the correlation result is processed and displayed in grayscale (as shown in (d)): values of 0.5 or less are set to 0, 0.5–0.8 to 80, 0.8–0.9 to 150, and 0.9 or more to 255.
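The ZNCC measure [5] used above has a compact closed form for 1D profile signals; the following C++ sketch is a direct implementation of the standard zero-mean normalized cross-correlation formula (the function name zncc1D is ours).

#include <cmath>
#include <vector>

// Zero-mean normalized cross-correlation between a template profile t and a window
// of the search profile s starting at offset. Returns a value in [-1, 1].
// The caller ensures offset + t.size() <= s.size().
double zncc1D(const std::vector<double>& t, const std::vector<double>& s, std::size_t offset) {
    const std::size_t n = t.size();
    double meanT = 0.0, meanS = 0.0;
    for (std::size_t i = 0; i < n; ++i) { meanT += t[i]; meanS += s[offset + i]; }
    meanT /= n;
    meanS /= n;

    double num = 0.0, denT = 0.0, denS = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double dt = t[i] - meanT;
        const double ds = s[offset + i] - meanS;
        num  += dt * ds;
        denT += dt * dt;
        denS += ds * ds;
    }
    const double den = std::sqrt(denT * denS);
    return den > 0.0 ? num / den : 0.0;   // flat (constant) signals give zero correlation
}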
In Fig. 6(d), the ellipse at position A corresponds to the ellipse at A in Fig. 6(b). It can be seen that only the correlation near the true coordinate exceeds 0.9, while correlations of 0.8–0.9 all appear on straight lines whose angle equals the reference angle, lying to the left of the same paper position. This is due to the high similarity of the 1D profile signals taken perpendicular to the paper end face at different positions along the end face. Obviously, because there are too many high-correlation results with only small differences between them, it is difficult to stably find the precise matching coordinates. Therefore, the following steps are used:
(1) Traverse the correlation metrics to find all coordinates with a correlation greater than 0.8.
(2) Place these coordinates in a new binary image, and use the Hough transform to find the line G with the highest number of votes (see the sketch after this list).
(3) Convert the coordinates of line G back to Figure 6(b), then apply the inverse of the affine transformations from the rotation and scale corrections to map line G into the baseline map (e). Starting from the converted position of line G, the nearest straight line to the right is searched for, and this line is taken as the overlapping straight line B.
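A compact C++ sketch of steps (1) and (2): the correlation map is thresholded into a binary image and the dominant line G is taken as the strongest Hough vote. OpenCV is assumed here, the 0.8 threshold is the one given above, and the vote threshold is a placeholder.

#include <opencv2/imgproc.hpp>
#include <vector>

// Fuse the correlation map into a single dominant line G = (rho, theta).
// corrMap holds ZNCC values in [-1, 1] as a CV_32F image.
bool findOverlapLine(const cv::Mat& corrMap, cv::Vec2f& lineG) {
    // Step (1): keep only coordinates whose correlation exceeds 0.8.
    cv::Mat binary;
    cv::threshold(corrMap, binary, 0.8, 255.0, cv::THRESH_BINARY);
    binary.convertTo(binary, CV_8U);

    // Step (2): Hough transform on the binary image; OpenCV returns the lines
    // sorted by decreasing accumulator value, so the first line has the most votes.
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(binary, lines, 1.0, CV_PI / 180.0, 20);   // 20-vote threshold is a placeholder
    if (lines.empty()) return false;
    lineG = lines[0];
    return true;
}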
Through the above method, the overlapping straight line B in image I to be stitched is determined. At the same time, searching to the right from the start coordinate of the template signal in image II locates the overlapping straight line B in ridge map (f). As shown by the overlapped line B in (e) and (f), this algorithm can easily guarantee the precision required by the subsequent counting task.
3. EXPERIMENTS AND ANALYSIS
In this section, we will conduct extensive
experiments and testing to verify the performance
of the system based on our proposed algorithms.
The image data were captured from a diversity of
substrate samples. The image stitching software
was developed with a hybrid programming of
LabView and C++ language. The configuration
of the industrial computer is 2.7 GHz dual-core
CPU and 4 GB RAM, and the execution time
(including image acquiring time) is about 100ms
for the whole image stitching algorithm.
To verify that our algorithm can satisfy the requirements of measuring various kinds of substrates, a diversity of samples with different material and surface properties are used for testing. As shown in Fig. 7, a series of stacked substrates is arranged sequentially in the top row, and the stitched images derived by our proposed method are in the bottom row. It can be seen that these stacked substrates obviously vary in geometric and photometric parameters such as stripe width, inter-stripe gap, substrate color, brightness, and contrast. Without further preprocessing, our proposed method was directly applied to these stack images using the same fixed parameters. An exception is the printed paper stack, where the indistinguishable interval between neighboring substrates makes it difficult to recognize the stripes; this is reflected by the broken or incomplete ridge lines in Fig. 7. However, such missed detection of stripes only rarely happened, and only in local regions.
Figure 7. Four types of samples with different characteristics.
In this paper, batch on-line inspections were carried out for 0.08 mm, 0.1 mm, 0.11 mm, and 0.23 mm thick stacks of different samples. The statistics show that the instrument ensures a recall rate of 99.9% at an accuracy rate of 100%, and there is no need to adjust the parameters when switching between laminate products of different thickness. The instrument therefore greatly reduces the input of manpower and material resources and improves production efficiency, fully meeting the requirements of industrial applications.
To further verify the performance of the algorithm, we tested images of four different types of paper collected by the laminated paper counting apparatus. Experimental results on 50 sets of images show that the algorithm quickly and accurately locates the overlapping lines between the images in every case.
4. CONCLUSION
This algorithm is mainly applied to stacked paper counting instruments in engineering practice, whose primary purpose is to count paper products accurately. Therefore, the stitching must be both real-time and highly precise. In terms of real-time performance, since the counting equipment is fixed once assembled and the parameters between cameras remain relatively constant, and in order to eliminate long-term vibration disturbances, the rotation angle and scale corrections can be performed only at each start-up; the resulting rotation difference angle and scale ratio are then reused directly in the subsequent calculations. This greatly accelerates the algorithm while ensuring accuracy. The guarantee of accuracy lies in the fact that the algorithm abandons the traditional idea of stitching and fusing the images into a panorama before counting. Instead, it first performs line detection and then locates the overlapping lines from the ridge images. This method guarantees the accuracy of the subsequent paper counting to the greatest degree.
REFERENCES
[1]. Fang Chao, Tan Wei, Du JianHong. Research
on Solar Wafer Counting Based on Texture
Features[J]. Information and Electronic
Engineering, 2011, 9(2): 185-189.
[2]. Wu P H, Kuo C H. A counting algorithm and
application of image-based printed circuit boards
[J]. 2009.
[3]. Uchida I, Hirata A. Suction head in a paper sheet counting machine: U.S. Patent 4,262,896[P]. 1981-4-21.
[4]. Harba R, Berthe B, Perdoux D, et al. Card-counting device: U.S. Patent Application 12/597,678[P]. 2008-4-23.
[5]. Di Stefano L, Mattoccia S, Tombari F. An algorithm
for efficient and exhaustive template matching[M]//
Image Analysis and Recognition. Springer Berlin
Heidelberg, 2004: 408-415.
[6]. Lowe D G. Distinctive image features from scale-
invariant keypoints[J]. International journal of
computer vision, 2004, 60(2): 91-110.
[7]. Bay H, Ess A, Tuytelaars T, et al. Speeded-up
robust features (SURF)[J]. Computer vision and
image understanding, 2008, 110(3): 346-359.
[8]. Rublee E, Rabaud V, Konolige K, et al. ORB: An
efficient alternative to SIFT or SURF[C]//2011
International conference on computer vision.
IEEE, 2011: 2564-2571.
[9]. Lan Hong, Hong YuHuan, Gao XiaoLin. Research
and Application of Image Registration Technology
for Panoramic Mosaic[J]. Computer Engineering
and Science, 2016, 38(02): 317-324.
[10]. Zhu Lin, Wang Ying, Liu ShuYun, et al. Fast image
stitching algorithm based on improved fast robust
features[J]. Journal of Computer Applications,
2014, 34(10): 2944-2947.
[11]. Estrada R, Tomasi C, Cabrera M T, et al. Enhanced
video indirect ophthalmoscopy (VIO) via robust
mosaicing[J]. Biomedical optics express, 2011,
2(10): 2871-2887.
[12]. Wang K, Shi T, Liao G, et al. Image registration
using a point-line duality based line matching
method[J]. Journal of Visual Communication and
Image Representation, 2013, 24(5): 615-626.