Tạp chí Nghiên cứu khoa học, Trường Đại học Sao Đỏ, ISSN 1859-4190, Số 3 (70) 2020
A novel system of drowsiness detection for drivers using Raspberry
Một hệ thống chống ngủ gật cho các lái xe sử dụng Raspberry
Pham Viet Hung 1, Nguyen Trong Cac 2
Email: phamviethung@vimaru.edu.vn
1Vietnam Maritime University, Vietnam
2Sao Do University, Vietnam
Date received: 11/7/2020
Date of review: 30/9/2020
Accepted date: 30/9/2020
Abstract
In this paper, a novel algorithm and a novel system for drowsiness detection are proposed. We used infrared imaging and a Raspberry Pi to design an efficient and user-friendly drowsiness detection system. The designed system can detect the human face in both daytime and nighttime light, helping to reduce traffic accidents and save many lives. The system worked well in many test cases.
Keywords: Drowsiness detection system; interactive system; image processing; human face recognition;
driver drowsiness.
Tóm tắt
This paper proposes an algorithm and a system for preventing drowsy driving. The system uses infrared light and is implemented on the Raspberry Pi platform to detect drowsiness efficiently and in a user-friendly way. It is designed to recognize the driver's face in both daytime and nighttime lighting in order to detect signs of drowsiness, thereby reducing traffic accidents and saving many lives. Test results show that the system works effectively under many different conditions.
Từ khoá: drowsiness detection system; interactive system; image processing; face recognition; drowsy driving.
1. INTRODUCTION
A traffic accident is a serious threat to human life. The Association for Safe International Road Travel (ASIRT) points out that traffic accidents currently rank ninth among the causes of human death (after epidemics, war, etc.) and that, if the situation does not improve, they will rank fifth by 2030 [1]. One of the main causes of traffic accidents is drowsy driving. According to estimates by the United States International Traffic Safety Administration, about 328,000 traffic accidents occur each year due to driver drowsiness and fatigue, resulting in about 6,400 deaths and causing $109 billion in damage annually [2]. Research by the agency also found that 52% of heavy-truck crashes were caused by driver drowsiness, and that 37% of surveyed adults said they had driven while drowsy at least once.
Drowsiness is a common symptom of fatigue, for example after driving continuously for long periods; the driver's ability to observe and react is then greatly reduced, so dangerous situations cannot be avoided in time when approaching obstacles or other vehicles. Drowsiness therefore seriously impairs the ability to drive: just a few seconds of drowsiness can cause an accident with serious consequences. Faced with increasingly complicated traffic conditions, traffic safety has become an extremely important and urgent issue for many countries around the world.
Recently, drowsiness detection methods have received special attention from many researchers working on smart cars. These methods can be divided into three main groups:
(1) Based on the vehicle;
(2) Based on the physiology of the driver;
(3) Based on the driving behavior.
To detect drowsiness, the vehicle-based methods use several measurements such as deviations from the lane position, the distance between the driver's vehicle and the vehicle in front of it, steering wheel movement, pressure on the accelerator pedal, etc.
These quantities are monitored continuously by placing sensors on vehicle components such as the steering wheel and accelerator pedal and by analyzing the data obtained from these sensors. Any change that exceeds the allowable level signals the likelihood of drowsiness. However, because these systems depend strongly on road quality and lighting, they only work on highways and in limited situations. Another drawback is that they cannot detect drowsiness before it affects the vehicle's behavior: when a driver is drowsy but the vehicle is still in the proper lane, these systems detect nothing [3].
The physiological methods use signals such as the electroencephalogram (EEG) and electrocardiogram (ECG) to detect drowsiness [4, 5]. The EEG is the physiological signal most commonly used for drowsiness detection, and the energy spectrum of EEG brain waves is also used as an indicator of drowsiness. From the ECG, two indicators commonly used to detect drowsiness are heart rate (HR) and heart rate variability (HRV). Studies show that heart rate changes significantly between alert and fatigued states, so heart rate can be used to detect drowsiness; many other researchers measure sleepiness through changes in heart rate variability. The main drawback of the physiological methods is that they inconvenience the driver, who must wear a body-mounted sensor (such as an EEG cap) while driving.
The methods based on the driver's behavior, and in particular the computer vision approaches, are more widely used because they neither annoy nor disturb the driver [6, 7]. Therefore, to help reduce traffic accidents caused by driver drowsiness, this paper proposes a real-time drowsiness detection system with an infrared camera, built on the Raspberry Pi, for use in both daytime and nighttime lighting conditions. An efficient and user-friendly drowsiness detection system can help reduce traffic accidents, save many lives, and make the world a safer and better place.
Regarding image recognition with the Raspberry Pi, the authors of [8] used the Raspberry Pi as a framework for face detection, and in [9] a real-time emotion recognition system was implemented on the Raspberry Pi II.
The rest of the paper is organized as follows: Section 2 introduces Python, OpenCV, and related libraries; Section 3 presents the Raspberry Pi platform; Section 4 describes the proposed algorithm; Section 5 presents the experimental results and discussion; and Section 6 draws some conclusions.
2. PYTHON, OPENCV, DLIB AND SOME RELATED LIBRARIES
2.1. Python
Python is a popular object-oriented programming language created by Guido van Rossum in Amsterdam in the early 1990s. It is developed as an open-source project managed by the non-profit Python Software Foundation. Python is a powerful yet easy-to-learn language, highly productive thanks to its simple data structures and effective support for object-oriented programming.
- Compared to the Unix shell, Python supports larger programs and provides more structure.
- Compared to C, Python provides more error-checking mechanisms. It also has high-level data types, such as flexible arrays and dictionaries, which would take a long time to build in C.
- Compared to the Unix shell, Python supports
larger programs and provides more structure.
- Compared to C, Python provides more error
checking mechanisms. It also has advanced data
types, such as flexible arrays and dictionaries,
which will take a long time to build in C.
2.2. OpenCV
OpenCV is a leading open-source library for computer vision, image processing, and machine learning, with GPU-acceleration features for real-time operation. OpenCV is released under the BSD license, so it is completely free for both academic and commercial purposes. It has C++, C, Python, and Java interfaces and supports Windows, Linux, Mac OS, iOS, and Android. OpenCV is designed for computational efficiency with a strong focus on real-time applications; written in optimized C/C++, the library can take advantage of multi-core processing.
2.3. Dlib
In addition to OpenCV, we used Dlib, an open-source library, in the system implementation. Dlib was created in 2002 by Davis King and is written in C++. Unlike OpenCV, whose purpose is to provide algorithmic infrastructure for computer vision and image processing applications, Dlib is designed for machine learning and artificial intelligence applications.
2.4. Some related libraries
- NumPy library: supports numerical computation, data processing, and matrix calculations.
- SciPy library: a library of algorithms and mathematical tools for Python.
- Pygame library: built on SDL (Simple DirectMedia Layer), a library supporting 2D game development; this paper uses its music player support.
- Imutils library: provides convenience functions for basic image processing such as rotation, translation, resizing, and framing.
- Argparse library: parses parameters and options on the command line.
- Threading library: used to run multiple threads (tasks, function calls) at the same time. Note that this does not mean they are executed on different CPUs.
- Time library: provides multiple ways of representing time in code (objects, numbers, and strings). It also provides functions beyond timekeeping, such as waiting during code execution and measuring code performance.
- RPi.GPIO library: helps control (program) the Raspberry Pi's GPIO pins.
3. THE RASPBERRY PI COMPUTER
The Raspberry Pi was first released in 2012. Originally a plug-in computer board created by developers in the UK, it was later developed into a single board that functions as a mini-computer for high school teaching. It is developed by the Raspberry Pi Foundation, a non-profit organization, with the aim of building a system that many people can use and customize for different jobs. The Raspberry Pi is manufactured by three OEMs, Sony, Qisda, and Egoman, and distributed mainly by Element14, RS Components, and Egoman. Although slower than modern laptops and desktops, the Raspberry Pi is still considered a complete Linux computer and can provide all the capabilities that users expect, at low power consumption [10, 11].

Fig 1. Hardware structure

As shown in Fig 1, the Raspberry Pi 3B+ (the third-generation model), launched in 2018 with a 1.4 GHz processor and 1 GB of RAM, is used in the proposed system.
4. THE ALGORITHM OF DROWSINESS DETECTION
A camera is set up to track the driver's face. To detect whether a person is drowsy, only the eye area is needed. Once the eye area is located, the eye contour is used to determine whether the eye is closed. If the eyes have been closed for long enough, it can be assumed that the driver is at risk of falling asleep, and an alert is raised in time to attract the driver's attention and force them to interact, reducing drowsiness.
The algorithm flowchart of drowsiness detection is shown in Fig 2.

Fig 2. The algorithm flowchart of drowsiness detection: begin; import libraries; initialize variables, GPIO pins, and functions; enable the live video stream and process each frame; identify facial landmarks; extract the left and right eye coordinates and the eye ratio; if the eye ratio is below 0.3, increase COUNTER by 1, otherwise reset COUNTER to 0; if COUNTER >= 8 and the alarm is enabled, turn on the alarm; the driver must interact to turn off the sound; the loop ends when the driver turns off the program.
4.1. Image from camera
To access the camera, the Imutils library is used, a set of image processing functions that make working with OpenCV easier [12]. Initially, the program establishes a camera connection and grabs each frame for processing, as shown in Fig 3.
Fig 3. Pictures are extracted from the camera
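As an illustration of this step, below is a minimal Python sketch of the camera connection and frame-grabbing loop, assuming a Pi camera accessed through imutils' VideoStream wrapper; this is not the authors' exact code.

```python
# A minimal sketch (assumed names, not the authors' exact code) of the
# camera connection and frame-grabbing step using imutils' VideoStream.
from imutils.video import VideoStream
import cv2
import time

vs = VideoStream(usePiCamera=True).start()  # open the Raspberry Pi camera
time.sleep(2.0)                             # give the sensor time to warm up

while True:
    frame = vs.read()                       # grab the current frame
    cv2.imshow("Frame", frame)              # show it (processing goes here)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to stop the stream
        break

cv2.destroyAllWindows()
vs.stop()
```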
4.2. Preprocessing
An infinite loop over the frames of the video is started [13]. Each frame is then preprocessed by resizing it appropriately and converting it to grayscale.
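A minimal sketch of this preprocessing, assuming `frame` comes from the capture loop in Section 4.1:

```python
# Sketch of the preprocessing step applied to each grabbed frame:
# shrink the frame for speed, then convert it to grayscale.
import imutils
import cv2

frame = imutils.resize(frame, width=450)        # resize accordingly
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # convert to grayscale
```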
4.3. Face detection
In this paper, we selected the HOG face detector in Dlib to locate the human face in the image [14]. This is a widely used face detection model based on HOG features and an SVM classifier [15, 16]. The model is built from 5 HOG filters: front view, left view, right view, front view rotated left, and front view rotated right. The training dataset consists of 2,825 images taken from the LFW dataset and manually annotated by Davis King, the author of Dlib.
The landmark method starts from a training set of facial markers labeled on images. These images are manually labeled by specifying the (x, y) coordinates of the regions around each facial structure. More specifically, using the probability of the distance between pairs of input pixels, a group of regressors is trained on this data to estimate the facial landmark positions directly from pixel intensities (no feature extraction takes place). The result is a facial landmark detector that can run in real time with high-quality predictions, as shown in Fig 4.
Fig 4. Facial landmarks are used to label and identify
key facial attributes in an image
- Advantages: a high-precision method; it works very well for frontal and slightly non-frontal faces; basically, this method works in most cases.
- Disadvantages: the main drawback is that it does not detect small faces, because it is trained for a minimum face size of 80×80, so the face must be larger than this minimum (although a detector can be trained for smaller faces). It also does not work for profile or strongly non-frontal views, such as looking down or up too far.
Given these advantages and disadvantages and the application in this paper, we chose the HOG face detector.
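For illustration, a minimal sketch of this detection step with Dlib's public API, applied to the grayscale frame from Section 4.2:

```python
# Sketch of face detection with Dlib's HOG + SVM frontal face detector,
# applied to the grayscale frame from the preprocessing step.
import dlib

detector = dlib.get_frontal_face_detector()  # the 5-filter HOG model
rects = detector(gray, 0)  # bounding boxes; 0 = no upsampling of the image
```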
4.4. Facial Landmarks
The next step is to apply Dlib's 68-point structural landmark algorithm to the detected face area to locate each important region of the face. Such regions include the eyebrows, eyes, nose, mouth, and face contour. The result is then converted to a NumPy array, as shown in Fig 5.
Fig 5. Visualizing the 68 facial landmark coordinates
from the iBUG 300-W dataset
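A minimal sketch of this landmark step; the model file name shape_predictor_68_face_landmarks.dat refers to the publicly distributed iBUG 300-W model and is an assumption, not a detail given in the paper:

```python
# Sketch of the 68-point landmark step; the model file name below is the
# publicly distributed iBUG 300-W model, assumed here.
import dlib
from imutils import face_utils

predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
for rect in rects:
    shape = predictor(gray, rect)           # 68 landmarks for this face
    shape = face_utils.shape_to_np(shape)   # convert to a (68, 2) NumPy array
```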
Fig 6. The eye area in the image
4.5. Extract the Eye Area
Because this paper focuses on the state of the eyes, only the eye area is needed. Using NumPy array slicing, the (x, y) coordinates of the left and right eyes can be extracted, as shown in Fig 6.
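A minimal sketch of this extraction, assuming the `shape` array from Section 4.4; the named index ranges come from imutils:

```python
# Sketch of extracting the eye regions by NumPy slicing; imutils exposes
# the 68-point index ranges for each facial region by name.
from imutils import face_utils

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
leftEye = shape[lStart:lEnd]    # six (x, y) points of the left eye
rightEye = shape[rStart:rEnd]   # six (x, y) points of the right eye
```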
4.6. Calculate eye ratio
From the (x, y) coordinates of both eyes, the eye ratio is calculated; we recommend using the ratios of both eyes to get a good estimate. Each eye is represented by 6 (x, y) coordinates, starting at the left corner of the eye (as if you were looking at that person) and then working clockwise around the rest of the region, as shown in Fig 7.
Fig 7. The eye ratio
In Fig 7, the top left is a visualization of the eye landmarks when the eye is open, the top right shows the eye landmarks when the eye is closed, and the bottom plots the eye aspect ratio over time; the dip in the eye aspect ratio indicates a blink.
According to [16] and [17], the eye aspect ratio (EAR), which reflects the relationship between the width and the height of the eye coordinates, is

$$\mathrm{EAR} = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2\,\lVert p_1 - p_4 \rVert} \tag{1}$$

where $p_1$ to $p_6$ are the landmark positions of the eye.
At the top left of the image the eye is completely open; the eye ratio there is large and relatively stable over time. However, once the eye blinks or closes (upper right of the image), the eye ratio drops significantly toward zero. Drowsiness detection is based on this behavior.
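For illustration, Eq. (1) can be sketched in Python as follows, averaging both eyes as recommended above; the helper name eye_aspect_ratio is ours, not the paper's:

```python
# Sketch of Eq. (1): the eye aspect ratio from the six eye landmarks,
# averaged over both eyes.
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    A = dist.euclidean(eye[1], eye[5])   # ||p2 - p6||
    B = dist.euclidean(eye[2], eye[4])   # ||p3 - p5||
    C = dist.euclidean(eye[0], eye[3])   # ||p1 - p4||
    return (A + B) / (2.0 * C)

ear = (eye_aspect_ratio(leftEye) + eye_aspect_ratio(rightEye)) / 2.0
```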
4.7. Detect Drowsiness
We begin drowsiness detection by pre-setting the following values:
- Eye threshold: EYE_AR_THRESH = 0.26, used to determine whether the eyes are closed or open. This threshold for the eye ratio was determined from the test procedure in Table 1.
- The COUNTER variable counts the total number of consecutive frames in which the person has closed their eyes.
- Frame count: EYE_AR_CONSEC_FRAMES = 6, used to decide whether the driver is awake or falling asleep. (Under the actual conditions of this paper, 3-8 frames can be processed per second, so we chose 6 closed-eye frames, equivalent to about 2 s of closed eyes.)
- Initially, the alarm is turned off: ALARM_ON = False.
Table 1. The eye ratio with eyes open and closed for some test subjects

Experimental person   Open eyes   Closed eyes
Wear glasses
1                     0.29        0.19
2                     0.33        0.18
3                     0.38        0.23
4                     0.30        0.16
5                     0.27        0.10
No glasses
6                     0.32        0.25
7                     0.27        0.16
8                     0.34        0.22
9                     0.33        0.14
10                    0.26        0.13
Next, we check whether the calculated eye ratio (EAR) is below the EYE_AR_THRESH threshold to determine whether the eye is closed or open:
- If the computed EAR is less than the EYE_AR_THRESH threshold, increase the COUNTER variable.
- If COUNTER exceeds the pre-set EYE_AR_CONSEC_FRAMES, we assume the person is dozing off and start the warning. Conversely, if the eye ratio is greater than the threshold, or the number of consecutive closed-eye frames does not exceed EYE_AR_CONSEC_FRAMES, reset COUNTER to 0 and turn off the warning.
- Repeat this task throughout the recording to detect drowsiness.
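Putting the pre-set values and the checks above together, a minimal sketch of the per-frame decision logic (not the authors' exact code):

```python
# Sketch of the pre-set values and the per-frame decision logic; the frame
# loop itself is sketched in Section 4.1.
EYE_AR_THRESH = 0.26       # eye ratio below this counts as "closed"
EYE_AR_CONSEC_FRAMES = 6   # about 2 s of closed eyes at 3-8 fps
COUNTER = 0                # consecutive closed-eye frames seen so far
ALARM_ON = False           # the alarm starts out off

# ... inside the frame loop, after computing `ear` as in Section 4.6:
if ear < EYE_AR_THRESH:
    COUNTER += 1
    if COUNTER >= EYE_AR_CONSEC_FRAMES and not ALARM_ON:
        ALARM_ON = True    # the driver is dozing off: start the warning
else:
    COUNTER = 0            # eyes open again: reset the counter
    ALARM_ON = False       # and turn off the warning
```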
4.8. Warning
After the driver is determined to be dozing off, ALARM_ON is enabled and the alert sound is turned on. The Pygame library, conveniently installed via the command "pip install pygame", is used to turn the alarm on and off.
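A minimal sketch of the alarm control with Pygame's mixer; the file name alarm.wav is a placeholder for the actual alert sound, which the paper does not name:

```python
# Sketch of the audible alarm via Pygame's mixer; "alarm.wav" is a
# placeholder file name, not specified in the paper.
import pygame

pygame.mixer.init()
pygame.mixer.music.load("alarm.wav")
pygame.mixer.music.play(-1)   # -1 loops the sound until stopped

# ... later, once the driver interacts as described in Section 4.9:
pygame.mixer.music.stop()
```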
4.9. Interaction
To partially counteract the driver's drowsiness, the driver is required to interact with the warning device through a push button in order to turn off the alarm. The requirement is that the driver must open their eyes and press the push button; if the eyes are closed, the alarm will not turn off. The push button is placed where it is easiest for the driver to press without affecting the driving process.
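A minimal sketch of this interaction with the RPi.GPIO library; the BCM pin number 17 is an assumed wiring choice, not specified in the paper:

```python
# Sketch of the push-button interaction with RPi.GPIO; BCM pin 17 is an
# assumed wiring choice.
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # pressed button pulls low

# Inside the frame loop: only silence the alarm if the button is pressed
# while the eyes are open (EAR at or above the threshold).
if ALARM_ON and GPIO.input(17) == GPIO.LOW and ear >= EYE_AR_THRESH:
    pygame.mixer.music.stop()
    ALARM_ON = False
```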
5. EXPERIMENTAL RESULTS
The fabricated system was tested under various conditions, including good and low lighting with the eyes open and closed, as shown in Fig 8, Fig 9, Fig 10, Fig 11, and Fig 12; the accuracy and warning speed of the device were also checked, as reported in Table 2, Table 3, and Table 4.
Fig 8. Experimental results in good light conditions
(for people without glasses)
Fig 9. Experimental results in good lighting conditions
(for spectacles wearers)
Fig 10. Experimental results in low light conditions
(for people without glasses)
Fig 11. Experimental results in low light conditions
(for spectacles wearers)
Table 2. Detection and alert time for people without glasses (unit: sec)

Good light
Trial     Person 1  Person 2  Person 3  Person 4  Person 5
1st       3.64      2.07      3.30      2.02      3.21
2nd       2.01      3.11      2.92      2.11      3.33
3rd       3.02      2.54      3.19      2.10      2.32
4th       2.20      2.46      2.99      2.94      2.87
5th       3.25      2.86      3.22      2.10      2.46
Average   2.82      2.61      3.12      2.25      2.84    Overall average: 2.73

Low light
Trial     Person 1  Person 2  Person 3  Person 4  Person 5
1st       2.93      2.30      2.05      2.72      3.02
2nd       2.59      2.47      2.18      2.74      2.36
3rd       2.73      3.51      2.32      2.61      2.12
4th       2.85      2.65      3.18      2.73      2.87
5th       2.65      2.40      3.10      3.01      2.39
Average   2.75      2.67      2.57      2.76      2.55    Overall average: 2.66
Table 3. Detection and alert time for people with glasses (unit: sec)

Good light
Trial     Person 1  Person 2  Person 3  Person 4  Person 5
1st       2.02      2.20      3.37      2.06      2.57
2nd       2.65      2.84      2.97      2.23      2.49
3rd       2.92      3.69      2.86      2.60      3.15
4th       2.32      2.28      2.03      3.18      3.23
5th       3.31      2.10      2.65      3.12      2.78
Average   2.64      2.62      2.78      2.64      2.84    Overall average: 2.70

Low light
Trial     Person 1  Person 2  Person 3  Person 4  Person 5
1st       3.79      3.30      3.56      3.01      2.63
2nd       2.26      2.79      2.79      3.12      3.02
3rd       2.14      2.78      3.51      2.68      3.28
4th       3.01      2.79      3.19      2.10      2.89
5th       2.72      2.93      2.41      2.56      2.62
Average   2.78      2.92      3.09      2.69      2.89    Overall average: 2.87
Table 4. Device accuracy test ('T' = correct identification, 'F' = incorrect identification)

                     Good light          Low light
Experimental person  1st   2nd   3rd     1st   2nd   3rd
No glasses
1                    T     T     T       T     T     T
2                    F     T     T       T     T     T
3                    T     T     T       T     T     F
4                    T     T     T       F     T     T
5                    T     T     T       T     F     T
6                    T     T     F       T     T     T
7                    T     T     T       T     T     T
8                    T     T     T       F     T     T
9                    F     T     T       T     T     T
10                   T     T     T       T     T     T
Percentage           90%                 86.67%
Wear glasses
11                   T     T     T       T     T     F
12                   T     T     F       T     T     T
13                   F     T     T       T     F     T
14                   T     T     T       T     T     T
15                   F     T     T       F     T     T
16                   T     F     T       T     T     F
17                   T     T     F       T     T     T
18                   F     T     T       T     F     T
19                   T     T     F       T     T     F
20                   F     T     T       T     T     T
Percentage           73.33%              80%
Overall percentage: 82.5%
The test results show that the accuracy and warning time of the device are not yet maximal, due to factors such as the hardware configuration and the camera view; these can be overcome by adjusting the camera view toward the driver and using a stronger hardware configuration. Currently, the device's average detection-and-alert time is 2.74 sec, which is adequate for drowsiness alerts.
The accuracy tables, in which 'T' denotes a correct identification and 'F' an incorrect one, show that the overall accuracy of the device is about 82.5%. Accuracy drops for people who wear glasses, especially in bright light: the device is not yet reliable for drivers wearing transparent glasses, but it performs very well for people who do not wear glasses. The real-time drowsiness detection system works well in both good and low light. The team will try to raise the accuracy for people wearing transparent glasses to a better level.
Fig 12 shows the finished device, consisting of the Raspberry Pi 3B+ hardware, an infrared Pi camera, a speaker for the alarm, a button to turn off the alarm, and a power button. Both the alarm and the central hardware run on a 5 V supply at 2-3 A, which can easily be provided directly in cars.
Fig 12. Drowsiness Detection System
6. CONCLUSION
The proposed system was built and tested on the Raspbian operating system and the Raspberry Pi 3 with the support of the open-source OpenCV and Dlib libraries. It works well under complicated real conditions, is not too cumbersome, and can operate around the clock; and thanks to the interaction between the driver and the device, the driver's drowsiness can be reduced.
According to the analyzed statistics, drowsy driving at night accounts for a very large proportion of incidents. Compared with several articles on the same topic, which do not report specific real-time parameters of the alarm detector, the accuracy of the device under different conditions, or, especially, the ability to operate at night (daytime and nighttime conditions are very different), this research has largely met these requirements.
However, this work still has some shortcomings. When drivers wear colored glasses, eye identification is impossible, and with transparent glasses the accuracy is also reduced due to the reflection of sunlight. If the driver tilts or rotates the head left or right by more than 45 degrees, or looks up or down by more than 30 degrees, eye detection may be inaccurate.
REFERENCES
[1] Association for Safe International Road
Travel (2015), Road crash statistics (update
01/8/2020).
[2] M. J. Flores, J. M. Armingol, A. D. Escalera (2010), Driver drowsiness warning system using visual information for both diurnal and nocturnal illumination conditions, EURASIP Journal on Advances in Signal Processing, pp. 1-23.
[3] V. Triyanti, H. Iridiastadi (2017), Challenges in detecting drowsiness based on driver's behavior, Materials Science and Engineering, Vol. 277, article 012042.
[4] C. D. Hoang , P. K. Nguyen, V. D. Nguyen
(2013), A Review of Heart Rate Variability and
its Applications, APCBEE Procedia, Vol 7, pp.
80-85.
[5] U. R. Acharya, K. P. Joseph, N. Kannathal, C. M. Lim, J. S. Suri (2007), Heart rate variability: A review, Medical & Biological Engineering & Computing, pp. 1031-1051.
[6] T. H. Lam, V. L. V, M.T. Ha, N. T. Do
(2012), Modeling the Human Face and its
Application for Detection of Driver Drowsiness,
International Journal of Computer Science
and Telecommunications, Vol. 3, pp. 56-59.
[7] T. H. Lam (2017), Develop some techniques for
detecting drowsy driving based on eye states
and head nodding behavior, Doctoral Thesis.
[8] Gupta, Ishita, et al (2016), Face detection
and recognition using Raspberry Pi, IEEE
International WIE Conference on Electrical
and Computer Engineering (WIECON-ECE),
pp. 83-86.
[9] Suja, P., and Shikha Tripathi (2016), Real-time
emotion recognition from facial images using
Raspberry Pi II, 3rd International Conference
on Signal Processing and Integrated
Networks (SPIN), pp. 666-670.
[10] T. D. Orazio, M. Leo, A. Distante (2004), Eye detection in face images for a driver vigilance system, IEEE Intelligent Vehicles Symposium, University of Parma, Italy, 4 pages.
[11] M. Simon (2013), Programming the Raspberry Pi: Getting Started with Python, Preston, UK.
[12] H. Joseph, J. Prateek, B. Michael (2016),
OpenCV: Computer Vision Projects with
Python, Packt Publishing.
[13] T. H. Nguyen (2013), Curriculum: Image
processing, National University of Ho Chi
Minh City.
[14] T. Rajeev (2018), Real-Time Face Detection
and Recognition with SVM and HOG Features
(update 01/8/2020).
[15] U. Mehmet, Jen. Sheng (2018), MAKER: Facial Feature Detection Library for Teaching Algorithm Basics in Python, 2018 ASEE Annual Conference & Exposition, Board 137, 4 pages.
[16] G. Vikas (2018), Face Detection – OpenCV,
Dlib and Deep Learning (update 01/8/2020).
[17] S. Tereza, C. Jan (2016), Real-Time Eye Blink Detection using Facial Landmarks, 21st Computer Vision Winter Workshop, Rimske Toplice, Slovenia, 6 pages.
AUTHORS BIOGRAPHY
Nguyen Trong Cac
- Nguyen Trong Cac is a Ph.D. student at Hanoi University of Science and Technology (HUST), Vietnam, where he has been since 2011. He received his M.Sc. degree from HUST in 2005. Since 2006 he has been working at Saodo University, Vietnam.
- Areas of interest: Industrial Informatics and Embedded Systems, Networked
Control Systems (NCS).
- Email: cacdhsd@gmail.com
- Mobile: 0904369421
Pham Viet Hung
- Pham Viet Hung received his B.Eng. and M.Sc. degrees in Electronics and Telecommunications from Hanoi University of Science and Technology in 2003 and 2007, respectively. Since 2003 he has been working at the Faculty of Electric and Electronics, Vietnam Maritime University. His research interests include signal processing for Global Navigation Satellite Systems, digital transmission, and maritime communications.
- Email: phamviethung@vimaru.edu.vn
- Mobile: 0916.588.889