A 3D image capture system using a laser range finder for autonomous mobile robots.

 1. Introduction

An important task of an autonomous mobile robot is to build a map of its environment from sensory data. Using the coordinates of the surrounding objects in this map, navigation and other robotic operations can be solved. Several distance-measuring sensors are available for obstacle detection and path planning, each with its own advantages and limitations. The ultrasonic sensor is a low-cost device and produces results faster than other devices. However, ultrasonic range measurements suffer from fundamental drawbacks that limit their usefulness for mapping indoor environments: the accuracy is affected by phenomena such as beam spread or cross-talk reflection of the sonar beam [1]. Video cameras are now widely used on mobile robots, but the image data depend strongly on lighting conditions and on the surface texture of objects. Moreover, ordinary vision systems cannot directly measure a geometric parameter such as the distance to an object. A stereo camera can partly overcome this problem but requires heavy computation and yields limited accuracy. An alternative is a laser scanner, or laser range finder (LRF). One advantage of the laser scanner is its ability to collect distance measurements at a high rate and with high accuracy; another is that the result depends little on environmental conditions [2][3]. A laser scanner is a sensor based on the time-of-flight measurement principle (laser radar). A single laser pulse is emitted and reflected by an object surface within the range of the sensor. The elapsed time between emission and reception of the pulse is used to calculate the distance between the object and the sensor. An integrated rotating mirror sweeps the laser pulses over a radial range in front of the scanner, so that a two-dimensional measurement field is defined as shown in figure 1.
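For reference, the time-of-flight relation behind the range measurement (not written out explicitly above) is simply

    R = c · Δt / 2

where Δt is the measured round-trip time of the pulse and c is the speed of light.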

 

Figure 1. 2D laser beam scanning plane

However, because the pitching angle of the scanning plane is fixed, the information in this 2D image may cause problems with overhanging objects, as shown in the figure. Here only the legs of a table can be detected, not its flat surface and base. In this case a 3D image is necessary [2]. As the 2D scanner is popular and low-cost, some groups have tried to build a 3D laser range finder based on it. The most popular solution is to use a standard 2D scanner together with a mechanical scanning actuator to reach the third dimension. Several scanning methods have been used, namely the pitching scan, rolling scan, yawing scan, etc. [4][6].

In this work, we report a 3D image capture system using the pitching-scan model with improvements in the mechanical and electronic designs. The details of building the 3D laser range finder based on the 2D laser scanner are presented in section 2. The hardware and software for interfacing the laser scanner with the computer in order to receive accurate data are reported in sections 3 and 4. The experimental results are presented in section 5. Section 6 deals with the idea of using a pipelining technique with an FPGA chip in order to increase the data processing performance of the system.

2. Building the 3D-laser range finder

A 2D laser range finder LMS-221 was used in our system [8]. The LMS has a view angle of 100°, scanned with angular resolutions of 0.25°, 0.5° or 1°. Although our system is also of the pitching-scan type, its mechanism differs from those used by Oliver Wulf et al. [4] and Alastair Harrison et al. [5]. In their works, the base of the LMS is turned continuously in order to reach a constant pitching angular speed. Consequently, the electric cables must be replaced by a slip ring, which provides continuous contact for power and signal transfer. This may cause instability in the system due to problems of electrical contact. In our system, on the other hand, the base of the LMS is designed to turn up and down within a limited angular range, as shown in figure 2, so that the electric wires can be fixed without using slip rings.

 

Figure 2.  Pitching scan method with turning up and down

These half rotations are called a pitching-up scan or a pitching-down scan. During the pitching scan, the horizontal scanning plane is pitched. The measured distance data create a cloud of points distributed on a virtual sphere surrounding the 3D laser scanner. In our design, the data are acquired during the pitching-up scan, while other processes are carried out during the pitching-down scan. The difficulty of stabilizing the pitching speed is overcome by using a PID electronic controller, which is described in the next section. A technical drawing of the mechanical system is given in figure 3 and a picture of the system is shown in figure 4.

Figure 3. Drawing of the turning mechanism of the LRF’s base

The base of the LRF is attached to a steel plate of size 337 mm × 50 mm, which is welded to one end of a link. The other end of the link is a joint (ball bearing) attached at a position along a diameter of a ø120 mm turning disk. This position defines the range of the scanning angle. As the disk rotates continuously, the base turns up and down.

 

Figure 4. The turning mechanism and servo motor.

During the measurement time, two sets of values, the deflection angle β of the laser beam and the distance R, are received from the LRF. Each data set (β, R) of one horizontal scan is combined with a pitching angle α. The pitching speed is determined by experiment. Based on these data, we can define the Cartesian coordinates of an image point as follows (Fig. 5):
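A minimal reconstruction of this transformation, assuming β is measured from the forward direction within the scan plane, α is the pitch of that plane about the horizontal lateral axis, and the offset between the mirror center and the pitch axis is neglected, is

    x = R cos β cos α
    y = R sin β
    z = R cos β sin α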

 

Figure 5. Definition of the co-ordinate of a 3D point

A picture of the mobile robot is shown in figure 6.

 

Figure 6. Picture of the mobile robot with an LRF

3. The electronics for controlling the pitching scan and data acquisition

In the pitching-scan model, in order to obtain a linear image in the z-dimension, the pitching speed must be constant. Because our mechanical system pitches up and down, it forms an asymmetrical moving system; during motion, factors such as the weight and friction of the mechanical parts may affect the stability of the pitching speed. To overcome this drawback we used an electronic servo drive that guarantees a stable motor shaft speed. It is a microprocessor-based electronic circuit with embedded firmware, which controls the DC motor speed by a PID (Proportional-Integral-Derivative) algorithm. The control routine is assigned as a low-level operation in the overall robot program, so that the motor speed is controlled independently; that is, this action does not take processing time away from the robot operating system. The PID coefficients were tuned and the speed measured with KP = 6000, KI = 35 and KD = 20. The speed is determined by counting the pulses of an optical encoder attached to the motor shaft. To obtain a realistic value, each reported speed is the average of 64 velocity measurements, each of which takes 5 ms. The stability of the system was checked by a measuring program written in LabVIEW, and the result is shown in figure 7. While the speed of the motor without PID control is unstable (especially at the moments when the LRF base changes its moving direction between up and down), with PID control the speed variation is only approximately ±5%.
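As an illustration only, the following C++ sketch shows a discrete PID speed loop of the kind described above. The 5 ms sampling period and the gains KP = 6000, KI = 35, KD = 20 are the values quoted in the text (their absolute scale depends on the firmware's internal number format); the set-point and the first-order motor response are invented so that the sketch can run on its own.

    #include <cstdio>

    // Discrete PID controller: output = KP*e + KI*integral(e) + KD*de/dt.
    struct Pid {
        double kp, ki, kd;
        double integral, prevErr;
        double step(double err, double dt) {
            integral += err * dt;
            double deriv = (err - prevErr) / dt;
            prevErr = err;
            return kp * err + ki * integral + kd * deriv;
        }
    };

    int main() {
        const double dt = 0.005;              // 5 ms sampling period, as in the text
        Pid pid{6000.0, 35.0, 20.0, 0.0, 0.0};
        double speed = 0.0;                   // simulated shaft speed (normalized)
        const double target = 1.0;            // assumed set-point for the sketch
        for (int k = 0; k < 64; ++k) {        // 64 samples = one averaged measurement
            double u = pid.step(target - speed, dt);
            // Crude first-order motor model, for illustration only.
            speed += dt * (0.001 * u - 0.5 * speed);
            if (k % 16 == 0) std::printf("t = %.3f s, speed = %.3f\n", k * dt, speed);
        }
        return 0;
    }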



Figure 7. The result of the PID control
4. Development of the program for data acquisition and processing from the laser scanner

The program for receiving and processing data from the LRF was developed in the Microsoft Visual C++ programming environment. The flow chart for capturing data is as follows:

Figure 9. Flow chart for data acquisition

The program needs to determine the start of an LRF output data string by identifying a specific header in the incoming data stream. The 7-byte header is different for each measurement mode.
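A minimal sketch of such a header search is given below, assuming the received bytes are already collected in a buffer. The HEADER constant is a placeholder; the actual 7-byte pattern depends on the selected measurement mode and is not listed in the text.

    #include <cstdint>
    #include <cstddef>
    #include <vector>

    // Placeholder 7-byte header; the real pattern is mode-dependent.
    static const uint8_t HEADER[7] = {0x02, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00};

    // Returns the index in the receive buffer where a measurement string
    // starts, or -1 if the header has not yet arrived.
    long findHeader(const std::vector<uint8_t>& buf) {
        if (buf.size() < sizeof(HEADER)) return -1;
        for (std::size_t i = 0; i + sizeof(HEADER) <= buf.size(); ++i) {
            std::size_t j = 0;
            while (j < sizeof(HEADER) && buf[i + j] == HEADER[j]) ++j;
            if (j == sizeof(HEADER)) return static_cast<long>(i);  // header found
        }
        return -1;  // keep reading from the serial port
    }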

5. Results for 3D-laser image capturing

Figure 10a shows a vision image from a camera for comparison. Figure 10b is a 2D laser image with a horizontal scan plane at a pitching angle of 0°. Because the laser scanner is mounted 40 cm above the ground, only the two legs of a person can be detected. Figure 10c, on the other hand, shows the whole body of the person. Some 3D laser image capturing experiments were carried out in an indoor environment, a room approximately 8 m in size. The number of horizontal scanning lines in one image frame (for the pitching angle range from -5° to 20°) depends on the pitching speed.

Figure 10. The vision image, the 2D-laser image and the 3D-laser image

Depending on the application, a specific pitching time (corresponding to a pitching speed) is selected. For example, the mode with a long pitching time and high angular resolution (42 s / 0.25°) gives 100 scans per frame. This mode is used for image capturing in a static environment: the robot stops at a position and collects data for 42 s in order to build a map of the room. Figure 11a is the vision image captured by a camera and figure 11b is the 3D laser image built from the data of our system. The result shows that the accuracy, resolution and linearity are reasonable. Because the laser image point cloud is distributed over a spherical surface, the point density (and hence the image resolution) is reduced in proportion to the distance from the objects to the LRF.
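As a quick consistency check, assuming the 0.25° figure applies to both the pitch increment and the horizontal angular resolution:

    scans per frame  = (20° − (−5°)) / 0.25° = 100
    points per scan  ≈ 100° / 0.25° ≈ 400
    points per frame ≈ 100 × 400 ≈ 40,000

which agrees with the 40,000-point cloud quoted in section 6.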

6. Pipelining process by FPGA technology

The LRF is used on the mobile robot for several tasks such as obstacle avoidance, navigation and localization. Operations common to these tasks are processes related to filtering, segmentation or feature extraction. With the large amount of data in a 3D laser image (e.g. 80,000 bytes for a cloud of 40,000 points at a resolution of 100° × 0.25°), completing these processes takes considerable time. Moreover, there are additional processes such as the PID control of the pitching motor and the wheel motors, and all of these operations have to run in real time. In a normal PC-based system, these processes are programmed as two phases executed in sequence, as shown in figure 14b. In order to reduce the effective processing time of the system, a pipelining technique is used in our design: the PC serves as a buffer for data acquisition from the LRF, and the data are then transferred to a second processing unit connected serially to the PC.
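A minimal sketch of this two-stage pipeline is shown below. It uses two threads and a hand-over buffer on a single machine purely to illustrate the idea that acquisition of frame k+1 overlaps with processing of frame k; in the actual system the second stage runs on a separate processing unit connected serially to the PC, and the frame contents and "work" are stand-ins.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    int main() {
        std::mutex m;
        std::condition_variable cv;
        std::vector<int> handoff;          // buffer handed from stage 1 to stage 2
        bool hasFrame = false, done = false;

        // Stage 2: stands in for filtering / segmentation / feature extraction.
        std::thread processing([&] {
            for (;;) {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return hasFrame || done; });
                if (!hasFrame) break;                      // no more frames will arrive
                std::vector<int> frame = std::move(handoff);
                hasFrame = false;
                cv.notify_all();                           // stage 1 may hand over the next frame
                lk.unlock();
                std::printf("processing a frame of %zu points\n", frame.size());
            }
        });

        // Stage 1: stands in for data acquisition from the LRF.
        for (int f = 0; f < 3; ++f) {
            std::vector<int> frame(40000, f);              // ~40,000 points per 3D image
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !hasFrame; });        // previous frame must be taken first
            handoff = std::move(frame);
            hasFrame = true;
            cv.notify_all();
        }
        {
            std::lock_guard<std::mutex> lk(m);
            done = true;
        }
        cv.notify_all();
        processing.join();
        return 0;
    }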

Dr. Trần Thuận Hoàng
