Sensor-Fusion Based Navigation for Mobile Robot in Outdoor Environment

Autonomous navigation of vehicles and robots is a challenging and useful task pursued by many scientists and researchers these days. With this in mind, an algorithm for the autonomous navigation of a mobile robot in an outdoor environment is proposed. The navigation track consists of a colored border, obstacles, some unplanned surfaces, and specific points for GPS (Global Positioning System) alarms. The main goal is to avoid the colored border and the obstacles. A webcam is used for vision. First, the colored border is detected with the OpenCV library using the HSI (Hue, Saturation, Intensity) technique. The Canny edge algorithm finds both edges of the border, and the Hough transformation then detects the straight lines on both sides of the track. Finally, the closest border line is identified and its center point is calculated, which the mobile robot uses to steer away and avoid the border. The second step is obstacle avoidance, which is done with an LRF (Laser Range Finder). A detection range is first defined for the LRF, because not all obstacles have to be avoided; only obstacles within this specified range are detected and avoided. Finally, a GPS receiver is used to raise alarms at specific points. As a result, successful navigation of the mobile robot in the outdoor environment is implemented.

INTRODUCTION
Autonomous navigation of vehicles and robots has been studied from many perspectives [1]. Some authors adopted a vision system and encoders to localize the target approximately by introducing the concepts of decision-making space and forward passageway [2]. Others propose a novel navigation method in contrast with appearance-based approaches; that algorithm is based on motion estimation by a camera to plan the robot's next movement and on robust feature matching to recognize home and destination locations [3]. Some control a mobile robot from a control station that receives image data from the robot's camera [4]. Some use differential GPS and odometry data, detecting curbs with a laser range finder and building a map [5]. For boundary detection, many vision techniques based on image segmentation algorithms have been used and validated experimentally [6]. Tracking people from a mobile robot in motion is discussed in [7]. The kinematics of a 4-wheeled mobile robot has been derived by combining two 2-wheeled mobile robots [8]. Motion in skid-steering mobile robots is achieved by varying the speed of the wheels on opposite sides [10]. For vision, some references use a hybrid localization approach that switches between local image features, under strong illumination changes, and global image features [11]. Some authors have used both vision and laser data to track moving obstacles such as humans in outdoor environments, depending on the speed of the obstacles [12]. A hybrid approach is used to decrease the computational time of the type-reduction process [13]. Navigation mainly depends on parameters such as perception, localization, cognition, and motion control [14].
The WLPS (Wireless Local Positioning System) provides local positioning with sufficient coverage, reliability, and accuracy [15]. This paper has two main portions: in the first, the hardware design of the mobile robot is explained; in the second, the navigation algorithm is proposed.
For the hardware, a 4-wheeled skid-steering mobile robot is used. The four wheels are driven by four BLDC (brushless DC) motors with four motor drivers [9]. The robot also has a GPS receiver, which provides the latitude and longitude of the mobile robot platform using the WGS-84 datum. The navigation track was about 150 m long, consisting of obstacles, unplanned surfaces, and speed breakers, as well as some specific positions for GPS alarms. The track had a colored border. Successful navigation meant passing through the navigation track without striking the obstacles, without moving out of the track, and raising a GPS alarm at the specific positions. Fig. 1 shows the sample navigation track, in which we can see the border, obstacles, and some slope paths, and Fig. 2 shows the size of an obstacle.
Finally, by using this algorithm and kinematic design, navigation of the mobile robot in the outdoor environment has been successfully implemented.

Kinematics of the 4-Wheeled Mobile Robot
The kinematic description of a 4-wheeled mobile robot is shown in Fig. 3. Basically it is obtained by combining two 2-wheeled mobile robots [8].
v1, v2, v3, and v4 are the velocities of the four wheels; 'v' and 'w' are the linear and angular velocities of the mobile robot platform, respectively; 'd' is the distance between two opposite wheels; and α is the angle between the wheel and the perpendicular axis of each 'd'.
The kinematic equations of the 4-wheeled mobile robot are shown below:
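The equations themselves did not survive in this copy. A minimal sketch of the standard skid-steer relations obtained by merging two differential-drive models (an assumption consistent with the combination of two 2-wheeled robots in [8], written in Python for brevity rather than the paper's Visual C++) is:

```python
def skid_steer_kinematics(v_fl, v_fr, v_rl, v_rr, d):
    """Platform linear and angular velocity for a 4-wheeled
    skid-steering robot, modeled as two merged differential-drive
    robots: the left and right wheel pairs are averaged first.
    v_* are wheel linear speeds [m/s]; d is the track width [m]."""
    v_left = (v_fl + v_rl) / 2.0
    v_right = (v_fr + v_rr) / 2.0
    v = (v_left + v_right) / 2.0      # linear velocity of the platform
    w = (v_right - v_left) / d        # angular velocity (CCW positive)
    return v, w
```

With equal wheel speeds the robot moves straight (w = 0); a speed difference between the two sides produces the turning motion described in [10].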

VISION SYSTEM FOR DETECTING COLORED BORDER
In vision process OpenCV library is used. The steps in the vision process are shown in the flow chart in Fig. 6.
In the vision process, the following steps have been applied to the original image.

Median Filtering
In image processing, it is often necessary to perform a high degree of noise reduction in an image before higher-level processing steps, such as edge detection. The median filter is a non-linear filtering technique, often used to remove noise from images or other signals.
Median filtering is a common step in image filtering. It is particularly effective against impulsive (salt-and-pepper) noise, and its edge-preserving nature makes it useful in cases where edge blurring is undesirable.
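The filtering step above can be sketched as follows; this is a minimal pure-Python version (the paper's implementation uses OpenCV in Visual C++), using a 3x3 window on a grayscale image:

```python
def median_filter_3x3(img):
    """3x3 median filter on a grayscale image given as a list of
    lists of pixel values. Border pixels are copied unchanged for
    simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]   # median of the 9 window values
    return out
```

A lone salt pixel (e.g. 255 in a neighborhood of 10s) is replaced by the neighborhood median, while a genuine step edge survives, which is why the median filter is preferred here over a simple mean.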

Color Extraction in HSI
There are two main kinds of color models: RGB and HSI. Here, the HSI model is used because it is more tolerant to changes in light intensity and represents color relationships more accurately than RGB.
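The extraction of the colored border can be sketched by converting each pixel to a hue-based space and thresholding on the hue channel. The sketch below uses Python's standard HSV conversion as a stand-in for the paper's HSI model (an assumption; the hue bounds are illustrative, not the paper's values):

```python
import colorsys

def hue_mask(img_rgb, h_lo, h_hi):
    """Binary mask of pixels whose hue (0-360 degrees) lies in
    [h_lo, h_hi]. img_rgb is a list of rows of (r, g, b) tuples
    with 0-255 channels. Thresholding on hue rather than raw RGB
    is what gives the tolerance to illumination changes."""
    mask = []
    for row in img_rgb:
        mrow = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            mrow.append(1 if h_lo <= h * 360.0 <= h_hi else 0)
        mask.append(mrow)
    return mask
```

For example, a hue window around 0 degrees would pick out a red border while rejecting green grass, regardless of how bright each pixel is.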

Canny Edge Detection
The Canny edge detection algorithm characterizes object boundaries and is therefore a fundamental step in image processing.
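The core of the detector can be sketched by its first stage, the Sobel gradient magnitude with a threshold. This is a simplified stand-in, not full Canny, which additionally applies Gaussian smoothing, non-maximum suppression, and hysteresis thresholding (the threshold value below is illustrative):

```python
def sobel_edges(img, thresh):
    """Gradient-magnitude edge map of a grayscale image (list of
    lists): Sobel x/y gradients followed by a single threshold.
    Border pixels are left as non-edges."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges[y][x] = 1
    return edges
```

Applied to the hue mask of the border, this yields the thin edge map that the Hough transformation consumes in the next step.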

Hough Transformation
The Hough Transformation is a feature extraction technique used in image analysis, computer vision, and digital image processing. The purpose of this technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm for computing the Hough transform.

Vision Results
The original picture taken by the vision camera is shown first. If the center of the closest line lies within a specific range, the mobile robot needs to avoid it; otherwise, it goes straight. Fig. 11 shows the range of the line.
As shown in the figure, v_normal = 30 cm/s, and 'd' is the distance between the mobile robot and the obstacle at that specific point.
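The range check on the closest line's center can be sketched as below. The frame width, margin, and command names are illustrative assumptions, not the paper's actual parameters:

```python
def border_steering(center_x, frame_width, margin):
    """Decide a steering command from the x-coordinate of the
    center point of the closest detected border line. If the
    center is within `margin` pixels of the image midline, the
    border is close enough that the robot must steer away from
    the side on which the border was seen."""
    mid = frame_width / 2.0
    if abs(center_x - mid) > margin:
        return "straight"                    # border still far to the side
    return "steer_right" if center_x < mid else "steer_left"
```

A border center left of the midline means the border is on the robot's left, so the command is to steer right, and vice versa.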

OBSTACLE DETECTION
So far, the colored border has been detected from vision data using a vision camera. However, the performance of a vision-based technique is very sensitive to the camera setup, such as the viewpoint and viewing angle. Moreover, common changes in illumination and weather conditions are another major obstacle to the reliability and robustness of a vision system, so an LRF is used to detect obstacles [7]. Fig. 12 shows how an obstacle is detected within the range of the LRF.
Here r_c is the radius of the obstacle. The coordinates of p1, obtained from the measured range r1 and angle q1, can be calculated by the formulas shown below:

x_p1 = r1 cos q1
y_p1 = r1 sin q1

In the same way, the coordinates of p2 can be calculated by:

x_p2 = r2 cos q2 (9)
y_p2 = r2 sin q2 (10)

The coordinates of the centre point p_m can be calculated by:

x_pm = (x_p1 + x_p2) / 2
y_pm = (y_p1 + y_p2) / 2
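The polar-to-Cartesian conversion and the midpoint computation above translate directly into a short sketch (Python here for brevity; the paper's code is Visual C++):

```python
import math

def obstacle_center(r1, q1, r2, q2):
    """Cartesian endpoints and centre of an obstacle seen by the
    LRF. (r1, q1) and (r2, q2) are the polar coordinates (range
    in metres, angle in radians) of the first and last beams
    hitting the obstacle."""
    x1, y1 = r1 * math.cos(q1), r1 * math.sin(q1)   # point p1
    x2, y2 = r2 * math.cos(q2), r2 * math.sin(q2)   # point p2 (Eqs. 9-10)
    xm, ym = (x1 + x2) / 2.0, (y1 + y2) / 2.0       # centre point p_m
    return (x1, y1), (x2, y2), (xm, ym)
```

The robot then steers relative to p_m only when the measured ranges fall inside the predefined LRF avoidance range, since obstacles beyond that range are ignored.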

Left Side
There are four cases if the mobile robot is moving on the left side of the track.

Case-1:
We can see the vision of the camera showing the border at position 1 on the track in Fig. 18(a).
Here the mobile robot will steer towards the right because the vision camera has detected the border on the left side.

Case-2:
We can see the vision of the camera showing the border at position 2 on the track in Fig. 18(b).
Here we can see that, because of steering towards the right, the length of the border within the range of the vision camera is decreasing, but the robot will keep steering slightly towards the right because of the border.

Case-3:
We can see the vision of camera showing the border at position 3 on the track in Fig. 18(c).
Here we can see that, because of constantly steering towards the right, the length of the border within the camera's range keeps decreasing; however, it is still inside the range of the vision camera, so the robot will keep steering towards the right.

Case-4:
We can see the vision of the camera showing the border at position 4 on the track in Fig. 18(d).

Right Side
There are four cases if the mobile robot is moving on the right side of the track.

Case-1:
We can see the vision of camera showing the border at position 1 on the track in Fig. 19(a).
Here the mobile robot will steer towards left because of detecting the border on the right side by vision camera.

Case-2:
We can see the vision of camera showing the border at position 2 on the track in Fig. 19(b).
Here, the vision camera has detected the right-side corner of the border, so the robot still keeps steering towards the left side.

Case-3:
We can see the vision of the camera showing the border at position 3 on the track in Fig. 19(c).
Here the vision camera has not detected any border line, since the robot has moved out of the range of the right-side border and has not yet entered the range of any other border, so it will keep moving straight until it comes across a border.

Case-4:
We can see the vision of camera showing the border at position 4 on the track in Fig. 19(d).
Here the mobile robot has detected the front line border.
Here it has to decide whether to move towards the left side or the right side. In this situation the correct side is the right side, so the algorithm is written as follows: if the robot comes across a front-line border after a stretch with no border line within the camera range, as in Case-3, it must steer towards the side where it detected the last side border before losing border data. Since it detected the border on the right side, as in Case-2, it will move towards the right side, and vice versa.
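The four cases above amount to a small stateful rule, sketched below. The command and detection names are illustrative, not the paper's identifiers:

```python
class BorderMemorySteering:
    """Steering rule from the border-following cases: remember
    the last side border seen; when a front border appears after
    a no-border stretch, turn towards the side where that last
    side border was detected."""
    def __init__(self):
        self.last_side = None                # "left" or "right"

    def update(self, detection):
        """detection is "left", "right", "front", or None."""
        if detection in ("left", "right"):
            self.last_side = detection
            # steer away from the detected side border
            return "steer_right" if detection == "left" else "steer_left"
        if detection is None:
            return "straight"                # Case-3: no border in view
        # front border (Case-4): turn towards the last side seen
        if self.last_side == "right":
            return "steer_right"
        if self.last_side == "left":
            return "steer_left"
        return "straight"
```

Running the right-side sequence (right border, no border, front border) reproduces Cases 2-4: steer left, go straight, then turn back to the right.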

VISION AND LASER RANGE FINDER
So far, we have discussed the vision process for detecting the border and the LRF for detecting obstacles individually; now we combine the vision and LRF data to avoid the border and obstacles simultaneously. Different cases involving both the vision camera and the laser range finder are discussed below.
In Fig. 20(a), we can see the mobile robot on the navigation track with an obstacle, and in Fig. 20(b) we have two cases, depending on whether the robot is on the right side or the left side of the track.

Case-1:
If the mobile robot is moving on the left side, as shown in Fig. 20(b), and comes across an obstacle, it treats the obstacle in the same way as a front-side border. Since it had the left-side border on record before detecting the front border or obstacle, the same algorithm makes it steer towards the right side.

Case-2:
If the mobile robot is moving on the right side, then it will go straight by avoiding the right side border.
Now we can consider another case by taking the obstacles in different positions as shown in Fig. 21(a-b).

Case-1:
If the mobile robot is on the left side, it will go straight by avoiding the border.

Case-2:
If the mobile robot is moving on the right side of the border and comes across an obstacle, it considers the obstacle a front-line border and, following the algorithm, steers towards the left to avoid it. Another case of the mobile robot with an obstacle is shown in Fig. 22(a-b).
In Fig. 22(b), the mobile robot can follow any direction depending on the situation and distance between the obstacle and the border.
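The fusion rule running through these cases can be sketched as a single stateless decision step; an obstacle inside the LRF range is simply promoted to a front border before the vision rule is applied (names and commands are illustrative assumptions):

```python
def navigate_step(border, obstacle_in_range, last_side):
    """One fused decision step. `border` is "left", "right",
    "front", or None from the vision system; `obstacle_in_range`
    is True when the LRF sees an obstacle inside its avoidance
    range; `last_side` is the last side border previously seen.
    An in-range obstacle is handled exactly like a front border."""
    if obstacle_in_range:
        border = "front"                     # LRF overrides vision
    if border == "left":
        return "steer_right"
    if border == "right":
        return "steer_left"
    if border == "front":
        # turn towards the side of the last side border seen
        return "steer_right" if last_side == "right" else "steer_left"
    return "straight"
```

For example, a robot on the right side that has the right border on record and then meets an obstacle gets "steer_right", matching Case-1 of Fig. 20.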

VISUAL C++
The Microsoft Visual C++ tool is used to implement the navigation algorithm. Fig. 23 shows the overall control program window of the computer simulation.

Computer Simulation Window of Vision Camera
First, the simulation window of the vision camera is shown.

Computer Simulation Window of Laser Range Finder
The simulation window of the laser range finder is shown in Fig. 25.

Computer Simulation Window of Motor Control
The computer simulation window for the motor control is shown in Fig. 26.
Here we have two control buttons. One is Motor Enable

Computer Simulation Window of GPS
The computer simulation window for the GPS is also shown. There is a TIME edit box, which shows the time at the current moment, and finally the Set GPS Points button, which is used to show the GPS data.

CONCLUSION
Because the navigation track was long, some motor problems occurred; however, since the platform is a redundantly actuated system, it kept moving. By applying all of the techniques above, navigation of the mobile robot in the outdoor environment was successfully completed and implemented.

FUTURE WORK
In the future, this work can be extended to more complicated navigation tracks by changing the algorithm and hardware parameters; this is a first step and can be extended in many ways.