How To Build an Obstacle Avoiding Robot

admin | June 10, 2020

In this tutorial, you will learn how to build an obstacle detection and avoidance robot with Arduino and three HC-SR04 ultrasonic sensors. The robot is a low-cost mobile platform with two drive wheels and a rear caster. It includes three sensors making it aware of the obstacles in the environment.

The robot navigates without knowing a detailed map of the surroundings. If an obstacle is detected in its path, the robot adapts its velocity in order to avoid the collision. If the surrounding environment is free of obstructions, the robot simply moves forward until an obstacle is detected in the range of the sensors.

List of main components:

  • 1 X 2WD Robot Chassis Kit [Amazon]
  • 1 X L298N Motor Driver [Amazon]
  • 1 X Arduino UNO [Amazon]
  • 3 X HC-SR04 Ultrasonic Sensor [Amazon]
  • 1 X 5V Battery Bank [Amazon]
  • 1 X Battery Holder for 4 x AA-Cell [Amazon]

First, we need to define the inputs: where the robot takes information into its system. This detection and avoidance robot has two types of inputs.

1. The most straightforward input is a switch: the button that turns the motor driver ON and OFF, plus the power button on the battery bank that powers the Arduino board and the sensors.

2. The robot sees the world through its sensors; they are how information enters the control system. The sensors are the robot's second input. They let the robot detect and respond to the surrounding environment, so it can operate safely, automatically detect obstacles, and dynamically change its route.

The robot uses sensors to measure the distance between itself and an obstacle. As you may know, a home contains objects of different sizes, built from many types of materials. The robot can detect objects made of different materials, such as a wooden chair or a sofa bed.

Once the robot detects an obstacle, the algorithm calculates an alternative path based on the latest sensor outputs. If the object is on the left side of the platform, the robot steers to the right until the sensors no longer detect the obstacle. The same behavior applies when a sensor detects a barrier on the right side.

If the sensor detects an obstacle in the middle of the robot’s path, then the algorithm randomly changes its direction to the left or right until the sensors no longer detect an obstacle.

We have finished defining the types of input for the autonomous robot. Inputs are essential, and so are the outputs.

The obstacle detection and avoidance robot has one type of output.

3. The mobile robot uses two DC motors, one for each side of the platform. Each motor is programmed individually and can move the robot in any direction for turning and pivoting.

To have complete control over the DC motors, we have to control both the speed and the direction of rotation. This can be achieved by combining the PWM method for controlling the speed with the H-bridge electronic circuit for controlling the direction of rotation.

Once we finish defining the inputs and output, we can break the obstacle detection and avoidance robot into simple pieces and work on them one by one.

WARNING: You need knowledge of circuitry and electrical engineering concepts (schematics, datasheets, voltages), as well as some experience in connecting power supplies. I'm not responsible for any loss of or damage to your personal property.

For this robot, I use a flexible architecture, meaning that you can:

  • Add multiple sensors and components.
  • Replace the current sensors with different ones.
  • Easily write your own software.

Part 1: The Detection System

This part is about translating the raw data output from sensors into accurate measurements that are used in Navigation.

Part 2: Navigation

In this part, we build the algorithm that calculates the steering in order to find a collision-free path in the robot's surrounding environment.

Part 3: Drive System

In this part, we are using the data from the Navigation to steer the robot.

Now that we know what we have to do, we will take each part in turn: explain how it works, build the circuit, and write the program.

Part 1: The Detection System

At this step, we have to give the robot sensing capabilities: a detection system able to see the surrounding environment. Ultrasonic sensors are very efficient for sensing the environment both indoors and outdoors.

Instead of using a single ultrasonic sensor to solve the sensing problem, we use three sensors to guide the robot through a complex environment. We use all of the sensors in a close-range observation task, because the robot will work indoors, where maneuvering space is limited. The HC-SR04 provides non-contact measurement from 2 cm to 400 cm, but if we used the maximum range, the robot would always detect some object inside a room and would never move at all.

1.1 Setup Arduino and HC-SR04 Ultrasonic Sensors

According to specifications, the HC-SR04 has an operating voltage of 5V. Arduino has an interface pin that provides 5V output.

The operating current for one ultrasonic sensor is 15 mA. Three sensors will consume 45 mA, which the Arduino can supply even when connected to the USB port of a 5V battery bank. The 5V output pin of the Arduino UNO can deliver roughly 400 mA on USB power, and more than double that (~900 mA) when used with an external power adapter.

For connections between sensors and Arduino, I use jumper wires.

Powering all three sensors from the single 5V pin is a challenge and normally requires cutting and soldering wires. As a workaround, you can use a breadboard to connect the Arduino's 5V and ground pins to the 5V and ground pins of the sensors.

This is my final setup for powering the sensors. The red/orange wires are 5V, and the black/blue wires are ground.

Power setup for three ultrasonic sensors

After connecting the power wires, we go further and use female-to-male jumper wires to connect the trigger and echo pins.

Sensor left:
Echo -> pin 2 Arduino
Trig -> pin 3 Arduino
Sensor center:
Echo -> pin 4 Arduino
Trig -> pin 5 Arduino
Sensor right:
Echo -> pin 6 Arduino
Trig -> pin 7 Arduino
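The wiring above can be recorded as pin-number constants at the top of the sketch. This is only a sketch of how I would declare them; the array layout and names are my own, not necessarily those of the downloadable source:

```cpp
#include <cassert>

// Pin assignments from the wiring list above.
// Index 0 = left sensor, 1 = center sensor, 2 = right sensor.
const int ECHO_PINS[3] = {2, 4, 6};
const int TRIG_PINS[3] = {3, 5, 7};
const int NUM_SENSORS  = 3;
```

Keeping the pins in arrays lets the sketch loop over all three sensors instead of repeating code for each one.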

The final setup for the sensors and Arduino:

Setup 3XHC-SR04, Arduino, and 5V Battery Bank

Once all the sensors are connected to the Arduino, it's time to read their output signals and convert them into centimeters.

For the moment, the Arduino is connected via USB cable to my laptop. In this way, we can upload the sketch to Arduino and test if everything works as designed.

Lines 100 to 109: We're using the NewPing library. Loop through each sensor and, when the sensor's ping cycle is complete, return the reading;
Lines 114 to 115: If a ping is received, add the measured distance to the array;

1.2 Apply filters to remove noisy, jumpy or erratic readings

Filtering the sensor output involves several steps. First, we apply a filter to remove some of the unwanted readings and produce a smoother result. Then we use a Kalman filter to remove jumpy or erratic readings.

Lines 127 to 131: Check the sensor outputs; if a value is 0, return the last stored non-zero value. The previous non-zero value for each sensor is stored in an array.
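The zero-value fallback described above can be sketched like this (function and variable names are my own; the downloadable sketch implements the same idea in its lines 127 to 131):

```cpp
#include <cassert>

const int NUM_SENSORS = 3;
int lastValid[NUM_SENSORS] = {0, 0, 0};  // last non-zero reading per sensor, in cm

// Return the raw reading if it is non-zero; otherwise fall back to the
// last non-zero value stored for that sensor.
int filterZero(int sensorIndex, int rawCm) {
    if (rawCm != 0) {
        lastValid[sensorIndex] = rawCm;  // remember the good reading
        return rawCm;
    }
    return lastValid[sensorIndex];
}
```

A reading of 0 usually means the ping timed out rather than that an object is touching the sensor, which is why reusing the last good value is safer than passing the 0 on to the navigation code.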

Lines 136 to 138: Apply a Kalman filter to each of the three sensors to remove jumpy or erratic readings;
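A scalar Kalman filter of the kind commonly used with the HC-SR04 (for example, the SimpleKalmanFilter Arduino library follows this formulation) can be sketched as below. The class name and tuning constants are illustrative assumptions, not the tutorial's actual values:

```cpp
#include <cassert>
#include <cmath>

// Minimal one-dimensional Kalman filter: each update blends the new
// measurement with the running estimate according to the Kalman gain.
class SimpleKalman {
public:
    SimpleKalman(float measErr, float estErr, float processNoise)
        : measErr_(measErr), estErr_(estErr), q_(processNoise), est_(0.0f) {}

    float update(float measurement) {
        float gain = estErr_ / (estErr_ + measErr_);  // how much to trust the measurement
        float previous = est_;
        est_ = previous + gain * (measurement - previous);
        // Shrink the estimate error, but grow it when the estimate moves a lot.
        estErr_ = (1.0f - gain) * estErr_ + std::fabs(previous - est_) * q_;
        return est_;
    }

private:
    float measErr_;  // expected measurement noise
    float estErr_;   // current estimation error
    float q_;        // process noise
    float est_;      // current estimate (cm)
};
```

Fed a stream of noisy distance readings, the estimate converges on the underlying value while single-sample spikes are damped.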

1.3 Install the sensors

The 2WD robot chassis that I used is approximately 16 centimeters wide. This means the sensors should detect obstacles across at least 16 centimeters in front of the robot. To make this clear, let's check the sketch below.

Sensor detection sketch

Detecting obstacles only inside the area covered by the width of the robot is not the best approach. We widen the area of interest by one centimeter on each side of the chassis, giving an 18-centimeter-wide detection zone in front of the robot. Each sensor should therefore cover a width of at least 6 centimeters.

A few weeks ago, I wrote a tutorial about the HC-SR04 and Arduino. In that tutorial, you can find that the reliable operating detection range of an HC-SR04 is 5 to 100 cm. At 100 cm, the sensor's beam covers about 8 centimeters of width, which is more than we need here. Doing the math, a maximum range of 75 centimeters is enough to detect all the obstacles in front of our robot.
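The 75-centimeter figure follows from simple proportionality: if the beam covers about 8 cm of width at 100 cm of range, and each sensor only needs to cover 6 cm, the useful range scales down accordingly (assuming, as a simplification, that beam coverage grows linearly with distance):

```cpp
#include <cassert>
#include <cmath>

// Width each of the three sensors must cover: 18 cm split three ways.
const float requiredWidthCm = 18.0f / 3.0f;   // 6 cm per sensor
// Observed beam coverage of the HC-SR04 at 100 cm of range.
const float coverageAt100Cm = 8.0f;

// With coverage proportional to distance, the maximum useful range is
// the distance where the covered width shrinks to the required 6 cm.
const float maxRangeCm = 100.0f * requiredWidthCm / coverageAt100Cm;  // 75 cm
```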

We also have a blind zone between 0 and 5 centimeters in front of the robot. A bumper along the lower front part of the robot could be a solution to help protect it. But this part is not the subject of this tutorial, and will not be covered.

Sensor holder

This is a 3D-printed triple sensor holder for the HC-SR04 ultrasonic sensor. It has a simple design and is practical to assemble and disassemble. The sensors sit very stably and need no screws or glue. The holder also protects the sensors and gives the robot a better look.

The sensor holder is fixed on the chassis using M3 screws and M3 nuts.

We have finished the detection system: the sensors are connected to the Arduino, and the 3D-printed triple sensor holder is in place. We are ready to use the sensors for detecting and avoiding obstacles.

Part 2: Navigation

Our robot would be much more useful if it could move obstacles out of the way. Since that's impossible for a small robot, we have to program the robot to find collision-free paths instead.

The real world contains objects, doors, and furniture that the robot must detect and avoid. The algorithm controlling the robot's movements must decide which course to follow and steer it there.

2.1 The algorithm

1. First, define the minimum and maximum range of the sensors. The algorithm must identify an obstacle and its position between the blind zone and the maximum range chosen for this project.
2. If none of the sensors detects an obstacle, go forward at maximum speed.
3. If at least one sensor detects an obstacle, decrease the speed.
4. Check again whether the sensors detect obstacles.
5. If at least one sensor is still detecting an obstacle, go to that sensor's state.
6. If the left sensor detects an obstacle, move to the right until the sensor no longer detects it.
7. If the center sensor detects an obstacle, move randomly to the left or right until the center and left/right sensors no longer detect it.
8. If the right sensor detects an obstacle, move to the left until the sensor no longer detects it.
9. If all the sensors detect obstacles, move back and turn left or right until an obstacle-free path is found.

This simple obstacle avoidance algorithm includes nine states. It is a solution that doesn’t demand a heavy computational load and is easy to implement.
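The steps above map naturally onto a state machine. Here is a sketch of the transition logic with the sensors reduced to booleans; the state names mirror the cases the tutorial refers to later (CHECK_ALL, SPEED_DECREASE, and so on), but the function itself is my own simplification of the downloadable sketch:

```cpp
#include <cassert>
#include <cstdlib>

enum State { CHECK_ALL, SPEED_DECREASE, CHECK_OBSTACLE_POSITION,
             LEFT, CENTER, RIGHT, BACK };

// Pure transition function: given the current state and which sensors
// currently report an obstacle, return the next state.
State nextState(State s, bool obsLeft, bool obsCenter, bool obsRight) {
    bool any = obsLeft || obsCenter || obsRight;
    switch (s) {
    case CHECK_ALL:                // cruising at full speed
        return any ? SPEED_DECREASE : CHECK_ALL;
    case SPEED_DECREASE:           // slow down, then re-check
        return CHECK_OBSTACLE_POSITION;
    case CHECK_OBSTACLE_POSITION:  // locate the obstacle
        if (!any) return CHECK_ALL;                     // false reading
        if (obsLeft && obsCenter && obsRight) return BACK;
        if (obsCenter) return CENTER;
        return obsLeft ? LEFT : RIGHT;
    case LEFT:                     // steer away; stay here while the left sensor reports an obstacle
        return obsLeft ? LEFT : CHECK_ALL;
    case CENTER:                   // obstacle ahead: pick a side at random
        return (std::rand() % 2) ? LEFT : RIGHT;
    case RIGHT:                    // steer away; stay here while the right sensor reports an obstacle
        return obsRight ? RIGHT : CHECK_ALL;
    case BACK:                     // boxed in: back up, then pick a side at random
        return (std::rand() % 2) ? LEFT : RIGHT;
    }
    return s;
}
```

Keeping the transitions in a single function makes the behavior easy to test without any hardware attached.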

Lines 54 to 64: Define the set of states the robot passes through depending on its condition;

Line 143: Define the minimum and maximum range of the sensors, and return true if an obstacle is in range.

Lines 150 to 157: We are in the CHECK_ALL case, where we check whether an obstacle is in range of the sensors. If none of the sensors detects an obstacle, we go forward at maximum speed. If at least one sensor detects an obstacle, we start counting the milliseconds used to decrease the robot's speed and then move to the SPEED_DECREASE case;

Lines 160 and 161: We move forward at maximum speed and check whether all the sensors are free of obstacles;

Lines 165 to 167: The robot moves forward at a lower speed for a certain amount of time. When the time has passed, it enters the CHECK_OBSTACLE_POSITION case.

Lines 172 to 191: If the path is free, the sensor(s) gave a false reading, and we can go back to navigating at maximum speed.
If one of the sensors still detects an obstacle, we check all the sensors to determine the obstacle's position relative to the robot. If the left sensor detects the obstacle, we enter the LEFT case; if the center sensor detects it, the CENTER case; if the right sensor detects it, the RIGHT case. If all the sensors detect the obstacle, the robot cannot move forward and enters the BACK case.

Lines 194 to 198: The robot moves to the left at minimum speed for a particular amount of time. When the time has passed, we recheck the left sensor. If the sensor still reports the obstacle, we stay in the LEFT case and move to the left again. If the sensor is free of obstacles, we go to the CHECK_ALL stage.

Line 202: Randomly select the next case the robot enters: left or right.

Lines 206 to 210: The robot moves to the right at minimum speed for a particular amount of time. When the time has passed, we recheck the right sensor. If the sensor still reports the obstacle, we stay in the RIGHT case and move to the right again. If the sensor is free of obstacles, we go to the CHECK_ALL stage.

Lines 214 to 217: All the sensors report an obstacle. The robot moves backward at a lower speed for a certain amount of time and then randomly turns to the left or right.

Part 3: Drive System

The drive system should be capable of delivering the maximum power generated by the DC motors to move the robot. Two DC motors are well suited to driving the robot in all directions.

Since this is a 2WD platform, turning is easier than on a 4WD skid-steer robot. With a differential drive (two wheels plus a caster), we can program the robot to turn in place, left or right, by spinning the motors in opposite directions.

The DC motors have gear mechanisms that reduce the speed of the electric motors, turning the wheels at a maximum of 100 rpm. The motors have an operating voltage range of 3 to 6V; 6V is the nominal voltage at which they are designed to operate optimally.

The two wheels measure 65 mm in diameter and press-fit onto the 3mm D shafts on the DC motors. The black tires are made of soft rubber for increased traction.

The two DC motors are controlled with an L298N driver. It is a dual full-bridge driver that controls the speed and direction of the DC motors. The motors can be driven together or one at a time. We take advantage of this to drive the robot in any direction with minimum effort from the drive system.

The L298N can drive DC motors at nominal voltages between 5 and 35V with peak current up to 2A.

The output of the motor driver is a PWM (pulse-width modulation) signal, which switches the output power on and off rapidly to reduce the average voltage supplied to the motor and precisely control its rotation.
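The relationship between PWM duty cycle and average motor voltage is just a proportion. Using Arduino's 8-bit analogWrite() scale of 0 to 255 (the exact scale used in the downloadable sketch is an assumption on my part):

```cpp
#include <cassert>
#include <cmath>

// Average voltage delivered by a PWM signal: the supply voltage scaled
// by the duty cycle (0-255 on Arduino's 8-bit analogWrite scale).
float pwmAverageVoltage(int duty, float supplyVolts) {
    return supplyVolts * duty / 255.0f;
}
```

At half duty (128/255), a 6V supply averages roughly 3V at the motor, which is how the robot slows down without a separate voltage regulator.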

3.1 Connect the motor driver to the motors and Arduino

The driver board has two screw terminal blocks for motors A and B; we use these to connect the DC motors. Another screw terminal block carries the ground (GND) and VMS pins, the power input for the motors. The GND and VMS terminals should be connected to the 6V battery pack.

Use female-to-male jumper wires to connect the driver pins to Arduino pins.
ENA -> pin 9
IN1 -> pin 8
IN2 -> pin 11
IN3 -> pin 12
IN4 -> pin 13
ENB -> pin 10
Ground screw terminal block -> Gnd pin Arduino (we use two power supply sources, so they must share a common ground)
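Each L298N channel is steered by its two IN pins: one HIGH and one LOW spins the motor in one direction, the reversed levels spin it the other way, and equal levels stop it. A sketch of that truth table (the helper names are mine, and which combination means "forward" also depends on how the motor leads are wired):

```cpp
#include <cassert>

// Logic levels for the two direction inputs of one L298N channel
// (IN1/IN2 for motor A, IN3/IN4 for motor B).
struct MotorPins { bool in1, in2; };

enum Direction { FORWARD, BACKWARD, STOP };

MotorPins directionToPins(Direction d) {
    switch (d) {
    case FORWARD:  return {true,  false};  // HIGH/LOW: one rotation direction
    case BACKWARD: return {false, true};   // LOW/HIGH: opposite direction
    default:       return {false, false};  // equal levels: motor stops
    }
}
```

The ENA/ENB pins then carry the PWM signal that sets the speed for whichever direction is selected here.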

Depending on your driver board, there may be small differences between the placement of the pins.

3.2 Steering the Robot

Once the DC motors are connected to the motor driver, and the motor driver to Arduino, let’s start writing the sketch to control the steering of the robot.

According to the algorithm that controls the robot, we need to decrease the speed of the motors and move the robot forward, backward, left, and right. Our program has a function for each of these actions.

Download source code

Summary

We have finished a very long tutorial. You learned how to define the inputs and outputs of the robot and how to write an algorithm that controls its actions and steering.

In the first part of the tutorial, we covered the functional specifications of the robot and defined its two inputs and one output. Then we gave the robot sensing capabilities by installing the sensors and converting the readings to centimeters. By applying filters, we stabilized the readings and removed jumpy or erratic values.

In the second part, we wrote the algorithm that controls how the robot responds to obstacles in the environment.

In the third part, we implemented the steering of the robot by reducing the motor speed and moving the robot in the direction decided by the algorithm.


