Monday, 28 April 2014

Lesson 9 - Navigation

Date: 24/04 2014 and 28/04 2014
Duration of activity: 10.15 - 15.00 and 9.15 - 13.30
Group members participating: Pætur, Christian and Søren


Goal: To complete this week's exercises [1].
Plan: Work through the exercises sequentially.
Results: We completed all the exercises except the last one, which we did not have time for.


Lesson 9
In this lab session we will apply the tacho counter of the NXT motor to keep track of the position and direction of a LEGO car with so-called differential drive, where two motors are used independently to move and steer the car as shown in Figure 1. The different interfaces and classes available in leJOS for localization and navigation are described in the tutorial Controlling Wheeled Vehicles, [5].


Figure 1: A car with differential drive that can turn in place, [3].


Navigation
In Chapter 12 of Brian Bagnall's book, [2], he uses a differentially driven car to experiment with the leJOS facilities for localization and navigation. On page 298 of Chapter 12 he describes a simple test program that uses the class TachoNavigator to make his car "Blightbot" move between different positions in a Cartesian coordinate system.


Figure 2: A car with differential drive moves between four points in a Cartesian coordinate system, [2].


Use a similar car to perform a similar test several times and record how accurately the car returns to its original position. In Maja Mataric's paper on Toto, [4], she describes how to gather data on the movements of Toto by attaching a marker to the robot.


Use a similar method to gather data on the path that your car drives.
We tried to make the car navigate the course of figure 2, but we also tried to make our car drive in a square. The following pictures show our attempts to navigate the course from figure 2:



Figure 3: Car drives the course from figure 2 on a whiteboard, repeated 3 times.


The runs in figure 3 were both made with a track width of 11.0 cm, while the wheel diameter in the left picture is 5.6 cm and in the right picture 5.5 cm.
To see how the pattern changed with the change of wheel diameter we repeated the run three times. In figure 3 it can be seen that the consequence of a wheel diameter of 5.6 cm was that the pattern slowly rotated counter-clockwise, while the pattern drawn with a wheel diameter of 5.5 cm slowly rotated clockwise. These observations are marked with the two arrows in figure 3, where the numbers 1, 2 and 3 represent the sequence in which the patterns were drawn.
We therefore thought that the solution was to set the wheel diameter to 5.55 cm, right in the middle between 5.5 cm and 5.6 cm, where the pattern rotated clockwise and counter-clockwise respectively.
Unfortunately our assumption that the pattern would be perfect with a wheel diameter of 5.55 cm was wrong. This run can be seen in figure 4.


Figure 4: Car drives the course from figure 2, 3 times in a row with a wheel diameter of 5.55 cm.


These test drives were done on a whiteboard, and we noticed that the rear wheel sometimes made the car drift during some of the test drives, giving us some very skewed squares. To address this issue we replaced the rear wheel of the car with a vertically mounted LEGO axle. This reduced the drift, but we still could not get a perfect run where the car returns to its original position and direction.


We then switched from testing the drives on the whiteboard to testing them on pieces of A3 paper taped to a table, and remounted the standard rear wheel of the LEGO 9797 car. We assumed that this would reduce the drift for two reasons:


  1. The paper would be less smooth than the whiteboard and thus reduce drift.
  2. The paper would be more even (as we had noticed that the whiteboard had an uneven surface) and thus reduce drift.
Figure 5: Car drives in a square on two A3 papers with wheelDiameter = 5.6 cm and trackWidth = 10.9 cm. A video of the leftmost drive can be found here: http://goo.gl/ZQf1Te and a video of the rightmost square drive can be found here: http://goo.gl/jwZx82


The tests on the paper produced steadier results with less drift, which made it easier to fine-tune the values of trackWidth and wheelDiameter. From the two drives in figure 5 we can see that the endpoint is very close to the start point when driving in a square.


Brian Bagnall notes that "Blightbot does a good job of measuring distance but his weakness arises when he rotates". Perform experiments to investigate his observations.

Figure 6: A picture of the drawing made after a marker was attached to the vehicle and the vehicle was programmed to draw a square.
Figure 6 shows that the vehicle is quite good at driving the same distance on every run but, as Bagnall states, does not handle the rotations very well (it must be noted that this may be caused by the challenge of getting the trackWidth and wheelDiameter parameters right, which in turn makes the rotations hard to get right). It is clear in figure 6 that the car rotates a little bit too much on every run, meaning that the pattern, originally a regular square, is repeated a few centimeters to the left of the original starting point. But it is also clearly seen in figure 6 that the size of the square is exactly the same on all runs, meaning that “Blightbot does a good job of measuring distance.”


The code we used for our drives can be seen here:

Figure 7: Code used to draw a square.
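Since Figure 7 is only a screenshot, here is a minimal sketch of the kind of program we used to drive a square. It assumes the DifferentialPilot class from leJOS NXJ and motor ports B and C; the side length, speed and port choices are illustrative and not necessarily the exact values from our code.

import lejos.nxt.Button;
import lejos.nxt.Motor;
import lejos.robotics.navigation.DifferentialPilot;

// Sketch: drive a square using the calibrated parameters from figure 5.
// Motor ports, side length and speed are assumptions.
public class SquareDriver {
    public static void main(String[] args) {
        // wheelDiameter = 5.6 cm, trackWidth = 10.9 cm
        DifferentialPilot pilot = new DifferentialPilot(5.6, 10.9, Motor.B, Motor.C);
        pilot.setTravelSpeed(10);   // cm per second

        Button.waitForAnyPress();
        for (int side = 0; side < 4; side++) {
            pilot.travel(50);       // drive 50 cm forward
            pilot.rotate(90);       // turn 90 degrees in place
        }
    }
}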


Navigation while avoiding objects
How can you make the car move from place to place by means of the leJOS navigation system while the car avoids objects in front of it?


We did not implement this, but our plan would be to first make a path consisting of a few waypoints, e.g. (0,0), (0,100), (100,100), (0,0), forming a right triangle, using the Navigator class from leJOS, see the figure:
Figure 8: The triangle we would draw if the program was implemented


For the car to avoid an obstacle placed to block the route, we would equip the car with an ultrasonic sensor. If the ultrasonic sensor detected an obstacle within 20 cm, the car would stop and measure the distance to the left and to the right. A waypoint would then be added to the front of the path in the direction where the largest distance is measured, which is illustrated in the following figure (a rough code sketch of this plan is given below figure 9):


Figure 9: The path after a new waypoint is added to avoid the obstacle


A bumper could also be used to avoid getting stuck, using the same procedure as we did in lesson 7.
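We did not write this program, but a rough, untested sketch of the plan in leJOS NXJ could look as follows. The Navigator, Waypoint, DifferentialPilot and UltrasonicSensor classes exist in leJOS NXJ, but the ports, thresholds, detour distance, and the assumption that followPath() resumes the remaining waypoints after stop() are our own. Instead of inserting a waypoint at the front of the path, the sketch simply drives the detour directly with the pilot, which has the same effect.

import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;
import lejos.robotics.navigation.DifferentialPilot;
import lejos.robotics.navigation.Navigator;
import lejos.robotics.navigation.Waypoint;

// Untested sketch: drive the triangle from figure 8 and detour around obstacles.
public class AvoidingNavigator {
    public static void main(String[] args) throws Exception {
        DifferentialPilot pilot = new DifferentialPilot(5.6, 10.9, Motor.B, Motor.C);
        Navigator nav = new Navigator(pilot);
        UltrasonicSensor sonar = new UltrasonicSensor(SensorPort.S1);

        // The planned right triangle: (0,0) -> (0,100) -> (100,100) -> (0,0)
        nav.addWaypoint(new Waypoint(0, 100));
        nav.addWaypoint(new Waypoint(100, 100));
        nav.addWaypoint(new Waypoint(0, 0));
        nav.followPath();
        Thread.sleep(100);

        while (nav.isMoving()) {
            if (sonar.getDistance() < 20) {      // obstacle within 20 cm
                nav.stop();
                pilot.rotate(45);                // look to one side...
                int left = sonar.getDistance();
                pilot.rotate(-90);               // ...and to the other
                int right = sonar.getDistance();
                pilot.rotate(45);                // back to the original heading
                // Turn towards the side with most free space, drive past the
                // obstacle, and resume the remaining waypoints (odometry is
                // still updated by the pilot moves).
                pilot.rotate(left > right ? 90 : -90);
                pilot.travel(40);
                nav.followPath();
                Thread.sleep(100);
            }
            Thread.sleep(50);
        }
    }
}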


Improved Navigation
In [2] there is a formula that shows how the position and direction are updated during the moves of a mobile robot. In [5], another update formula is used. What is the difference, and which is the most accurate? Try to figure out how leJOS updates the position and direction. Can you use [5] to improve that?


We didn’t have time for this one.


References
[2] Brian Bagnall, Maximum LEGO NXT: Building Robots with Java Brains, Chapter 12, Localization, pp. 297-298.
[3] Java Robotics Tutorials, Enabling Your Robot to Keep Track of its Position. You could also look into Programming Your Robot to Navigate to see how an alternative to the leJOS classes could be implemented.
[4] Maja J. Mataric, Integration of Representation Into Goal-Driven Behavior-Based Robots, IEEE Transactions on Robotics and Automation, 8(3), June 1992, pp. 304-312.
[5] leJOS Tutorial: Controlling Wheeled Vehicles.
[6] Thomas Hellstrom, Forward Kinematics for the Khepera Robot.
[7] Picture and video gallery: http://goo.gl/7z1Qq2

Thursday, 10 April 2014

Lesson 8 - Robot Race

Date: 10/04-2014
Duration of activity: 10.15 - 16.00
Group members participating: Pætur and Christian



Goal: To complete this week's exercise [1].
Plan: Trial and error.
Results: We did not manage to complete the exercise.


Lesson 8
Infinity

References:

Tuesday, 8 April 2014

Lesson 7 - Agents and behaviour selection in agents

Date: 03/04 2014 and 08/04 2014
Duration of activity: 10.15 - 15.00 and 11.55 - 14.30
Group members participating: Pætur, Christian and Søren


Goal: To complete this week's exercises [1].
Plan: Work through the exercises sequentially.
Results: We have completed all the exercises.

Lesson 7
In the Braitenberg vehicles 2a and 2b, [6], that were implemented as LEGO vehicles in Lesson 6, only one behavior can be seen by an outside observer: driving towards light or driving away from light.
In this lesson we will use the behavior control paradigm, [4] and [7], to implement several observable behaviors on a single NXT that controls a LEGO vehicle with an ultrasonic sensor, a light sensor and two touch sensors.

The base vehicle from Lesson 6 has been augmented with a bumper, an ultrasonic sensor and a light sensor, mounted as shown in the photo-based building instructions for a base car with extensions.

The classes for this Lesson can be downloaded as Programs.zip.

LEGO vehicle that exhibits a single behavior
Chapter 9 in [4] presents how to implement the behavior control paradigm. Each behavior is a simple mapping from sensors to actuators as described for the single avoid behavior shown in Figure 9.3 of [4].




The program AvoidFigure9_3.java implements the avoid behavior with a single ultrasonic sensor.

Download the program and observe the car. Describe how the car behaves.
The car drives forward until the ultrasonic sensor senses that the car is close to some object. The car then turns a little to the left and gets a leftDistance reading, then turns the same amount to the right and records a rightDistance reading. Finally the car drives in the direction with the most free space in front of it.


Figure 1 - Picture of the vehicle with 1 sonar sensor, 1 light sensor and a bumper connected to 2 touch sensors

Change the program so that the car drives backward a little when all the three distances, leftDistance, frontDistance, and rightDistance are less than stopThreshold, and afterwards the car should spin around on the spot 180 degrees.

We did this by using NXTRegulatedMotor instead of MotorPort, editing the PrivateCar class. NXTRegulatedMotor can rotate() one wheel by a given number of degrees, which we used to make the car spin around on the spot by 180 degrees. A video can be seen in our gallery [2]. The following code shows how we check whether we are in a corner; the rotate method used to spin around in place can be found below:

Figure 2 - We check if we are in a corner, and if we are, we go backwards and then turn 180 degrees on the spot.

Figure 3 - We instantiate NXTRegulatedMotor for the left and right motor



Figure 4 - We use the NXTRegulatedMotor global variables to make the rotate method, which makes the car spin around on the spot. We found that rotating the left wheel 500 degrees forward while the right wheel rotates backward, see figure 2, approximately made the car turn 180 degrees in place. We used the small wheels, 56 x 26 mm, see picture below.


Figure 5 - The wheels we used for our car
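The code in figures 2-4 is only available as screenshots, so here is a rough sketch of the corner check and the in-place rotation. The 500-degree wheel rotation and the stopThreshold of 30 cm are from our report; the motor ports, the backing-up time and the helper names are assumptions.

import lejos.nxt.MotorPort;
import lejos.nxt.NXTRegulatedMotor;

// Sketch of the corner check and the 180-degree spin on the spot.
public class CornerEscapeSketch {
    static final NXTRegulatedMotor leftMotor  = new NXTRegulatedMotor(MotorPort.B);
    static final NXTRegulatedMotor rightMotor = new NXTRegulatedMotor(MotorPort.C);
    static final int stopThreshold = 30;   // cm

    // Spin roughly 180 degrees in place: left wheel 500 degrees forward,
    // right wheel 500 degrees backward (found by experiment, see figure 4).
    static void rotateInPlace() {
        leftMotor.rotate(500, true);       // immediate return, so both wheels turn together
        rightMotor.rotate(-500);           // blocks until the rotation is done
    }

    static void checkCorner(int leftDistance, int frontDistance, int rightDistance)
            throws InterruptedException {
        if (leftDistance < stopThreshold && frontDistance < stopThreshold
                && rightDistance < stopThreshold) {
            // We are in a corner: back up a little, then turn around.
            leftMotor.backward();
            rightMotor.backward();
            Thread.sleep(1000);            // backing-up time is an assumption
            leftMotor.stop();
            rightMotor.stop();
            rotateInPlace();
        }
    }
}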

Behaviors as concurrent threads
In section 9.5 in [4] it is shown how to implement the behavior control network of Figure 9.9 by means of processes in the language IC (Interactive C):

The program RobotFigure9_9.java implements the Avoid, Follow and Cruise behaviors and the arbitration mechanism suggested on page 306 in [4].

Download the program and observe the car. Describe how the car behaves. Try to describe the conditions that trigger the different behaviors.
The car drives forward when it is not close to an object. If it is close to a wall or another object that the ultrasonic sensor can detect, it turns a little, driving backwards first on one of the wheels and then on the other.

If you look into the program RobotFigure9_9.java you will see that there are three objects:

    Avoid avoid   = new Avoid(car[1]);
    Follow follow = new Follow(car[2]);
    Cruise cruise = new Cruise(car[3]);
   
Each of these objects is a thread that is created and started by the main program, and thereafter they run concurrently with the main thread. Each of these threads represents a single behavior: cruise makes the car drive forward, follow uses the single light sensor to follow bright light in the environment, and avoid uses the single ultrasonic sensor to avoid objects in front of the car.

Look into the three classes and try to identify how the triggering conditions are implemented in each of the classes and how the actions for each behavior is implemented.

Avoid.java is similar to AvoidFigure9_3.java; the main difference is that Avoid has a while-loop within a while-loop. The inner while-loop checks whether frontDistance is greater than or equal to stopThreshold = 30. As long as this holds, noCommand() is called on the SharedCar car, nothing happens, and frontDistance is measured again. The triggering condition is frontDistance < stopThreshold. When it is met, the car turns a little to the left and makes a leftDistance reading, then turns the same amount to the right and makes a rightDistance reading. The direction in which the largest distance (the most space) is measured is the direction in which the car drives.

In the Follow class the trigger condition is frontLight > lightThreshold. The lightThreshold is sampled in the constructor of the Follow class. The car then continually measures the light sensor value (frontLight), and if it is greater (brighter) than in the surroundings where the car was started (the lightThreshold), the trigger condition frontLight > lightThreshold is met. When this happens the car measures the light conditions to the right and to the left. It calculates a delta variable, which is the measured value from the right side subtracted from the measured value from the left side. The power to the motors is then calculated by either subtracting or adding the delta value to the power values in such a manner that the car drives towards the light.

The Cruise class has a while-loop, and as long as the loop runs the car drives forward. The Cruise class does not have a triggering condition.
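To make the Follow trigger and the delta-based steering concrete, here is a small illustration. It is not the code from Follow.java; the sensor port, base power and method names are our own assumptions, and the left/right readings are simply passed in instead of being taken by turning the car.

import lejos.nxt.LightSensor;
import lejos.nxt.SensorPort;

// Illustration of the Follow trigger condition and the delta-based steering.
public class FollowSketch {
    private final LightSensor light = new LightSensor(SensorPort.S3);
    private final int lightThreshold;

    public FollowSketch() {
        // Sample the ambient light level where the car is started.
        lightThreshold = light.readValue();
    }

    // Returns {leftPower, rightPower}, or null if the behavior is not triggered.
    public int[] steer(int leftLight, int rightLight) {
        int frontLight = light.readValue();
        if (frontLight <= lightThreshold) {
            return null;                       // trigger condition frontLight > lightThreshold not met
        }
        int delta = leftLight - rightLight;    // positive when it is brighter to the left
        int basePower = 70;                    // assumed base power
        // Shift power between the wheels so the car turns towards the brighter side.
        return new int[] { basePower - delta, basePower + delta };
    }
}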

Try to watch the car with only the cruise thread active and then with only the cruise and the follow threads active to observe the behaviors more clearly.

When only the cruise thread is active, the car drives straight forward and does not react to distance and light.

When the follow thread is activated as well as the cruise thread, the car does not always go straight forward, but sometimes measures the light to the left and right and then goes in the direction with the most light. This is due to the follow thread.

Add an Escape behavior
In the Figure 9.9 the top priority behavior is Escape. An implementation of Escape in IC is given on page 305 of [4].

Implement the Escape behavior in the RobotFigure9_9.java program.
The escape behavior is supposed to allow the robot to escape from collisions, using the bump sensor to detect collisions with obstacles. We found that it was better to let the car go backwards on one wheel, instead of going forwards, to turn left or right when only one of the bumpers was activated. We also found that the time it should go backwards straight, to the left or to the right was better at 1000 ms than at 500 ms. The code can be seen below, and a video can be seen in our gallery, where the car runs for more than two minutes without getting stuck (it actually drove for more than 5 minutes without getting stuck before we had to move on to the next exercise, so it seems to work quite well).


Figure 6 - The escape behavior
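Since figure 6 is a screenshot, here is an untested sketch of the escape reaction described above, written outside the SharedCar/Arbiter framework of RobotFigure9_9. The sensor and motor ports are assumptions; the 1000 ms backing-up time and the one-wheel backing-up idea are from our description, and which wheel to use for which bumper depends on the bumper geometry.

import lejos.nxt.MotorPort;
import lejos.nxt.NXTRegulatedMotor;
import lejos.nxt.SensorPort;
import lejos.nxt.TouchSensor;

// Untested sketch of the escape behavior's reaction to the bumpers.
public class EscapeSketch {
    static final NXTRegulatedMotor leftMotor  = new NXTRegulatedMotor(MotorPort.B);
    static final NXTRegulatedMotor rightMotor = new NXTRegulatedMotor(MotorPort.C);
    static final TouchSensor leftBumper  = new TouchSensor(SensorPort.S1);
    static final TouchSensor rightBumper = new TouchSensor(SensorPort.S2);

    static void escapeStep() throws InterruptedException {
        boolean left = leftBumper.isPressed();
        boolean right = rightBumper.isPressed();
        if (!left && !right) return;           // no collision, nothing to do

        if (left && right) {                   // head-on collision: back straight up
            leftMotor.backward();
            rightMotor.backward();
        } else if (left) {                     // back up on one wheel so the car turns away
            rightMotor.backward();
            leftMotor.stop();
        } else {
            leftMotor.backward();
            rightMotor.stop();
        }
        Thread.sleep(1000);                    // 1000 ms worked better than 500 ms for us
        leftMotor.stop();
        rightMotor.stop();
    }
}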

Add a third motor to turn the light sensor
In the photo-based building instructions for a base car with extensions it is shown how a horizontal motor can be added to the base vehicle.

Mount the light sensor so it can be turned by this horizontal motor. Use this mechanism to re-implement the Follow behavior so instead of turning the car to get readings to the left and right, the horizontal motor is used.

We mounted the horizontal motor following the photo-based building instructions for a base car with extensions [3]. We made the car stop while it took the readings. We used the NXTRegulatedMotor and its rotate() method to rotate the light sensor 45 degrees to the left and right to get the readings.


Figure 7 - Picture of the car with the light sensor mounted on a motor
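A small sketch of how the scanning could look is shown below. The motor and sensor ports are assumptions, while the 45-degree scan angle and the use of NXTRegulatedMotor.rotate() are from our description.

import lejos.nxt.LightSensor;
import lejos.nxt.MotorPort;
import lejos.nxt.NXTRegulatedMotor;
import lejos.nxt.SensorPort;

// Sketch: take left/right light readings with the horizontal sensor motor
// instead of turning the whole car. The car is assumed to be stopped.
public class SensorScanSketch {
    static final NXTRegulatedMotor sensorMotor = new NXTRegulatedMotor(MotorPort.A);
    static final LightSensor light = new LightSensor(SensorPort.S3);

    static int[] scan() {
        sensorMotor.rotate(45);        // turn the sensor 45 degrees to one side
        int leftLight = light.readValue();
        sensorMotor.rotate(-90);       // swing 90 degrees over to the other side
        int rightLight = light.readValue();
        sensorMotor.rotate(45);        // back to centre
        return new int[] { leftLight, rightLight };
    }
}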

The classes SharedCar and Arbiter
Describe how the classes SharedCar and Arbiter implement the arbitration suggested on page 306 in [4].
SharedCar and Arbiter work together in the following way: the behaviors Escape, Avoid, Follow and Cruise each have their own SharedCar object, and these objects are also stored in an array in the RobotFigure9_9 class. The SharedCar also holds a CarCommand. The Arbiter then loops over the SharedCar array and checks, for each associated behavior, whether a command is ready, which happens when a behavior calls forward(), backward() or stop() on its SharedCar object. If that is the case, the highest-priority thread with a command ready, which is the one associated with the SharedCar object at the lowest index in the array, is the one whose command is carried out.
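A stripped-down sketch of that arbitration loop is shown below. It is not the real SharedCar and Arbiter code; the field and method names are simplified assumptions based on the description above, and the motor control is left out.

// Simplified sketch of the arbitration: the lowest index with a command ready wins.
class SharedCarSketch {
    private int[] command;                              // {leftPower, rightPower}, or null

    synchronized void forward(int left, int right) { command = new int[] { left, right }; }
    synchronized void noCommand()                  { command = null; }
    synchronized int[] getCommand()                { return command; }
}

class ArbiterSketch extends Thread {
    private final SharedCarSketch[] cars;               // index 0 = highest priority (Escape)

    ArbiterSketch(SharedCarSketch[] cars) { this.cars = cars; }

    public void run() {
        while (true) {
            // The first behavior (lowest index) with a command ready gets the motors.
            for (SharedCarSketch c : cars) {
                int[] cmd = c.getCommand();
                if (cmd != null) {
                    // send cmd[0] / cmd[1] to the left / right motor here
                    break;
                }
            }
            try { Thread.sleep(10); } catch (InterruptedException e) { return; }
        }
    }
}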

Compare this with the arbiter of Fred Martin, [5, page 214-218].
The difference between Martin's implementation of the prioritization algorithm and our implementation is that where Martin uses five arrays, our code only uses one single array, corresponding to Martin's process_priority[]. This array can be found in the RobotFigure9_9 class as SharedCar[] car.

The other four arrays are:
  process_name[], which holds the name of the processes or threads,
  process_enable[], which can be used to enable or disable a process or thread,
  left_motor[], which holds the left motor command for the different processes, and
  right_motor[], which holds the right motor command for the processes found at the corresponding indexes.

These are implemented differently using the SharedCar and the Arbiter. Each behavior or thread/process holds its own process_name in the class name, for example Escape. Each behavior also holds a SharedCar object, where the left and right motor command arrays are represented differently, namely as stop(), forward() and backward() methods. The process_enable[] array, which would allow disabling a thread in the code, is not implemented in the SharedCar and Arbiter implementation.

Conclusion:
We have completed this week's tasks.

References:
[2] Videos of the exercises: http://goo.gl/jbwodv
[4] Jones, Flynn, and Seiger, "Mobile Robots, Inspiration to Implementation", Second Edition, 1999.
[5] Fred G. Martin, Robotic Explorations: A Hands-on Introduction to Engineering, Prentice Hall, 2001.
[6] Braitenberg, V., Vehicles: Experiments in Synthetic Psychology, London/Cambridge: The MIT Press, 1984.
[7] Rodney Brooks, A robust layered control system for a mobile robot, IEEE Journal of Robotics and Automation, RA-2(1):14-23, 1986; also MIT AI Memo 864, September 1985.