Robot Exercises 2008


This page contains the exercises for the course Robot Eksperimentarium 2008.


Exercise 1: Hello, World

In this exercise we will learn how to do simple driving maneuvers with the robot. The main purpose of the exercise is to become familiar with ERSP and/or Player/Stage.

In order to run programs on the robot you have to go through the following steps:

  1. Compile the program on a machine with an ERSP or Player installation.
  2. Connect the robot to the machine/laptop using the USB cables and turn on the robot.
  3. Run the program.

If you use Player you can in principle run the Player server and client programs on two different machines. However, the Player server should always be run on one of the robot laptops with an ERSP installation. Your client program can run on another machine on the network and connect to the server.

Notes for ERSP users

In order to access various devices on the robot we have to initialize the hardware layer of ERSP. In C++ you do this by:

Evolution::ResourceManager *resource_manager;
Evolution::Result result;
resource_manager = new Evolution::ResourceManager( NULL, &result ); 
Evolution::IResourceContainer *resource_container;
result = resource_manager->get_resource_container( 0, &resource_container );

Now the device interfaces are accessible through resource_container. The variable result tells you whether the attempted operation succeeded or failed, so it is good practice to check that it equals Evolution::RESULT_SUCCESS after every operation.
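
For example, after obtaining the resource container, a check could look like this (a minimal sketch; how you handle the error is up to you):

if (result != Evolution::RESULT_SUCCESS)
{
    std::cerr << "Could not obtain the resource container." << std::endl;  // requires <iostream>
    return -1;
}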

Sub-exercise 1: Simple movement

Let's start by making the robot drive around in a square. That is, make the robot repeat the following four times: drive 1 meter straight ahead and then turn 90 degrees.


ERSP Hints

It is necessary to initialize the motor control system Evolution::IDriveSystem. In C++ you do this by:

Evolution::IDriveSystem *driver;
result = resource_container->obtain_interface(Evolution::NO_TICKET, "drive", 
            Evolution::IDriveSystem::INTERFACE_ID, (void**) &driver);

Now you can use the methods move_delta and turn_delta on the object driver. For details on how these methods work, consult the documentation for IDriveSystem and the ERSP tutorial section 2.6.4 (04-drive-system).

Hint 1: For your first ERSP program it is a good idea to use one of our ERSP Templates.

Hint 2: By default the robot will try to avoid bumping into objects. This is usually a good idea, but it can cause some problems during development and debugging. In this exercise we suggest that you turn off the robot's automatic Avoidance system. In C++ this is done by:

Evolution::IAvoidance *avoid;
result = resource_container->obtain_interface(Evolution::NO_TICKET, "avoidance", 
Evolution::IAvoidance::INTERFACE_ID, (void**) &avoid);

Now the Avoidance system can be turned off by

avoid->disable_avoidance(Evolution::NO_TICKET);

Hint 3: When accessing the ERSP drive system you have to specify speed and acceleration. The following values are good initial choices for these parameters:

const double velocity = 20; // cm/sec
const double acceleration = 20; // cm/sec^2
const double angular_velocity = 0.5; // radians/sec
const double angular_acceleration = M_PI/2.0; // radians/sec^2

Hint 4: If you experience problems with ERSP you will find a lot of help in the tutorial as well as the Doxygen generated API documentation. Both are found in the directory /opt/evolution_robotics/doc/ on the robot laptops.
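
Putting the hints together, the square route from sub-exercise 1 could be sketched as below, using the driver object and the constants from the hints above. Note that the exact argument lists of move_delta and turn_delta (and whether the calls block until the motion has finished) are assumptions here; check the IDriveSystem documentation for the precise signatures and units.

// Sketch only: repeat "1 meter forward, then turn 90 degrees" four times.
// The argument lists below are assumptions -- consult the IDriveSystem
// documentation for the exact signatures and for how to wait until a motion
// has completed before issuing the next command.
for (int side = 0; side < 4; ++side)
{
    driver->move_delta(Evolution::NO_TICKET, 100.0, velocity, acceleration);   // 100 cm forward
    driver->turn_delta(Evolution::NO_TICKET, M_PI/2.0, angular_velocity, angular_acceleration);  // 90 degrees
}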

Sub-exercise 2: Continuous motion

The functionality we worked with in sub-exercise 1 works well in known environments where we can plan every movement of the robot carefully. However, in most situations we are not that lucky, and we need to be able to control the robot's motion continuously. This is done through a continuous drive interface; in ERSP this is called move_and_turn.

It is up to your group to come up with an example route on which to experiment with continuous motion control. As inspiration, you could consider driving along a figure-8 route.
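
As a rough illustration, a figure-8 could be driven by repeatedly issuing a move_and_turn command with a fixed linear velocity and an angular velocity that changes sign halfway through. The argument list of move_and_turn used below is an assumption; consult the IDriveSystem documentation for the exact signature and units.

#include <unistd.h>   // usleep

// Sketch only, reusing the constants and the driver object from sub-exercise 1.
// The assumed call is move_and_turn(ticket, velocity, acceleration,
// angular_velocity, angular_acceleration) -- verify this in the documentation.
void drive_figure_eight(Evolution::IDriveSystem *driver)
{
    const double v  = 20.0;        // cm/sec
    const double a  = 20.0;        // cm/sec^2
    const double aw = M_PI / 2.0;  // radians/sec^2

    for (int half = 0; half < 2; ++half)
    {
        // Turn one way for the first loop of the 8, the other way for the second.
        const double w = (half == 0) ? 0.5 : -0.5;  // radians/sec
        for (int i = 0; i < 100; ++i)               // keep re-issuing the command for ~10 seconds
        {
            driver->move_and_turn(Evolution::NO_TICKET, v, a, w, aw);
            usleep(100000);                         // wait 0.1 second between updates
        }
    }
}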

Exercise 2: Fuzzy Logic (Avoidance)

In this exercise we implement an Avoidance System using Fuzzy Control. Based on measurements from the distance sensors, the robot decides where it can safely move, i.e. move without touching objects in the environment.

We use a Java Fuzzy Logic implementation called jFuzzyLogic to do the computations. As the robots are programmed in C/C++, we access the Java functions either using a client/server architecture or using the Java Native Interface JNI.

Client/server solution:
A server (programmed in Java)

  • receives sensor data from the client through a socket,
  • processes the sensor measurements by a Fuzzy Logic system and
  • sends the result to the client through the same socket.

A client (programmed in C/C++)

  • reads data from the distance sensors,
  • sends these data to the server using a socket,
  • receives the resulting motor data from the server and
  • controls the motors by setting the desired speed (see the sketch below).
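
As an illustration of the client side, the loop could be structured as in the following sketch. The hostname, port, and wire format (whitespace-separated numbers over the socket) are made up here; the supplied client code in SVN under Exercise2 shows the actual protocol used by the server.

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    // Look up the server address (hostname and port are made up for this sketch).
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("localhost", "5000", &hints, &res) != 0) return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
    freeaddrinfo(res);

    FILE *out = fdopen(fd, "w");        // one stream for sending,
    FILE *in  = fdopen(dup(fd), "r");   // one for receiving

    for (int i = 0; i < 100; ++i)
    {
        // Dummy sensor values -- replace these with real readings from the robot.
        double front = 100.0, rear = 100.0, left = 50.0, right = 50.0;
        fprintf(out, "%f %f %f %f\n", front, rear, left, right);
        fflush(out);

        double velocity, angular_velocity;
        if (fscanf(in, "%lf %lf", &velocity, &angular_velocity) != 2) break;
        // Here the result would be passed on to the motor control.
        printf("velocity = %f, angular velocity = %f\n", velocity, angular_velocity);
    }

    fclose(out);
    fclose(in);
    return 0;
}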

JNI solution:
Using JNI you can start a JVM from C++ (see the sketch after this list).

  • A Java class interfaces with jFuzzyLogic (fuzzy.java)
  • A C++ wrapper uses this Java class by starting a JVM (fuzzy.h, fuzzy.cpp).
  • The jFuzzyLogic System is used through the C++ wrapper class specifying an FCL file.
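
A minimal example of starting a JVM with the JNI invocation API is sketched below. This is only for illustration; the supplied fuzzy.h / fuzzy.cpp wrapper already handles this, and the classpath entry is an assumption.

#include <jni.h>
#include <stdio.h>

int main()
{
    JavaVM *jvm;
    JNIEnv *env;

    JavaVMOption options[1];
    options[0].optionString = (char *) "-Djava.class.path=.:jFuzzyLogic.jar";  // assumed classpath

    JavaVMInitArgs vm_args;
    vm_args.version            = JNI_VERSION_1_4;
    vm_args.nOptions           = 1;
    vm_args.options            = options;
    vm_args.ignoreUnrecognized = JNI_FALSE;

    if (JNI_CreateJavaVM(&jvm, (void **) &env, &vm_args) != JNI_OK)
    {
        fprintf(stderr, "Could not create the JVM\n");
        return 1;
    }

    // ... use env to look up the fuzzy class and call its methods ...

    jvm->DestroyJavaVM();
    return 0;
}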


The following is supplied:

  • An implementation of the server. This does not include the Fuzzy logic part, which has to be developed as a part of the exercise.
  • A client program that reads "sensor data" from the command line (current input). This program demonstrates how to communicate with the server and can also be used when debugging the Fuzzy Logic system. The code is found in SVN under Exercise2. See the file README.txt.
  • For the alternative JNI solution, wrapper classes (fuzzy.java, fuzzy.h, fuzzy.cpp), an example of how to use them, and a corresponding Makefile are found in SVN under Exercise2/JNI_solution.

The exercise is then

  • to write a program in C/C++ which samples the distance sensors. The sensor data are given to the Java Fuzzy Logic subsystem which returns the target value for the speed. The speed value is then given to the motor control.
  • to write the Fuzzy Logic part of the server in Fuzzy Control Language (FCL). The server processes scripts in FCL when started with the command ./FAZ FCL_FILENAME, where FCL_FILENAME is the name of your FCL file.

The resulting FCL script must have four input variables ("Front_sensor", "Rear_sensor", "Left_sensor" and "Right_sensor") and two output variables ("Velocity" and "Angular_velocity") in order to work with the supplied Java program (you have to modify the Java program if this is not the case). You can use the following template (in the file Avoidance.fcl in SVN):

FUNCTION_BLOCK FuzzyAvoidance

// Define input variables
VAR_INPUT
    Front_sensor : REAL;
    Rear_sensor  : REAL;
    Left_sensor  : REAL;
    Right_sensor : REAL;
END_VAR

// Define output variables
VAR_OUTPUT
    Velocity         : REAL;
    Angular_velocity : REAL;
END_VAR

// Fuzzification of input
FUZZIFY Front_sensor
    // Fill this!
END_FUZZIFY

FUZZIFY Rear_sensor
    // Fill this!
END_FUZZIFY

FUZZIFY Left_sensor
    // Fill this!
END_FUZZIFY

FUZZIFY Right_sensor
    // Fill this!
END_FUZZIFY

// Defuzzification
// Velocity is in cm/sec and Angular_velocity is in radians/sec
DEFUZZIFY Velocity
    // Fill this!
END_DEFUZZIFY

DEFUZZIFY Angular_velocity
    // Fill this!
END_DEFUZZIFY

RULEBLOCK myrules
    // Fill this!
END_RULEBLOCK
END_FUNCTION_BLOCK

The material is found in SVN under Exercise2. Copy the contents of Exercise2 into your group folder. Do not make changes inside the Exercise2 directory itself and then commit in SVN; that would replace the contents of Exercise2 with your solution and overwrite the example code for everybody else as well. Instead, do an svn copy (see the SVN description).

Exercise 3: Use the camera to follow an object

In this exercise we use the camera to follow a moving object, solely by using the visual input from the camera. The object is one of the orange cones found in the laboratory.

This exercise has two parts: Training and following.

Training

In this exercise we recommend that you work with a colour model of the cone which describes the distribution of the orange pixels (e.g. their RGB values) by a Gaussian (normal) distribution. This means that the training consists of determining the mean vector and covariance matrix for the orange pixels. The OpenCV function cvCalcCovarMatrix can be used for computing these properties.
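
A sketch of the training step is shown below: it packs N collected training pixels into a matrix and estimates the mean vector and covariance matrix with cvCalcCovarMatrix (OpenCV 1.x C API). The helper name and the sample format are made up for this illustration, and error handling is omitted.

#include <cv.h>

// Sketch: estimate the colour model from N training pixels. "pixels" holds
// N x 3 channel values in the same channel order as the camera images.
// "mean" must be a 1x3 and "cov" a 3x3 floating-point matrix (e.g. CV_32FC1).
void train_colour_model(const unsigned char *pixels, int N, CvMat *mean, CvMat *cov)
{
    // One training pixel per row in an N x 3 sample matrix.
    CvMat *samples = cvCreateMat(N, 3, CV_32FC1);
    for (int i = 0; i < N; ++i)
        for (int c = 0; c < 3; ++c)
            cvmSet(samples, i, c, pixels[3 * i + c]);

    // CV_COVAR_ROWS: each row is one sample. CV_COVAR_NORMAL: ordinary 3x3
    // covariance. CV_COVAR_SCALE: divide by the number of samples.
    const CvArr *input[] = { samples };
    cvCalcCovarMatrix(input, 1, cov, mean, CV_COVAR_NORMAL | CV_COVAR_ROWS | CV_COVAR_SCALE);

    cvReleaseMat(&samples);
}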

Following

After the training is done the robot is to pursue the object. You can use the pre-programmed tracker as a part of the solution. The main loop consists of the following:

  1. Use the camera to take a picture (giving a colour input image)
  2. Use the tracker to determine the position of the object in the image
  3. Modify the way the robot moves according to the position and size of the object

Step 1: To access the camera use OpenCV -- and more specifically highgui. In this exercise you need to use the functions cvCaptureFromCAM and cvQueryFrame.
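
A minimal capture loop with these functions could look like the following sketch (camera index 0 and the window name are assumptions):

#include <cv.h>
#include <highgui.h>

int main()
{
    CvCapture *capture = cvCaptureFromCAM(0);   // 0 = first camera
    if (!capture) return 1;

    cvNamedWindow("camera", CV_WINDOW_AUTOSIZE);
    while (true)
    {
        IplImage *frame = cvQueryFrame(capture);  // owned by the capture, do not release
        if (!frame) break;
        // ... compute the probability map and update the tracker here ...
        cvShowImage("camera", frame);
        if (cvWaitKey(10) == 27) break;           // stop when ESC is pressed
    }
    cvReleaseCapture(&capture);
    return 0;
}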

Step 2: The tracker uses a probability map. This means that you have to compute for each pixel the probability that its value belongs to the orange cone. Then use the function tracking::update to do the tracking. The position and size of the object can be determined by the functions tracking::object_position and tracking::object_size. We recommend that you look at the documentation before using the tracker.

Step 3: We expect the following behavior:

  • If the object is to the right, the robot should turn right (and correspondingly for the object being to the left)
  • If the object is far away (is small) the robot should move towards the object. If the object is very close, the robot should move backwards. Hence, the robot should try to keep a constant distance to the object.

Hint 1: The orange cones have holes which can be hidden by putting one cone on top of another. This makes the detection easier.

Hint 2: When working with colours it is important that the camera does not change its sensitivity dynamically; otherwise you need to update the colour model as the scene lighting changes. The easiest way to handle this is to set the gain to 0, which can be done with the pwc-driver found in the src/ SVN directory.

Hint 3: It is recommended that the probability is computed by using a Gaussian distribution model of the RGB values. The tracker works with 8-bit values, which means that the probability densities (in the range [0, 1/Z], where Z is the normalization constant of the Gaussian distribution) have to be scaled to values in [0, 255]. This may be done by

    probability = round( 255*exp(-0.5*d*d));

where d is the Mahalanobis distance from the pixel in question to the mean pixel value (a parameter in the model). (The Mahalanobis distance may be computed by the function cvMahalanobis).
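
A sketch of how the full probability map could be filled in is shown below. It assumes the colour model (mean and covariance) from the training step, the PIXEL3 helper from Hint 4 below, and the OpenCV 1.x C API; note that cvMahalanobis expects the inverse covariance matrix.

#include <cv.h>
#include <math.h>

// Sketch: fill an 8-bit, single-channel probability image from the colour model.
void probability_map(IplImage *input, IplImage *prob, const CvMat *mean, const CvMat *cov)
{
    CvMat *icov = cvCreateMat(3, 3, CV_32FC1);
    cvInvert(cov, icov, CV_SVD);                 // cvMahalanobis wants the inverse covariance

    float data[3];
    CvMat pix = cvMat(1, 3, CV_32FC1, data);     // one pixel as a 1x3 vector

    for (int y = 0; y < input->height; ++y)
    {
        for (int x = 0; x < input->width; ++x)
        {
            unsigned char *p = PIXEL3(input, x, y);      // channel order as in Hint 4
            data[0] = p[0]; data[1] = p[1]; data[2] = p[2];
            double d = cvMahalanobis(&pix, mean, icov);
            ((unsigned char *) (prob->imageData + y * prob->widthStep))[x] =
                (unsigned char) (255.0 * exp(-0.5 * d * d) + 0.5);   // round to [0, 255]
        }
    }
    cvReleaseMat(&icov);
}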

Hint 4: Here is how to read the data in a pixel:

 inline unsigned char* PIXEL3(IplImage* im, int x, int y)
 {
    // Rows of an IplImage may be padded, so use widthStep for the row offset.
    int pos = 3 * x + y * im->widthStep;
    return ((unsigned char*) (im->imageData + pos));
 }

and you use it like

 unsigned char* pix = PIXEL3(input, x, y);
 pix[0] // Blue channel
 pix[1] // Green channel
 pix[2] // Red channel

Exercise 4: Localization

In this exercise you will program the robot to estimate its own position from visual observations. More precisely, the robot knows the location of two orange cones (our landmarks) with uniquely colored (blue and yellow) balls on top. When the robot sees one of the landmarks it can improve its estimate of its own position.

You should solve this problem by implementing a particle filter for estimation of the robot's position and its pose (orientation). We provide you with code for recognizing the two different landmarks and for measuring distances to them. You find this code in the course SVN under Exercise4 (read the README.txt file).

The robot needs to know the positions of the two landmarks (they are fixed), so they have to be represented somehow in your program. In the code we provide, one landmark is located at (0,0) and the other at (300,0), which means that the landmarks should be physically placed 300 cm apart.

As indicated above, the robot has to be able to distinguish between the two landmarks, and we do this by placing a yellow ball on top of one cone and a blue ball on top of the other. You can find balls with holes that fit the cones in image lab. In order for the robot camera to see the orange cones and colored balls it may be necessary to lift them a bit off the ground. For this purpose we have specially engineered precision styrofoam (Danish: flamingo) blocks that you can find in image lab (each block consists of 3 pieces glued together).


Hint 1: The distance between the two landmarks does not have to be precisely 300 cm in order for the program to work. Just make sure that the distance corresponds roughly (an error of around 10 cm should not make that big a difference).

Hint 2: In order for the distance calculation to work correctly it is necessary to perform a rectification of the camera lens distortion. The provided code does this for you. For details see Kalibrering af Scorpion robotternes kamera (in Danish).

Hint 3: You have to estimate the robot's position and orientation, which means three parameters (x, y, theta). However, to begin with, we suggest that you focus on estimating the position (x, y). When this works, you can try to estimate the orientation theta as well.

Hint 4: Notice that the coordinate system in the visualization is seen from below, as if the floor is made of glass.

Hint 5: For the deterministic part of the dynamical model you can use the motor commands that you issue to the robot, or you can try to use odometry estimates (the Player driver does not support this, but ERSP does).

Hint 6: Don't use the function move_particles in particle.h, it is not correct!

Hint 7: As observation model p(z|x) you can use a Gaussian distribution on distances: its mean should equal the distance from the particle state to the landmark, and its variance should reflect the measurement error. The same idea applies to the orientation measurement; use two separate Gaussian models and simply multiply them to get the weight update.
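
A sketch of the resulting weight computation for one observed landmark is shown below; the function and variable names and the two standard deviations are assumptions you should adapt and tune yourself.

#include <math.h>

// Value of a 1D Gaussian density with the given mean and standard deviation.
double gaussian(double x, double mean, double sigma)
{
    const double d = (x - mean) / sigma;
    return exp(-0.5 * d * d) / (sigma * sqrt(2.0 * M_PI));
}

// Weight of one particle given a single landmark observation.
// measured_dist / measured_angle: what the measurement code reported.
// particle_dist / particle_angle: distance and bearing from the particle's
// state to the known landmark position.
double observation_weight(double measured_dist, double particle_dist,
                          double measured_angle, double particle_angle,
                          double sigma_dist, double sigma_angle)
{
    return gaussian(measured_dist, particle_dist, sigma_dist)
         * gaussian(measured_angle, particle_angle, sigma_angle);
}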

Exercise 5: Flocking behaviour

In this exercise you will work with flocking behaviour. You will implement a flocking algorithm which follows a set of local rules for the behaviour of the individual robots (see the rules below). The same program should run on several robots simultaneously, and (hopefully) flocking behaviour will emerge from this setup.

Put an orange cone on top of each robot as a hat. We have specially manufactured styrofoam blocks with a slit that fits on the robot's handle. You can tape a cone on top of such a block, and it stays on the robot while it drives.

You may want to use the cone tracker and the distance measurement from previous exercises (e.g. 3 and 4). You need these in order to find and track cones and thereby estimate orientations of and distances to other robots.

Rules for robot behaviour ordered after precedence:

  1. Keep a minimum distance of e.g. 1 m to the closest robots (separation). You could use your obstacle avoidance system from exercise 2 to fulfill this rule.
  2. Ask all other robots about their speed via network communication (see below). Use the average speed as the basis for this robot's speed. To avoid the situation where all robots stand still, it is a good idea to add a bit of random noise to the speed. It may also be a good idea to use the principle of hysteresis when changing the speed in order to get a smooth motion.
  3. If this robot sees an orange cone in its field of view, then steer towards it (cohesion and alignment). Drive as close as possible to the other robots in your field of view without bumping into them. This controls the orientation and position of the robot.
  4. If this robot does not see a cone, then perform a random walk (see below).

What happens if some robots disappear from view (e.g. behind a wall)? Is it possible for the flock to separate around objects with the current set of rules?

Is it possible for you to guide the flock through the door of image lab by guiding the leading robot with a handheld cone?

Can you make them drive in a circle following each other?

Additional rule: Possible way of recovering from losing parts of the flock.

  • If you cannot see other robots (orange cones), ask for their locations and estimate your own location following the approach from exercise 4. Then steer towards the other robots' locations until you see them.

Try to come up with additional rules for handling problems in your flock.


Tips: Sharing robot laptops

In this exercise you need several robot laptops to test the system in practice. Try to make appointments with other groups and be present at the Thursday exercises.


Tips: Max speed

It is a good idea to limit the maximum speed at which the robot can drive. In this exercise there is a large risk that the robots will crash into each other, and we would like to avoid trashed bumpers and other damage to our robots!


Tips: Random walk

This can be implemented by sampling a random orientation and then moving the robot in that direction at, e.g., a constant speed. To avoid too erratic motion, we suggest that you sample the new orientation from a Gaussian distribution with the mean given by the current orientation of the robot and with some appropriate variance. The size of the variance dictates how dramatic a turn the robot makes.

Example in pseudo code:

 theta_new = theta_0 + randn(0.0, sigma_theta);

where theta_0 is the current orientation, sigma_theta is the standard deviation on the orientation, and randn is a function for sampling from a Gaussian distribution with mean and standard deviation given.

You could also in a similar manner change the speed by sampling:

 v_new = v_0 + randn(0.0, sigma_v);

where v_0 is the current speed and sigma_v is the standard deviation on the speed.
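
If you do not already have a Gaussian sampler, randn can be implemented in C++ with the Box-Muller transform on top of rand(), as in the sketch below (remember to seed with srand once at startup):

#include <math.h>
#include <stdlib.h>

// Sample from a Gaussian distribution with the given mean and standard
// deviation, using the Box-Muller transform on two uniform samples.
double randn(double mean, double sigma)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 1.0);   // uniform in (0, 1]
    double u2 = (rand() + 1.0) / (RAND_MAX + 1.0);
    double z  = sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);  // standard normal sample
    return mean + sigma * z;
}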

Tips: Communication

To communicate between two robots one can use TCP sockets. This means that one machine has to act as a server and the other as a client. You can find a simple example of a client-server setup on the Practical Sockets home page --- look at the examples called TCPEchoServer and TCPEchoClient.

One possible way to let the robots communicate about their speed, position, etc. is to let each robot act as both server and client. The robot keeps a list of open connections to the other robots in order to ask them about, e.g., their speed. Similarly, the robot must accept connections from the other robots so that they can ask about its speed. You may want to use threads for this part --- we suggest PThreads.

Alternatively, you could use some form of UDP communication or come up with a clever way for the robots to agree on who is server and client. And then only use one channel for communication both ways.

A simple solution for transmitting data over a TCP socket is sketched in the following functions:

#include "PracticalSocket.h"   // the TCPSocket class from the Practical Sockets page

// Send the raw bytes of a float over the socket.
inline void send_float(TCPSocket *socket, float f)
{
    socket->send((char*)(&f), sizeof(float));
}

// Receive sizeof(float) bytes and reinterpret them as a float.
inline float get_float(TCPSocket *socket)
{
    char buffer[sizeof(float)];
    socket->recv(buffer, sizeof(float));
    return *((float*)buffer);
}
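
For example, asking another robot for its speed could look like this (the hostname and port are made up, and both machines are assumed to use the same float representation and byte order):

TCPSocket socket("robot-laptop-2", 4000);   // hostname and port are made up
send_float(&socket, my_speed);              // tell the other robot our own speed
float their_speed = get_float(&socket);     // read back its speed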