In this series, I looked at overengineering 'making dots on a piece of paper' by using multiple robots. In part 1 we looked at making stippling work for a single robot, and in part 2, we extended this to two robots. In part 3, we had a look at approaches that optimize the order for the single-robot case. This post is a writeup of a paper I wrote as a follow-up, which is hopefully a bit more accessible than the paper itself.

In previous posts, I looked at finding poses for robot arms to make dots, sorting the poses, and then finding paths for one robot in part 1, and for two robots in part 2. In part 3 we had a look at sorting the sequences for the single-robot case to find the order that minimizes the makespan.

In the paper, I had a look at optimizing paths and sequences for multiple robots. There are multiple caveats in the approach, which means that the solution we get out is not the optimal one, but it is better than various baselines.

The current version of the code is available on GitHub (I am planning to continue working on it), and the version of the code that was used to produce the data in the paper is here.

Problem setting

The problem setting I am interested in is mostly the same as before: we are given a set of dots, and I want to figure out how to minimize the time it takes to make all of them with multiple robots. There is one major caveat to this very generic problem setting: for now, we only consider a single pose per robot that makes a given dot (in reality, there are clearly multiple poses in which an arm can make the same dot).

The questions we need to answer are then:

  • which robot makes which dot?
  • in which order should the dots be made?
  • how does the robot need to move between the dot-making poses?

This can be formalized into an optimization problem, but since that is not solvable by any solver that I know of (and there most likely is no solver that is able to solve such a problem with optimization-based approaches), I’ll skip this here. Interested readers should definitely have a look at the paper.

I generalized the problem above slightly in the paper: I want to be able to use the approach not only for the dot-making setting, but for generic sets of tasks that can be distributed over multiple robots. In the paper, the example I use is bin-picking with two robot arms. This does not change anything major in the problem setting that we used in the previous parts.
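
To make the setting concrete, here is a minimal sketch of how the input could be represented (all names here are made up for illustration and are not taken from the actual code):

from dataclasses import dataclass

@dataclass
class Task:
    """One task (e.g. one dot), with a single precomputed goal
    configuration per robot that is able to perform it."""
    task_id: int
    goal_configs: dict[str, list[float]]  # robot name -> joint configuration

tasks = [
    Task(task_id=0, goal_configs={"arm_0": [0.1, -0.4, 1.2, 0.0, 0.3, 0.0],
                                  "arm_1": [-0.2, -0.5, 1.1, 0.0, 0.2, 0.0]}),
    # ... one entry per dot; a robot that cannot reach a dot simply has no entry
]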

Method

The core idea in the paper is that we can search over all possible orders and assignments if we find a suitable representation of the task sequence. The issue here is that we typically have a task order per robot, but no overall task order. Typical motion planners cannot really deal with this well.

We can then introduce ‘artificial’ precedence constraints that order the tasks, i.e., we introduce the constraint that task i should be done before task j. That means that we represent our sequence by simply serializing the per-robot sequences into a single sequence of the form [(robot, task), …, (robot, task)]. By doing this, we can theoretically analyze all possible sequences by looking at all possible serializations of a robot-task assignment.
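
As an illustration (again with made-up names), serializing per-robot sequences into one global [(robot, task), …] sequence while keeping each robot's internal order could look like this:

import random

per_robot = {
    "arm_0": [3, 0, 5],   # task ids, in the order arm_0 should do them
    "arm_1": [1, 4, 2],
}

def serialize(per_robot_seqs, rng=random):
    # Interleave the per-robot sequences randomly, but keep each robot's
    # own order intact; this implicitly encodes the precedence constraints.
    remaining = {r: list(tasks) for r, tasks in per_robot_seqs.items()}
    seq = []
    while any(remaining.values()):
        robot = rng.choice([r for r, t in remaining.items() if t])
        seq.append((robot, remaining[robot].pop(0)))
    return seq

seq = serialize(per_robot)
# e.g. [("arm_1", 1), ("arm_0", 3), ("arm_0", 0), ...]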

Then, for computing a plan for this specific assignment and task order, we use the framework I developed in the paper here. I’ll recap the approach briefly (hopefully in a more approachable manner than in the paper):

The approach is based on decomposing the complete sequence into separate tasks. A planning problem then only deals with a single robot and a single objective (i.e., make a dot), and the tasks are planned sequentially, one by one. What I realized at some point after I wrote the paper is that this approach is a version of prioritized planning: in our setting, the prioritization is strictly given by when a task occurs in the sequence we made up before.

Additionally, we also plan an escape path for the robot to return to its home position. If we do not do this, a robot could get stuck in its position because it is blocked by another robot. This escape path is removed when planning the next task for that robot. In pseudocode:

from collections import defaultdict

def plan_paths_for_sequence(seq):
    # One list of paths per robot; defaultdict avoids key errors on the first task.
    paths = defaultdict(list)
    for (robot, task) in seq:
        # The escape path is only a placeholder; drop it before planning this robot's next task.
        remove_escape_path_for_robot(robot, paths)

        # Plan this task, treating all previously planned paths as known moving obstacles.
        robot_path = plan_task_for_robot(robot, task, paths)
        paths[robot].append(robot_path)

        # Plan a retreat to the home position so the robot does not block the others.
        escape_path = plan_escape_path(robot, paths)
        paths[robot].append(escape_path)

    return paths

The core issue is then how to take previous solutions into account when planning the path for the next task.

Since the paths that we already planned are parametrized by time, we can use any planning method that is able to take known moving obstacles into account. In this instance, I used ST-RRT*, which extends the configuration space of the robot with time, and explicitly samples both configurations and times when planning. This makes it possible to take moving obstacles into account in the collision checker, since we know where each robot is at any given time. Stated differently: since we sample a time in addition to a configuration, we can set all the other robots to the positions they occupy at that time and then do a normal collision check.
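
Roughly, such a time-aware collision check could look like this (a sketch with hypothetical helper and scene-API names, not the actual implementation):

def config_at_time(trajectory, t):
    # Where is this robot at time t? After its trajectory ends, it rests at the last configuration.
    if t >= trajectory.end_time:
        return trajectory.final_config
    return trajectory.interpolate(t)

def is_state_valid(robot, q, t, planned_trajectories, scene):
    # Place the queried robot at the sampled configuration...
    scene.set_configuration(robot, q)
    # ...and all previously planned robots where they are at time t.
    for other, trajectory in planned_trajectories.items():
        if other == robot:
            continue
        scene.set_configuration(other, config_at_time(trajectory, t))
    # With everything positioned, a standard static collision check suffices.
    return not scene.in_collision()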

To search over the possible sequences (more or less) efficiently, we use a greedy descent with random restarts. Using the planner, we can then evaluate the makespan of the sequence that we are currently looking at, and either discard it (if it is worse than the one we had previously), or accept it.

The algorithm is then really just

  • generate a sequence
  • evaluate the makespan of the sequence
  • alter the sequence

in a loop. In pseudocode:

best_seq = None
best_makespan = float("inf")  # so the first restart always improves on it

# Random restarts: each outer iteration starts from a fresh random sequence.
for j in range(N_outer_loop):
    curr_best_seq = generate_sequence()
    curr_best_makespan = plan_for_sequence(curr_best_seq)

    # Greedy descent: only accept neighbors that improve the makespan.
    for i in range(N_inner_loop):
        seq_new = generate_neighbor_sequence(curr_best_seq)
        makespan = plan_for_sequence(seq_new)
        if makespan < curr_best_makespan:
            curr_best_makespan = makespan
            curr_best_seq = seq_new

    # Keep the best sequence seen across all restarts.
    if curr_best_makespan < best_makespan:
        best_makespan = curr_best_makespan
        best_seq = curr_best_seq
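
As a rough illustration of what generate_neighbor_sequence could do (a sketch, not necessarily the exact moves used in the paper), a neighbor could be produced by swapping two entries in the global order or by reassigning a task to a different robot:

import random

ROBOTS = ["arm_0", "arm_1"]  # hypothetical robot names

def generate_neighbor_sequence(seq):
    # Return a slightly modified copy of a [(robot, task), ...] sequence.
    seq = list(seq)
    if len(seq) > 1 and random.random() < 0.5:
        # Swap the position of two tasks in the global order.
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    else:
        # Reassign one task to a different robot (assuming every robot can do every task).
        i = random.randrange(len(seq))
        robot, task = seq[i]
        seq[i] = (random.choice([r for r in ROBOTS if r != robot]), task)
    return seq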

Experiments

For the paper, I looked not only at stippling, but also at a bin-picking scenario. The scenarios we looked at were the following:

For the stippling problems, I looked at multiple versions with different numbers of robots, and for the logo specifically, I also had a look at a large version and a smaller version. The larger version parallelizes well, whereas the small version is too small to be worked on by multiple robots simultaneously.

Results

There is a full playlist of both simulations and real-world executions here. I’ll put a few gifs and videos here, but I encourage you to actually look at the playlist above.

Below is the bin-picking scenario. On the left, the greedy version (alternating between the two arms) is shown, and on the right, the optimized variant can be seen. Refresh the page to start them at the same time.

The execution of the optimized version on real robots looks like this:


I also made some quantitative comparisons in the form of a few plots that show the evolution of the makespan over the runtime, and a comparison to some baselines.

First, there are the grid scenarios with 2 and 4 robots, respectively. In the case of 4 robots, the single-arm comparison is not present since some of the points are not reachable by some of the robots. In the plots below, blue is our approach, green is a single-arm baseline, and orange is a greedy round-robin baseline. For the blue line, the dashed variant is the makespan attained in the current iteration, and the solid line is the overall minimum attained so far.

And the same plot for the bin-picking setting:

Generally, we can see that the algorithm does what it is supposed to do: it decreases the makespan over time by trying different sequences. I do believe that with more runtime, this would eventually find the minimum makespan for the planning method we use.

A quick word on planning times: for the grid with 2 robots, the planning time per task is roughly 11 seconds; for the grid with 4 robots, it is roughly 33 seconds. That is, the planning times are relatively long and (obviously) make up the major part of the overall runtime. With many robots moving in the same space, planning times are much higher than in settings where movement is relatively unconstrained.

Limitations

Suboptimality:

  • As discussed, the approach presented here is essentially a prioritized search, which is suboptimal. I recently had a look at the literature on multi-agent pickup and delivery (MAPD) here.
  • Further, the approach we took is a ‘hybrid’ one, i.e., we first run a geometric planner and then run an optimizer over its output. This generally leads to good results (especially with robot arms), but neglects input constraints and robot dynamics.

Capabilities:

  • As demonstrated, the approach is only applicable to tasks where robots do not need to cooperate. The framework we use does, however, make it possible to plan tasks that require cooperation between robots.

Speed:

  • This whole thing is relatively slow.

Outlook

The outlook is perfectly correlated with the limitations. Some things I am interested in as a direct follow-up are:

  • Making this a complete TAMP approach, i.e., dealing with the task search properly, enabling more skills (such as drawing lines), and enabling actions that require coordination between robots and actual precedence constraints (e.g., in the Tower of Hanoi).
  • I do have a few videos for demonstration purposes of the paths that are produced. However, those are executed open-loop, and sometimes the robots smash the pens into the paper (due to model inaccuracies). There are two main takeaways from this: we need to deal with disturbances during execution, and we need to deal with sensor feedback.
  • Speeding everything up.
  • Figuring out how suboptimal this approach is.