lundi 27 février 2012

Asteroid is out! A new open-source CBLS solver ...



Plenty of good news in the local search community these days!
But this one is even better since it is an open-source solver with a commercial-friendly license (LGPL), developed in Scala (my current favorite programming language).
This is a new CBLS solver developed by Renaud De Landtsheer at the Cetic.
This solver stays very close to constraint-based local search and to the models you would see in the book of Pascal Van Hentenryck and Laurent Michel.
I guess the name of this solver (Asteroid) is an allusion to the pioneering CBLS system Comet.
Cetic is a research center in computer science in Belgium. I’m very happy they are starting to invest in optimization and also to open-source their work.
Here is an n-queens model to give you a taste of it:



val N: Int = 40
val min = 0
val max = N - 1
val range: Range = Range(0, N)
val tabulength = 1
val m: Model = new Model(false)
val MaxIT = 10000

println("NQueens(" + N + ")")
val Queens: Array[IntVar] = new Array[IntVar](N)
for (q <- range) { Queens(q) = new IntVar(m, min, max, q, "queen" + q) }

val c: ConstraintSystem = new ConstraintSystem(m)

//c.Post(AllDiff(Queens)) handled through permutations
c.Post(AllDiff(for (q <- range) yield (q plus Queens(q)).ToIntVar))
c.Post(AllDiff(for (q <- range) yield (q minus Queens(q)).ToIntVar))

c.close
m.close

var it: Int = 0

val Tabu = (for (q <- range) yield -1).toArray

var longueurplateau = 0 // plateau length
while ((c.Violation.GetValue() > 0) && (it < MaxIT)) {
  val oldviolation: Int = c.Violation
  val allowedqueens = range.filter(q => Tabu(q) < it)
  val (q1, q2): (Int, Int) = SelectMin2(allowedqueens, allowedqueens,
    (q1: Int, q2: Int) => c.GetSwapViol(Queens(q1), Queens(q2)),
    (q1: Int, q2: Int) => q1 != q2)

  Queens(q1) :=: Queens(q2)
  Tabu(q1) = it + tabulength
  Tabu(q2) = it + tabulength

  it += 1
  println("it: " + it + " " + c.Violation + " (swapped " + q1 + " and " + q2 + ")")
  if (oldviolation <= c.Violation.GetValue()) longueurplateau += 1 else longueurplateau = 0

  if (longueurplateau > 5) {
    println("jump away")
    for (i <- 1 to N / 5) {
      Queens(SelectFrom(range)) :=: Queens(SelectFrom(range))
    }
    longueurplateau = 0
  }
}
println(c.Violation)
println(m.GetSolution(true))
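For readers who don’t know Scala, the same strategy can be sketched in plain Python (this is my own illustrative sketch, not the Asteroid API): keep the queens as a permutation of rows, so only diagonal conflicts can occur, and repeatedly take the best non-tabu swap, jumping away randomly on plateaus.

```python
import random

def diagonal_conflicts(queens):
    """Number of attacking pairs on the diagonals.
    queens[c] = row of the queen in column c; it is a permutation,
    so row/column conflicts are impossible by construction."""
    n = len(queens)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(queens[i] - queens[j]) == j - i)

def tabu_nqueens(n, tabulength=1, max_it=10000, seed=0):
    """Tabu search by swaps, mirroring the structure of the model above:
    best allowed swap, short tabu list, random 'jump away' on plateaus."""
    rng = random.Random(seed)
    queens = list(range(n))
    rng.shuffle(queens)
    tabu = [-1] * n
    plateau, it = 0, 0
    violation = diagonal_conflicts(queens)
    while violation > 0 and it < max_it:
        old = violation
        allowed = [q for q in range(n) if tabu[q] < it]
        best = None
        for a in allowed:
            for b in allowed:
                if a < b:
                    # evaluate the swap naively (no differentiation here)
                    queens[a], queens[b] = queens[b], queens[a]
                    v = diagonal_conflicts(queens)
                    queens[a], queens[b] = queens[b], queens[a]
                    if best is None or v < best[0]:
                        best = (v, a, b)
        if best is not None:
            violation, q1, q2 = best
            queens[q1], queens[q2] = queens[q2], queens[q1]
            tabu[q1] = it + tabulength
            tabu[q2] = it + tabulength
        it += 1
        plateau = plateau + 1 if old <= violation else 0
        if plateau > 5:  # jump away with random swaps
            for _ in range(n // 5):
                a, b = rng.randrange(n), rng.randrange(n)
                queens[a], queens[b] = queens[b], queens[a]
            violation = diagonal_conflicts(queens)
            plateau = 0
    return queens, violation
```

Unlike the Asteroid model, this sketch recomputes the violation from scratch for every candidate swap; a real CBLS engine evaluates `GetSwapViol` incrementally, which is the whole point of the technology.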

vendredi 24 février 2012

You need to understand the technology you use to build good models!

Yesterday evening I read the paper published in 4OR about LocalSolver, to understand a bit more about the technology behind it. I would like to add a few comments on the results section, in particular the results on the Steel Mill Slab problem, which I know quite well (a kind of cutting-stock problem with several cutting lengths and a color constraint on the patterns).

Just a reminder of the progress made on this problem over the last few years:

Antoine Gargani and Philippe Refalo closed the CSPLib instance with CP+LNS in 2006. Then Pascal Van Hentenryck and Laurent Michel showed that LNS is not necessary if you break the symmetries dynamically. A bit later, on more difficult instances, the best results were obtained with CBLS using an exact differential invariant that you can implement in 5 lines of Comet. Finally, Stefan Heinz had the last word, closing the most difficult instances with MIP (SCIP) by generating all the columns in advance (there are not so many).

So you can imagine how surprised I was to read in this section that LocalSolver outperforms MIP (CPLEX) and CBLS?!
But now I think I understand how the experiments were conducted (please correct me if I’m wrong). My understanding is that they used the same kind of model (0/1 decision variables) for LocalSolver, CBLS and CPLEX. In that case the results make complete sense because:
- in MIP you get all the symmetries you would get with a completely naive bin-packing formulation (nobody with a minimum background in OR would model a bin-packing problem in MIP like that);
- in CBLS the differentiation of the objective will always give you 0, which means you are making completely random moves.

Unfortunately, this is not really explained (neither the models nor the CBLS moves are given), and I don’t want readers of this article to conclude that CBLS is not great for solving complex problems (with a correct differentiation, CBLS solves the steel mill slab instance of CSPLib with about twenty greedy swaps ;-) ).
If you use CBLS you should understand when the differentiation of the objective is correct and what a differential invariant is, and you should certainly not build naive CBLS models on a 0/1 formulation, just as you would never model everything with 0/1 variables in CP.
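To make the idea of a differential invariant concrete, here is a toy Python sketch (the class and names are mine, not Comet or LocalSolver code): for a load-based objective such as the sum of squared slab loads, the effect of moving one order between two slabs can be evaluated in O(1) instead of recomputing the whole objective.

```python
class SquaredLoadObjective:
    """Toy differential invariant: maintains sum(load**2) over slabs
    and evaluates the delta of a move in O(1)."""

    def __init__(self, loads):
        self.loads = list(loads)
        self.value = sum(l * l for l in self.loads)

    def delta_move(self, src, dst, size):
        """Delta of moving an order of 'size' from slab src to slab dst,
        without committing the move."""
        a, b = self.loads[src], self.loads[dst]
        return ((a - size) ** 2 - a ** 2) + ((b + size) ** 2 - b ** 2)

    def commit_move(self, src, dst, size):
        self.value += self.delta_move(src, dst, size)
        self.loads[src] -= size
        self.loads[dst] += size
```

A local search only needs `delta_move` to rank candidate moves; with a naive 0/1 model flipping single variables, the solver cannot see this structure and the differentiation tells it nothing.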

In conclusion, optimization is quite complex, and you need a minimum understanding of how to build good models for the underlying technology you are using. I personally don’t believe that good modelers (in MIP, LS or CP) will be replaced by automatic tools any time soon if you want the best possible results.

Two great blogs where you can read nice things on the art of good modeling and good formulation are:

mercredi 22 février 2012

Local Solver is out

LocalSolver is a black-box commercial local search solver developed at Bouygues.
It's free for academics, but since I'm not an academic I cannot fully try it. Here are some thoughts after reading their website a bit:
It seems very nice and the results are quite impressive, but to me it doesn't mean that tree search methods are dead.
Approaches like LNS (large neighborhood search) are very powerful to explore complex neighborhood with tree search methods like CP.
Also, counting the number of 0-1 variables and the number of moves per second is not my favorite argument for judging the quality of the solutions I can potentially get from a solver.
I much prefer having some control to design very complex moves and my own search, as with CBLS in Comet.
The thesis of Sebastien Mouthuy "Constraint-based very Large-Scale Neighborhood Search" is very nice on this topic of building very complex moves.
Anyway, it's always good news to see new optimization technology coming out. I'm looking forward to reading feedback and user experiences from the local search community. Maybe this will become the future "cplex" of local search, i.e. a kind of standard benchmarking tool...

samedi 18 février 2012

Hexiom Puzzle

Hexiom is a very interesting game that you can play online here

“The object of Hexiom is to arrange all tiles in a way that they are adjacent to exactly the number of other tiles as they are showing.”

Two solutions are for instance:



This puzzle can be solved nicely with CP (like most problems, of course ;-)).
If you don’t want to see my model right away, think about which decision variables and constraints you would use before reading the rest of this post.

I use 3 types of variables in this model:

1) Type 1: I call them used(i), with domain {0,1}, telling whether or not there is a tile at position i.
2) Type 2: I call them nbNeighbors(i); this is a variable representing the number of neighboring tiles of position i. Its domain is {0,1,2,3,4,5,6} because tiles are hexagons.
3) Type 3: I call them card(i). These are almost the same as Type 2, but I also include 7, so their domain is {0,1,2,3,4,5,6,7}. The explanation of the constraints will make clear why we also need Type 3 variables.

Now the constraints:

If the input is for instance 3,4,0,2,2..., it means that you must have one tile with 3 adjacent tiles, one tile with 4 adjacent tiles, one isolated tile, two tiles with two adjacent tiles, and so on.

All of this can be imposed with just one Global Cardinality Constraint (GCC) posted on the card variables. The problem is that we don’t want to count a value in the card variables if no tile is placed at that position. This is why I introduced the value 7 in the domain of the card variables: if there is no tile at position i, card(i) takes the value 7 rather than the number of tiles adjacent to position i (i.e. nbNeighbors(i)). Of course, in our GCC the number of occurrences of value 7 is not constrained.
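As a sanity check, the intended semantics of that GCC can be written down in a few lines of Python (a checker on assigned values, not a propagator; the names are mine):

```python
from collections import Counter

def gcc_satisfied(card_values, required):
    """Check the Global Cardinality Constraint on fully assigned card values.

    'required' maps each value in 0..6 to its exact number of occurrences
    (missing values must occur 0 times); the dummy value 7, meaning
    'no tile at this position', is left unconstrained."""
    counts = Counter(card_values)
    return all(counts[v] == required.get(v, 0) for v in range(7))
```

For the input 3,4,0,2,2 above, the required counts would be {0: 1, 2: 2, 3: 1, 4: 1}, and any number of positions may take the unconstrained value 7.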

To enforce the value 7 when there is no tile, we have several options, but not all of them are equivalent in terms of filtering. The optimal filtering is obtained with a table constraint specifying that the triple (nbNeighbors(i), used(i), card(i)) must be equal to one of these tuples: (0,0,7), (1,0,7), (2,0,7), (3,0,7), (4,0,7), (5,0,7), (6,0,7), (0,1,0), (1,1,1), (2,1,2), (3,1,3), (4,1,4), (5,1,5), (6,1,6),
where nbNeighbors(i) is equal to sum(neighbors(i))(j => used(j)).
If you have a MIP background, you might be tempted to write something like:
card(i) == (used(i) * nbNeighbors(i) + (1 - used(i)) * 7)

Unfortunately, using sums and products this way gives weaker pruning than the table option, because products and summations mainly reason on the bounds of the domains in CP.
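The allowed-tuple table itself is easy to generate programmatically; here is a small Python sketch (my own naming) that builds it and checks a triple against it:

```python
def hexiom_tuples(max_neighbors=6, dummy=7):
    """Allowed (nbNeighbors, used, card) triples for one position:
    no tile -> card takes the dummy value 7;
    tile    -> card equals the number of neighboring tiles."""
    no_tile = [(n, 0, dummy) for n in range(max_neighbors + 1)]
    tile = [(n, 1, n) for n in range(max_neighbors + 1)]
    return set(no_tile + tile)

TABLE = hexiom_tuples()

def consistent(nb_neighbors, used, card):
    """Would this assignment satisfy the table constraint?"""
    return (nb_neighbors, used, card) in TABLE
```

This is only the ground check; the point of posting it as a table constraint in the CP model is that the solver also filters the three domains against these 14 tuples.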

In this model it is also important to place a tile on the left branch and remove it on the right branch during the search tree exploration (an optimistic heuristic). With this strategy we can find a solution to the hardest instance with only two backtracks!

Here is its solution (level 40)


The source file of this model in Scampi is available here

In the real Hexiom, I think the positions of some tiles are fixed. Of course it is not difficult to enforce that in this model (just force the value of card(i)).
An interesting variant to me would be to consider the over-constrained problem and try to place as many tiles as possible, or to collect as many points as possible (the numbers on the tiles).

Let me know if you implement a model for Hexiom in another system, so that I can attach a link to it on this blog.