Sunday, October 23, 2011

PhD Fellowship Award by IBM on CP topics

Patrick Albert (IBM) just announced the IBM PhD Fellowship Award for topics related to R&D on optimization at IBM, in particular the following two topics:
- Fine grained, constraint-level multi threading in CP solvers.
- Just-in-time compilation techniques for CP solvers.
For more info, contact him and/or consult this page.

Thursday, October 20, 2011

2nd DHC to Pascal Van Hentenryck

After the University of Louvain, it is now the University of Nantes that recognizes the contributions and impact of Pascal Van Hentenryck by conferring on him the title of Doctor Honoris Causa (more info here).
This is always good news for the visibility of CP. Congratulations to him!

Thursday, October 6, 2011

CP solves 99% of the problems

Laurent Perron gave an interesting talk at CP'11. Laurent was in charge of ILOG CP Opt and is now at Google developing the OR department. During his talk he made this remark: "ILOG CP was an expensive product, thus we only had complex problems that were very difficult to solve. When a product costs a lot of money, companies are interested in it only when they have very complex (and difficult) problems. Or-tools (the Google CP solver) is mainly used internally by Googlers, most of the time for solving easy problems. Almost all the problems that we solve every day are easy." I also remember Laurent explaining to me that he received requests for solving "simple" questions like: enumerate all the possible combinations of these variables. If you need to write such code from scratch, it is not an easy task at all. In fact, some people would like a simple tool for writing things like "for each combination of values in this array satisfying this constraint, do something". In this case, CP is really a good answer.
So maybe the answer about the problems solved by CP is the one proposed by Laurent and verified at Google: 99% of the real-life problems arising everywhere :-)

Thursday, September 22, 2011

The sailing experience with CP

This is a very nice blog post written by Nathan Brixius here. He compares the people who build solvers to shipbuilders, the people who write models to sailors, and the end users of applications to passengers. The problem with CP is that the boat is so difficult to sail that, to be a good sailor, you also need a very good knowledge of shipbuilding, so I'm not sure the comparison holds for CP.
My opinion is that CP is probably more like a Formula One car. Very cool, very fast, but... don't tell me that Michael Schumacher doesn't know anything about the internals of his car (the only problem is that there is no place for passengers in a Formula One car ;-)).

Monday, September 5, 2011

The CP summer Mercato

After Philippe Gilbert (cycling) joining BMC Racing Team and Romelu Lukaku (soccer) joining Chelsea, another great Belgian transfer happened during the summer mercato.
Leading computer scientist Pascal Van Hentenryck joins NICTA. It is very good news for the field of CP that such people will work together. More details in the press release.

Sunday, July 3, 2011

RAII and backtracking

Conventional wisdom holds that C++ is a powerful language, but difficult to use and potentially unsafe. In fact, this is wrong: in some respects, only languages like C++ make it easy to write safe applications.
Why? This is mainly due to RAII, which stands for Resource Acquisition Is Initialization. You can have a look at Wikipedia.

Here is the idea. Consider a mechanism that is implemented by two opposite actions. For instance, you want to manage a file: you need to open it and to close it. You want to manage resources: you need to acquire and release them. You need to manage memory: you need to allocate and deallocate it.

Now, imagine that a problem can happen in your program and you would like to be able to handle it. In other words, your code runs normally and a problem occurs. You need to catch it and continue the program in a safe way. The main issue is that you need to abandon the current process before applying the second action of your mechanism: for instance, you need to close the files or release the resources you acquired. The main difficulty is that the error may happen anywhere in the program, so it is not easy to know which files are open, which resources are held... and how to access this data.

C++ offers a nice way to deal with such a problem: you can easily respect the RAII principle, which is "the object that performs the first action also performs the second one". The first action is performed in the constructor and the second one in the destructor. If a problem occurs, an exception is thrown and the execution goes back up until a catch is found. The advantage of C++ is that the destructors of all the objects that are no longer valid will be called. Therefore, the files that have been opened will be closed, the resources will be released, and so on... In addition, this is performed automatically, so you don't have to worry about it.

You can't do that in Java, because there is no real equivalent to destructors. There is no guarantee that the finalize method will be called, and no guarantee about the order in which such methods are called.

This is a strong advantage for C++.

C# addresses this with the IDisposable interface and the using statement.

Ok, but what is the relation with CP?

In CP we have to manage the wipeout event (the failure), that is, when an inconsistent state is met. This happens, for instance, when a domain becomes empty. In this case the current state is inconsistent and we need to stop immediately. Thus, we face exactly the problem I explained: it involves all the local objects that have been defined, as well as memory management.

There are usually three ways to deal with this in your code:
- You use C++ and exceptions. Unfortunately, exceptions are a slow mechanism.
- You use C++ and setjmp/longjmp. Setjmp is faster than exceptions. However, it is not C++ compliant because the destructors are not called. This method is used in a lot of solvers (like Gecode or or-tools), mainly because it is fast. Unfortunately, it does not respect the RAII principle: you will have to manage some parts of the code by hand.
ILOG Solver recommended against using destructors and mentioned that destructors would not be called for objects allocated on the solver heap. Maybe this is the reason...
- You rely on explicit tests. You add a lot of "if" statements in your code to check that no failure has occurred. Unfortunately, this does not respect the RAII principle either...

Hence, it seems that there is no perfect solution.
In fact, there is a fourth solution, better than the previous ones and almost perfect, but I keep it for me :-)

Friday, July 1, 2011

List of accepted papers to CP2011 is out

This conference is the main international conference on the subject of constraint programming. The list of accepted papers is available here.

Thursday, June 30, 2011

Answers to the LP/IP Quiz are online

You can find the slides of Martin Grötschel here.
Thank you to Martin Grötschel and the CPAIOR11 team for making them available.

Wednesday, June 29, 2011

The LP/IP History viewed by Martin Grötschel in 24 Questions

The concluding talk of CPAIOR this year was given by Martin Grötschel who asked 24 interesting questions about the history of LP and IP.

The questions about LP were:
  1. Who described the first linear equations solver?
  2. Who had the first LP solver (without knowing it)?
  3. Who formulated the first LP instance?
  4. Who described the first LP solver?
  5. Who described the first LP solver with impact in practice?
  6. Who described the currently most frequently used LP solver?
  7. Who implemented the first commercial LP code?
  8. Who received a Nobel Prize for LP?
  9. Who proved expected polynomial running time of the simplex method first?
  10. Who described the first polynomial time linear equations solver?
  11. Who had the first polynomial time LP solver (without knowing it)?
  12. Who described the first polynomial time LP solver?
  13. Who described the first polynomial time LP solver with practical impact?
  14. Who described the currently most frequently used barrier LP solver?
  15. What is the state of the art in LP solving?
and the questions about IP were:
  1. Which is the most important paper on integer programming?
  2. Which is the most important paper on combinatorial algorithms?
  3. Who described the first cutting plane algorithm?
  4. Who described the first branch&bound algorithm?
  5. Who described the first branch&cut algorithm?
  6. Who described the first column generation algorithm?
  7. Who implemented the first commercial IP code?
  8. Who described the first polynomial time IP solver?
  9. What is the state of the art in MIP solving?
I wonder what the questions for CP could be:
  1. Who described the first CP solver?
  2. Who had the first CP solver (without knowing it)?
  3. Who formulated the first CP instance?
  4. Who described the first CP solver with impact in practice?
  5. Who described the currently most frequently used CP solver?
  6. Who implemented the first commercial CP code?
  7. Who received a Nobel Prize for CP? (I'm not sure we have one ;-) )
  8. Who described the first backtracking search?
  9. Who described the Branch and Bound for CP?
  10. Who described the first global constraint and its filtering algorithm?
  11. Who described/solved the first scheduling problem with CP?
  12. ...
Any suggestions or remarks on what the questions/answers could be for CP are very welcome.
Probably some answers can be found in the talk given by Pascal Van Hentenryck at CPAIOR'08: "30 Years of Constraint Programming". Maybe CP is simply too young and it doesn't really make sense yet... I like to think that the major discoveries in CP are still to come ;-)

Wednesday, June 22, 2011

Google or-tools new developments

Laurent just sent an update on the or-tools mailing list:

- linear assignment (including dimacs challenge support): A. V. Goldberg and R. Kennedy, "An Efficient Cost Scaling Algorithm for the Assignment Problem." Mathematical Programming, Vol. 71, pages 153-178, December 1995.
- Min cost flow: R.K. Ahuja, A.V. Goldberg, J.B. Orlin, and R.E. Tarjan, "Finding minimum-cost flows by double scaling," Mathematical Programming, (1992) 53:243-266. http://www.springerlink.com/index/gu7404218u6kt166.pdf
- Max flow (many references, see graph/max_flow.h for details).
- SCIP support (see scip.zib.de). We will make a separate announcement on how to compile scip to be used in or-tools.
- Deviation constraint in the Constraint Programming solver: Pierre Schaus et al., "Bound Consistent Deviation Constraint", CP07.
- Initial support for nogood management in the CP search tree.
- 30-50% speedup on large sums in constraint programming.

Tuesday, June 14, 2011

My PhD thesis

Last week I attended JFPC 2011. I was very happy to meet some people I hadn't seen for a long time (more than 15 years!). Philippe Vismare presented a nice study about common subgraph identification.
Philippe and I were in the same office when we were PhD students. I also studied the common subgraph problem during my PhD. At JFPC, I also discussed with some people things that I wrote in my thesis, and they complained because my thesis is no longer available. Thus, I decided to scan it and put it online. You can find it on my webpage (be careful, it is a 75MB PDF):
http://www.constraintprogramming.net/people/regin/papers/these.pdf

Unfortunately, it is in French...

I hope that this document will be more cited now :-)

Friday, June 10, 2011

The Java/Scala choice to implement CP Solvers

I recently started to implement a solver in Java during my free time. I chose Java for several reasons:
- it is still by far the most popular language
- ease of integration and deployment of a Java project, plus portability
- most of my colleagues at n-Side know Java and are less familiar with C++
- the development tools (Eclipse/IntelliJ) are wonderful
- a good tradeoff between speed and ease of modeling

Of course, the price to pay is this factor of 3. I really like the point of view of Nate Brixius (developer of Solver Foundation) about that. Here are a few extracts from his blog post:

"I did standard critical path scheduling in both C# and C++ and found that the C# version was only 10-15% slower. We went with C# and in the course of development we were able to find algorithmic improvements that resulted in a faster implementation that the original C++ codebase. "

"Many programmers (myself included) are much more productive writing C# code. This gives me more time to investigate possible algorithmic improvements, write good unit tests, etc. There are opportunity costs!"

This is also true for CP solvers: what really makes a difference when solving a problem with CP is the functionality available in your solver: the modeling facilities, the global constraints available, how easy it is to write a custom search, to do LNS, restarts, parallelization...

I'm certainly not the first to have chosen Java to develop a CP solver. Two popular open-source CP solvers are implemented in Java: Choco and Jacop. There is also a standardization project for solvers in Java.


One of my colleagues at n-Side (Gilles) is a real "Scala enthusiast" and made me discover this wonderful language.
Since then, I no longer think that writing a model in Java is very convenient...
The really good thing about Scala is that its syntax is so flexible that you can use it to write DSLs (I find Scala much better than Python for that).
But the other cool thing about Scala is that it is at least as efficient as Java.
A comparison between C++, Java, Scala and Go on an algorithmic benchmark, made by Google, was published at Scala Days a few days ago:
Scala was about 3 times slower and Java 5 times slower than optimized C++ code.
Of course, I didn't throw away my Java implementation, but I started a Scala wrapper on top of it for fun (interoperability between Scala and Java is perfect in both directions). I chose a syntax quite close to OPL/Comet because I'm used to it and find it very clear. Here is what the queens problem looks like in my Scala wrapper:

object Queens extends Model {
  def main(args: Array[String]) {

    val cp = new CPSolver()

    val n = 6 // number of queens
    val Queens = 0 until n
    // variables
    val queens = for (i <- Queens) yield CPVarInt(cp, 1 to n)

    var nbsol = 0
    cp.onSolution {
      nbsol += 1
    }

    cp.solveAll subjectTo {
      cp.add(alldifferent(queens))
      cp.add(alldifferent(for (i <- Queens) yield queens(i) + i))
      cp.add(alldifferent(for (i <- Queens) yield queens(i) - i))
    } exploring {
      // exploration of the search tree
      new Binary(queens)
    }

    // print some statistics
    println("#sol", nbsol)
    cp.printStats()
  }
}

Again, it seems that I'm not the only one to have made that decision. The Jacop team also adopted the same approach and started a Scala wrapper for their solver. Here is the queens model in Jacop/Scala:

object Queen extends Application with jacop {

  val n = 50

  val q: List[IntVar] = for (i <- List.range(0, n)) yield new IntVar("q" + i, 0, n)

  def noattack(i: Int, j: Int, qi: IntVar, qj: IntVar) = {
    qi != qj
    qi + i != qj + j
    qi - i != qj - j
  }

  for (i <- 0 until n; j <- i + 1 until n) noattack(i, j, q(i), q(j))

  val result = satisfy(search(q, first_fail, indomain_middle))

  if (result)
    q.foreach(qi => {
      for (i <- 0 until n)
        if (qi.value() == i) print(" # ") else print(" . ")
      println
    })
  else println("No solution")
}



I think we may see even more Scala wrappers for CP solvers in the near future...

Tuesday, June 7, 2011

Cloud and iCloud

I am currently more and more involved in cloud computing management, mainly because I am a member of the Aeolus ANR project (see http://www.aeolus-project.org). The university where I work has also just hired Fabien Hermenier, who comes from the École des Mines de Nantes and was one of the developers of the Entropy system. Fabien has some very interesting cloud management problems, and he has modeled some of them using CP. I hope to show that CP is a really efficient technique for solving these problems, mainly because it can easily integrate new additional constraints.

At the same time, Steve Jobs presented iCloud. I think this is really a revolution in cloud computing, or maybe a revolution in marketing. Here is his idea: if you have music on your computer that you bought from iTunes, you will now be able to access it from the iTunes music store! Great, you still have the right to use what you paid for. But the true revolution of iCloud is the following: if you have music on your computer that you did not buy from iTunes (for instance, you ripped a CD), then you will be able to access it from the iTunes music store for only $25 a year! They call this iCloud, and as you can see on this slide



you no longer have to wait for the long upload like with Google or Amazon! In fact, they just need to upload the MP3 tags! That's a new way to see cloud computing. Apple just needs to know your MP3 tags and upload them: how many lines of code does that take? 100? That's a revolution.
At the same time, they avoid a lot of problems like memory consumption, transfer time and so on... Note that they still need a few minutes to process your MP3 tags.

In addition, you buy your music one more time...

For sure, iCloud is different...

Monday, June 6, 2011

Blind reviewing

Here is a review I just received:
"The paper should be resubmitted with more references to the existing work about the All-Different constraint"

Thursday, June 2, 2011

Open Source and Originality

Last week, a lot of people (including Pierre, Thierry and myself) attended the CP-AI-OR conference in Berlin.
We had some discussions about a paper implementing goal programming in an open-source solver. I wondered what the scientific contribution was, and I asked the question. In fact, the authors were not really able to answer it. It was a presentation associated with an abstract, so I think it is not really important.

I am more interested in one argument given in response: "there is a scientific contribution because this is new for open source". We discussed this point. Here are some remarks; I am curious to know if you have any others:

First, it means that all existing "proprietary" work has absolutely no value because it is not open source! This is a strange conception of originality. I also wonder why open source has several licenses which forbid reproducing the open-source concept or code (because rewriting is not equivalent to originality)...

Then, some people suggested that a minimum of honesty would be to present the ideas in this way:
"Hello all, I am going to present an implementation, in my open-source solver, of some work that has been carried out in several non-open-source solvers. In addition, you need to know that these ideas were clearly documented in the other products. Therefore, our job was mainly to integrate this work into our open-source product."

I think it is an interesting point of view. However, I guess it would be harder to have a paper accepted with such an introduction.

Finally, note that it is not mandatory to publish papers...

Friday, May 27, 2011

CP arrives in business schools!

One difficulty faced by new CP users is that you need programming skills to use it, because all solvers' APIs can only be used from programming languages. This is probably the reason why, in business schools, the simplex method is used almost exclusively to illustrate optimization (you can solve linear problems in Excel!).

But this will change since AIMMS (with the help of Willem-Jan van Hoeve) has decided to add a CP modeling API as well (paper available here). I think the version is still in beta and they are looking for people to test it on AIMMS' Google group.

I believe it is very good for disseminating this great technology to make it available from high-level modeling tools like AIMMS (only a few clicks and a few numbers to encode, and your CP model is ready). Now professors in business schools no longer have an excuse for not teaching CP ;-)

Actually, this may only be the beginning of something: Robert Fourer (creator of the very popular AMPL) was at the CPAIOR conference that just finished today. Maybe I am speculating, but we may see CP coming to AMPL soon as well. After all, this plan to extend AMPL for CP has been around for a long time. Robert Fourer answered about this extension on the AMPL mailing list last month: "We have hopes that this long-delayed project will move forward in the next year or so."

As you see, only good news for CP, because if AMPL and AIMMS offer CP, it will reach many people from the OR/INFORMS community.
The future will tell us whether those dreams come true and CP gets the popularity it deserves... In the meanwhile, we can only encourage the AMPL team to build this CP extension.

Friday, April 15, 2011

Informs Analytics, some feedback



I'm just back from INFORMS Analytics. I was there with four of my colleagues from n-Side. We were presenting posters about our products (this is a picture of my poster).

It was really a nice experience and the organization was great. If you sometimes feel alone or isolated doing optimization... go there! You'll see that analytics (the new buzzword for OR) is used in every big organization.

It was also fun to put a face on names that you know from the Internet (like Nathan Brixius, who has a blog on Microsoft Solver Foundation, Bob Fourer from AMPL, and many others from OR-Exchange).

I was also very impressed by the presentations on the Knitro solver (nonlinear continuous optimization) given by Richard Waltz. Continuous optimization is a technology that we use a lot at n-Side, and we learned a lot there thanks to Richard!

I also attended a presentation given by Greg Glockner (Gurobi) on how to debug and detect numerical issues in a MIP. He is a great presenter! I didn't know, for instance, that if you write a MIP constraint like y <= 10000000000 x with x a binary variable, you can get into trouble and have y taking positive values even without forcing x to 1, due to rounding errors. I found it a bit funny. CP doesn't have those kinds of numerical issues because it is discrete by nature.

Of course, the battle of MIP solvers was more fierce than ever (FICO, CPLEX, Gurobi, ...). It seems that they are fighting for a few percent of speedup on some benchmarks (the Mittelmann benchmarks among others). Honestly, I'm not sure it really matters to the end user. First, we don't even know what kinds of problems are present in those benchmarks. Second, what I would like to see is how easy it is to do column generation, branch and price, or add a new cut...

The Edelman award was really impressive. A bit like a Hollywood show http://www.flickr.com/photos/informs/5611418031/in/photostream The winner was Midwest ISO. They converted a local electricity market into a larger regional one, resulting in lower electricity costs and a more efficient market, all while taking the constraints of the network into account. It was really great for me and my colleagues to see this winner because we have a very similar product at n-Side, where we are in charge of coupling the European markets.

My main regret was that CP doesn't have the status it should have at INFORMS. The only place where I saw a CP application was on the Monsanto poster of my friend Ravichandran Venkatesh. They use CP to solve a kind of hybrid placement/scheduling problem for greenhouses.

The reason may be that CP is too focused on the AI community and less on INFORMS. This is sad because many of the problems solved there could be solved by CP or by hybridization with CP. For instance, I saw a presentation given by FICO where they try to build optimal planograms for supermarkets. A planogram is something like that:

They use column generation for that. Great! But they have a lot of sequence-like constraints (regular, stretch...). So I told them that CP would be very appropriate for that.

In conclusion, it would be nice to see more CP there in the future. For instance, I would enjoy seeing Google win the Edelman award by solving a real application with its new CP solver ;-).

Friday, April 8, 2011

Yes we can … optimize!

Recently, I heard from someone (whom I was trying to convince that CP is cool): "yes, but you know, most of the time we have an objective function...". This person has a very strong MIP background, and it's not the first time I have heard this belief. Needless to say, this is wrong: no, CP is not only for satisfaction problems.

CP usually solves optimization problems with branch and bound (B&B), optimizing an objective variable that has its own domain. In short, this is how it works:

When a solution is found, a constraint is dynamically added (without restarting the search) such that the next solution found is strictly better. The last solution found is, by construction, optimal (correct me if I'm wrong, but I think this adaptation of B&B to CP was imagined by Pascal Van Hentenryck).

It is also completely false to say that CP doesn't use lower bounds. Usually the objective function/variable is linked into one (or several) global constraints computing very good lower bounds on it (see for instance the paper by Jean-Charles: "Arc consistency for global cardinality constraints with costs"). With this kind of constraint you can solve TSP problems with CP. Here is a small demo on Google Maps I made some time ago: http://travellingsalesmanproblem.appspot.com/ It is a bit slow due to the time needed to collect the distance matrix :-(

Friday, March 18, 2011

New Java wrapper for the Google solver

Laurent just informed me that they have finished a Java wrapper for the Google solver.
Here is an example on the trunk for RabitPheasants.
It seems that they chose the approach (like Gurobi) of letting users code their models in their favorite language.

Tuesday, March 8, 2011

Bob's Sale Problem

This is a very nice and interesting problem for CP, posted recently on Stack Overflow:

"Bob has a store, and wants to do a sale. His store carries a number of products, and he has a certain integer quantity of units of each product in stock. He also has a number of shelf-mounted price labels (as many as the number of products), with the prices already printed on them. He can place any price label on any product (unitary price for one item for his entire stock of that product), however some products have an additional restriction - any such product may not be cheaper than a certain other product.

You must find how to arrange the price labels, such that the total cost of all of Bob's wares is as low as possible. The total cost is the sum of each product's assigned price label multiplied by the quantity of that product in stock."


This can be modeled easily with some precedence constraints and a global alldiff constraint with costs (see Filippo Focacci, Andrea Lodi, Michela Milano: "Cost-Based Domain Filtering", CP 1999, for the filtering algorithm).

As soon as I find the time to do it, I'll generate some instances to test it...


Friday, February 25, 2011

Ten years of CPAIOR

The Hybrid Optimization book about the ten years of CPAIOR is available!

http://www.springer.com/mathematics/book/978-1-4419-1643-3

I recommend buying it! Especially because I wrote a (73 pages!) chapter about global constraints :-)