Robust Optimization
Dimitris Bertsimas gave a talk here today on robust optimization. One question he asked was (paraphrasing) "What do you do when reality refuses to match up to the model?", which I think is a great question. So much of what we do seems to be fragile (think of the cascading effects of a snowstorm in Chicago stranding travelers in Miami) when we know that our models are based on data that is only an approximation to reality. Robust optimization (roughly, optimizing under the assumption that no more than a certain number of data points are wrong, and each is wrong by no more than a fixed amount) is one way to attack this. Stochastic optimization is another. I am not sure we have found the right method yet (though Dimitris's work is extremely impressive!).
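To make that parenthetical description concrete, here is a minimal sketch of the budgeted-uncertainty idea in the spirit of Bertsimas's work (the toy data is my own, not anything from the talk): a single linear constraint whose coefficients can each drift by a known amount, with at most Gamma of them wrong at once, can be protected with a handful of auxiliary variables, and the robust counterpart is still just a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (made up for illustration): maximize c @ x  s.t.  a @ x <= b, 0 <= x <= u,
# where coefficient a_j lies in [a_bar_j - a_hat_j, a_bar_j + a_hat_j] and
# at most Gamma of the coefficients deviate from their nominal values at once.
c     = np.array([4.0, 3.0, 5.0])
a_bar = np.array([2.0, 1.0, 3.0])   # nominal coefficients
a_hat = np.array([0.5, 0.3, 1.0])   # maximum deviations
b     = 10.0
u     = np.array([3.0, 3.0, 3.0])
Gamma = 2                           # "no more than Gamma data points are wrong"

n = len(c)
# Decision vector is [x (n), p (n), z (1)].  Robust counterpart of a @ x <= b:
#   a_bar @ x + Gamma*z + sum(p) <= b
#   a_hat_j * x_j - p_j - z      <= 0   for every j
A_ub = np.zeros((1 + n, 2 * n + 1))
b_ub = np.zeros(1 + n)
A_ub[0, :n]      = a_bar
A_ub[0, n:2 * n] = 1.0
A_ub[0, -1]      = Gamma
b_ub[0]          = b
for j in range(n):
    A_ub[1 + j, j]     = a_hat[j]
    A_ub[1 + j, n + j] = -1.0
    A_ub[1 + j, -1]    = -1.0

obj    = np.concatenate([-c, np.zeros(n + 1)])            # linprog minimizes
bounds = [(0, uj) for uj in u] + [(0, None)] * (n + 1)
res    = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("robust x:", res.x[:n], " worst-case objective:", -res.fun)
```

The protected problem stays in the same class as the original, here an LP, just slightly larger.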
5 Comments:
One of the major reasons that this robust optimization approach hasn't been widely accepted is that it's difficult for modelers to implement. It would be very useful if someone came up with a way to interface modeling languages such as GAMS or AMPL to software for robust optimization. Even a MATLAB toolbox at the level of the LMI toolbox for linear matrix inequalities would be useful.
Some of robust optimization seems ideal for this sort of interface since, for some problem types, the complexity does not increase (a robust MIP is still a MIP, etc.). The work seems to be mainly in the interface: how to describe the instance.
Modelling of robust optimization problems has been on my todo list for the MATLAB modelling language YALMIP since 2002 or something (and some small code snippets are already in there, but nothing finished).
It will be implemented, but I am still trying to settle on the feature set and how extensive the support should be (we, i.e. control-oriented people, often address LPs and QPs with additive uncertainty, so that is probably where it will begin).
My thesis dealt with control problems involving uncertainties that could be handled using SDP relaxations, so that is probably the next step.
I am a little disappointed with the intensity with which "young" robust optimizers were trying to pitch RO as a replacement for stochastic optimization (SO). As you correctly point out, RO is an alternative viewpoint for dealing with uncertainty, and both SO and RO have their relative merits and disadvantages. I hope the turf war subsides soon and we can get back to solving the real problem.
RO is not simply an alternative viewpoint for dealing with uncertainty. Using decision rule methods, which were long ago abandoned by SO, RO has recently been extended to approximate multiperiod stochastic optimization problems with chance constraints. Such models were not addressed in classical SO.
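For readers who have not run into the decision rules mentioned in the last comment, here is a minimal sketch (toy numbers of my own, using cvxpy purely as a convenient modeling layer, and showing the worst-case flavor rather than a chance-constrained one): the second-stage decision is restricted to be an affine function of the uncertainty, and for box uncertainty the worst case reduces to 1-norm terms, so the approximation is an ordinary convex program.

```python
import cvxpy as cp
import numpy as np

# Toy two-stage problem (all data made up for illustration):
# choose x >= 0 now; demand d(w) = d0 + D @ w is revealed, with w in [-1, 1]^k;
# then recourse y(w) >= 0 must cover the shortfall: x + y(w) >= d(w).
n, k = 3, 2
c  = np.array([1.0, 1.2, 0.9])    # first-stage unit costs
q  = np.array([3.0, 3.5, 2.5])    # (more expensive) recourse unit costs
d0 = np.array([10.0, 8.0, 6.0])
D  = np.array([[2.0, 0.5],
               [0.0, 1.5],
               [1.0, 1.0]])

x  = cp.Variable(n, nonneg=True)
y0 = cp.Variable(n)               # affine decision rule: y(w) = y0 + Y @ w
Y  = cp.Variable((n, k))

# Over the box |w_j| <= 1, the worst case of an affine expression r0 + R @ w
# is r0 plus the row-wise 1-norm of R, hence the abs/norm terms below.
cover       = [x + y0 - d0 >= cp.sum(cp.abs(D - Y), axis=1)]  # x + y(w) >= d(w) for all w
stay_nonneg = [y0 >= cp.sum(cp.abs(Y), axis=1)]               # y(w) >= 0 for all w
worst_recourse_cost = q @ y0 + cp.norm1(Y.T @ q)              # max over the box of q @ y(w)

prob = cp.Problem(cp.Minimize(c @ x + worst_recourse_cost), cover + stay_nonneg)
prob.solve()
print("here-and-now x:", x.value, " worst-case cost:", prob.value)
```

Fixing y(w) to be affine is what keeps this tractable; the price is that the rule is only an approximation of the best possible recourse.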