The Oberseminar is a joint seminar of the RWTH research groups in the area of combinatorial optimization / discrete mathematics:
2019 | |
---|
28.03.2019 | 16:30 - 17:30 B201, Kackertstraße 7 | A. Ridha Mahjoub LAMSADE, Université Paris-Dauphine | Survivable Network Design Problems And Polyhedra
The introduction of fiber optic technology in telecommunication has increased the need for designing survivable networks, that is, networks which remain functional after the failure of certain links. More precisely, we are given a graph G=(V,E) with costs on the edges. For each node v there is a nonnegative integer r(v), called the connectivity type of v, which represents the importance of communication from and to node v. The survivability conditions require that between every pair of nodes (s,t) there are at least min{r(s),r(t)} edge-disjoint paths. The survivable network design problem is to determine a subgraph of G that minimizes the total cost subject to the survivability conditions.
In this talk we will discuss some variants of this problem. First we consider the 2-edge connected subgraph problem, the case where r(v)=2 for every node. We characterize the graphs for which the linear relaxation of the problem is integral and present some algorithmic consequences of that characterization. In particular, we show that the so-called F-partition inequalities can, in some cases, be separated in polynomial time.
Then we consider the problem when r(v)=1 or 2 for every node v. This case is of particular interest to the telecommunication industry. We present a class of inequalities, also called partition inequalities, valid for the problem in this case, and show that the separation problem for these inequalities reduces to the minimization of a submodular function and can thus be solved in polynomial time.
We finally discuss some computational issues related to these inequalities and present some extensions when length constraints are considered in the network.
|
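As a point of reference for the survivable network design problem described above, the survivability conditions admit a standard cut-based integer programming formulation (a textbook sketch, not necessarily the exact model used in the talk):

    \min \sum_{e \in E} c_e x_e
    \text{s.t.}\quad \sum_{e \in \delta(W)} x_e \;\ge\; \max\bigl\{\min\{r(s), r(t)\} : s \in W,\ t \in V \setminus W\bigr\} \qquad \text{for all } \emptyset \neq W \subsetneq V,
    \qquad\; x_e \in \{0, 1\} \qquad \text{for all } e \in E.

Here delta(W) denotes the edges with exactly one endpoint in W; by Menger's theorem, satisfying all cut constraints is equivalent to the existence of min{r(s),r(t)} edge-disjoint (s,t)-paths for every pair of nodes.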
06.02.2019 | 10:30 - 11:30 Math 203 | Lukas Schürmann Universität zu Köln | Separation of Möbius Ladder Inequalities for the Acyclic Subdigraph Problem |
22.01.2019 | 10:30 - 11:30 SeMath 008 | Sven Mallach Universität zu Köln | A Compact Linearization Technique for Binary Quadratic Optimization Problems
Given a (binary) quadratic optimization problem, two major possible
solution strategies exist: Solve the problem directly as is, or
reformulate and solve it as a mixed-integer linear program. While direct
methods are constantly evolving, (more) effective linearization
techniques are likewise of major interest, as they enable the
application of the whole toolbox available to tackle a (comparably)
well-understood problem.
This talk starts with an overview of available linearization techniques
for binary quadratic optimization problems of different shape, along
with their advantages and disadvantages. Then, a new, compact, and for
several cases provably and/or empirically strong, linearization method
is presented that exploits the structure of linear equations or
inequalities present in the original problem formulation. Being easily
applicable by hand, it may also be automated, e.g. for use within a general-purpose solver framework. Moreover, previously published integer programming formulations for some prominent combinatorial optimization problems turn out to be special cases of the proposed technique. |
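For context, the classical (non-compact) textbook linearization that such compact techniques compete with replaces each product of binary variables x_i x_j by a new variable y_ij together with three linear constraints (a generic illustration, not the method of the talk):

    y_{ij} \le x_i, \qquad y_{ij} \le x_j, \qquad y_{ij} \ge x_i + x_j - 1, \qquad y_{ij} \ge 0.

For x_i, x_j in {0,1} these constraints force y_ij = x_i x_j, so every quadratic term can be linearized at the price of one additional variable and three constraints per product, which is precisely the overhead that more compact schemes try to avoid.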
2018 | |
---|
18.10.2018 | 10:00 - 11:00 Raum 203 | Clemens Thielen University of Kaiserslautern | Duty Scheduling for Physicians at an Orthopaedic Clinic – Decision Support via Mathematical Optimization
Creating duty rosters for the physicians and nursing staff of a clinic is an important topic within personnel planning and has far-reaching effects, for example on the efficiency of clinic operations and on staff satisfaction. Especially when creating duty rosters for the medical staff, complex constraints and numerous, partly conflicting objectives frequently arise, which cannot always be taken into account adequately when rosters are drawn up by hand. This motivates the use of mathematical optimization models to compute duty rosters for physicians that comply with all legal requirements, enable efficient clinic operations and, at the same time, accommodate the wishes of the affected physicians as well as possible.
This talk presents a mathematical optimization model for creating duty rosters for the resident physicians of a large orthopaedic clinic, which has been in practical use since the beginning of 2016. Based on real data from the partner clinic, we will evaluate the quality of the computed rosters with respect to the various objectives and additionally present a decision support component for rescheduling in case of unexpected absences of physicians during the planning period.
|
08.10.2018 | 11:00 - 12:00 Raum B201 Kackertstrasse 7 | Stephen J. Maher Lancaster University | BendersSCIP - A Benders' decomposition framework for SCIP
Benders' decomposition is a popular mathematical programming technique for solving large-scale optimisation problems. While Benders' decomposition has historically been viewed as requiring a problem-specific implementation, general frameworks can provide an ideal platform for the investigation of general algorithm enhancement techniques. Such an investigation has only recently become possible, now that Benders' decomposition frameworks have been implemented in general-purpose mixed integer programming solvers, most notably SCIP. This talk will give an overview of the current features of the Benders' decomposition framework available with SCIP and GCG. Further, current work on cut enhancement techniques and the handling of integer subproblems will be discussed. The results will demonstrate the value of a general framework for the development and investigation of enhanced algorithms for Benders' decomposition.
|
04.09.2018 | 10:00 - 11:00 Raum 203 | Sebastian Wiederrecht TU Berlin | Tight Cut Decomposition of Hypergraphs
The perfect matching polytope, i.e. the convex hull of (incidence vectors of) perfect matchings of a graph, is used in many combinatorial algorithms. Kotzig, Lovasz and Plummer developed a decomposition theory for graphs with perfect matchings and their corresponding polytopes, known as the tight cut decomposition, which breaks every graph down into a number of indecomposable graphs, so-called bricks.
For many properties that are of interest for graphs with perfect matchings, including the description of the perfect matching polytope, it suffices to consider these bricks. A key result by Lovasz on the tight cut decomposition is that the list of bricks obtained is the same regardless of the choice of tight cuts made during the decomposition procedure.
We generalise the notions of a tight cut, a tight cut contraction and a tight cut decomposition to hypergraphs. By providing an example, we show that the outcome of the tight cut decomposition on general hypergraphs is no longer unique. However, we are able to prove that the uniqueness of the tight cut decomposition is preserved on a slight generalisation of uniform hypergraphs. Moreover, we show how the tight cut decomposition leads to a decomposition of the perfect matching polytope of uniformable hypergraphs.
|
04.07.2018 | 10:15 - 11:15 SeMath | Sigrid Knust Universität Osnabrück | Synchronous flow shop scheduling problems
A synchronous flow shop is a variant of a non-preemptive permutation flow
shop where transfers of jobs from one machine to the next take place at
the same time. The processing is organized in synchronized cycles which
means that in a cycle all current jobs start at the same time on the
corresponding machines. Then all jobs are processed and have to wait
until the last one is finished. Afterwards, all jobs are moved to the
next machine simultaneously. As a consequence, the processing time of a
cycle is determined by the maximum processing time of the operations
contained in it. Furthermore, only permutation schedules are feasible, i.e.,
the jobs have to be processed in the same order on all machines.
The goal is to find a permutation of the jobs such that the makespan is
minimized.
Motivated by a practical application, we investigate special cases where the
processing times of the cycles are determined by a subset of so-called
dominating machines. Furthermore, we consider problems with additional
resources and setup times.
|
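The cycle structure described above makes evaluating a given job permutation straightforward. The following Python sketch (an illustration written for this listing, not code from the talk) computes the makespan of a permutation in a synchronous flow shop, where p[j][i] is the processing time of job j on machine i:

    def synchronous_makespan(p, perm):
        """Makespan of a synchronous flow shop under job order `perm`.

        p[j][i] is the processing time of job j on machine i (0-indexed).
        In cycle t, the job at position k of the permutation occupies
        machine t - k (if that is a valid machine index); the cycle lasts
        as long as the longest operation it contains.
        """
        n, m = len(perm), len(p[perm[0]])
        makespan = 0
        for t in range(n + m - 1):          # there are n + m - 1 cycles
            cycle_length = 0
            for k, job in enumerate(perm):
                machine = t - k
                if 0 <= machine < m:
                    cycle_length = max(cycle_length, p[job][machine])
            makespan += cycle_length
        return makespan

    # example: 3 jobs, 2 machines
    p = [[3, 1], [2, 4], [1, 2]]
    print(synchronous_makespan(p, [0, 1, 2]))  # prints 11

Finding the permutation that minimizes this sum is the optimization problem studied in the talk.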
07.02.2018 | 12:15 - 13:15 B201/Kackertstraße 7 | Elisabeth Rodríguez Heck HEC Liège | Linear and quadratic reformulation techniques for nonlinear 0-1 optimization problems
We are interested in the problem of nonlinear unconstrained
optimization in binary variables. A common approach to solve a problem
of this type consists in defining first a linear or a quadratic
reformulation of the objective function and then using linear or
quadratic integer programming techniques to optimize the reformulation.
We focus on methodological aspects of these reformulations and present
several theoretical and experimental results including a comparison of
the computational performance of the different reformulation methods. |
02.02.2018 | 14:15 - 15:15 Raum 008/SeMath | Christina Büsing Lehrstuhl II für Mathematik RWTH Aachen University | Dealing with Uncertainties in discrete Optimization - a Recoverable Robust Approach
Dealing with uncertainties is an important task whenever minimization problems from real-world applications are considered. A classical way to include them in the optimization process is robust optimization: we choose a solution that is feasible in all considered uncertainty settings and minimizes the worst-case cost. However, this approach leads to overly conservative solutions and neglects the fact that solutions can be adapted. The concept of recoverable robustness takes this option into account.
In this talk I will present a special version of recoverable robustness, the so-called k-Delete recoverable robustness (k-Del RR), to deal with cost uncertainties. k-Del RR consists of two stages. In the first stage, a solution is chosen. In the second stage, after the real costs are revealed, the solution can be adapted by deleting up to k elements to reduce the cost. This is motivated by an application in telecommunication where a technology upgrade is refused to at most k customers in order to hedge against increased costs.
After a short introduction of the problem, we consider 0-1 optimization problems and their k-Delete recoverable robust versions. In the first part of the talk we prove that the k-Del RR shortest path and minimum matching problems are strongly NP-hard and provide polynomially solvable cases. In the second part of the talk we focus on exact algorithms for these problems. We derive several different IP formulations and provide preliminary computational results on their running times. |
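Using the two-stage description above, the k-Del RR counterpart of a 0-1 optimization problem with feasible set X ⊆ 2^E and cost uncertainty set U has the following min-max-min structure (a sketch of the structure described in the abstract, not the speaker's precise model):

    \min_{S \in X} \; \max_{c \in U} \; \min_{R \subseteq S,\ |R| \le k} \; \sum_{e \in S \setminus R} c_e

The inner minimization models the recovery action: once the cost scenario c is revealed, up to k elements of the first-stage solution S may be deleted, and only the remaining elements are paid for.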
2017 | |
---|
14.11.2017 | 13:00 - 14:00 Raum 208 (Rauhut) | Henning Bruhn Universität Ulm | Determinants of 0/1-matrices
How large can the determinant of an nxn matrix be if it has at most 2n entries equal to one and all other entries equal to 0? Questions like these have a long history. The oldest and probably best known is the Hadamard problem: what is the smallest upper bound on the determinant of an nxn matrix with all entries either 1 or -1?
Despite a considerable body of literature there are not only still many
open problems in this topic but also new questions that may be asked.
For many classes of natural 0/1-matrices, which perhaps come from combinatorial structures, tight upper bounds for the determinant seem to
be open. In the talk I will discuss some of these questions and some
steps towards their resolution. The proofs we found mix classic tools
from linear algebra with combinatorial insights. The talk is based on
joint work with Dieter Rautenbach. |
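The opening question can be explored directly for very small n. The following naive Python sketch (added for this listing, practical only up to about n = 3) enumerates all 0/1 matrices with at most 2n ones and reports the largest absolute determinant found:

    import itertools
    import numpy as np

    def max_det_with_at_most_2n_ones(n):
        """Brute force over all n-by-n 0/1 matrices with at most 2n ones."""
        best = 0.0
        cells = n * n
        for k in range(2 * n + 1):
            for ones in itertools.combinations(range(cells), k):
                M = np.zeros((n, n))
                for c in ones:
                    M[c // n, c % n] = 1.0
                best = max(best, abs(np.linalg.det(M)))
        return best

    print(max_det_with_at_most_2n_ones(3))  # ~2, the maximum for n = 3 (up to rounding)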
07.11.2017 | 13:00 - 14:00 B 201 | Marc Schröder Management Science RWTH Aachen University | Network Pricing: How to Induce Optimal Flows under Strategic Link Operators
Network pricing games provide a framework for modeling real-world settings with two types
of strategic agents: owners (operators) of the network and users of the network. Owners of the
network post a price for usage of the link they own so as to attract users and maximize profit;
users of the network select routes based on price and level of use by other users. We point
out that an equilibrium in these games may not exist, may not be unique and may induce an
arbitrarily inefficient network performance.
Our main result is to observe that a slight regulation of the network owners' market solves all three issues above. Specifically, if the authority can set appropriate caps (upper bounds) on the tolls (prices) operators can charge, then the game among the link operators has a unique and strong Nash equilibrium, and the users' game results in a Wardrop equilibrium that achieves the optimal total delay. We call any price vector with these properties a great set of tolls.
As a secondary objective, we want to compute great tolls that minimize total users' payments
and provide a linear program that does this. We obtain multiplicative approximation results
compared to the optimal total users' payments for arbitrary networks with polynomial latencies
of bounded degree, while in the single-commodity case we obtain a bound that only depends on
the topology of the network. |
12.10.2017 | 09:00 - 10:00 Math 203 | Waldemar Laube, M. Sc. Lehrstuhl II für Mathematik RWTH Aachen University | Dial-a-Ride Problems |
18.09.2017 | 13:30 - 14:30 SeMath | Björn Bahl, M. Sc. Lehrstuhl für Technische Thermodynamik RWTH Aachen University | Rigorous synthesis of energy systems by decomposition via time-series aggregation
The synthesis of complex energy systems usually involves large time series such that a direct optimization is not possible. In this paper, we propose a decomposition method for synthesis problems using time-series aggregation. To initialize the method, the time series is aggregated to one time step. A lower bound is obtained by relaxing the energy balances and underestimating the energy demands, leading to a relaxed synthesis problem which is efficiently solvable. An upper bound is obtained by restricting the original problem with the full time series to an operation problem with a fixed structure obtained from the lower bound solution. If the bounds do not satisfy the specified optimality gap, the resolution of the time-series aggregation is iteratively increased. The decomposition method is applied to an industrial real-world synthesis problem. The results show the fast convergence of the decomposition method, which outperforms commercial state-of-the-art optimization software.
|
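The iterative bound-tightening scheme described above can be summarized by the following generic Python sketch (the three callbacks stand in for the actual synthesis and operation models, which are of course not reproduced here):

    def aggregation_decomposition(solve_relaxed, solve_operation, refine,
                                  resolution, rel_gap=0.01):
        """Generic loop: lower bound from the aggregated/relaxed synthesis
        problem, upper bound from the full-time-series operation problem with
        the structure fixed, then refinement of the aggregation until the
        relative gap closes.

        solve_relaxed(resolution) -> (lower_bound, structure)   # hypothetical
        solve_operation(structure) -> upper_bound               # hypothetical
        refine(resolution)        -> finer time-series resolution
        """
        while True:
            lb, structure = solve_relaxed(resolution)
            ub = solve_operation(structure)
            if ub - lb <= rel_gap * abs(ub):
                return structure, lb, ub
            resolution = refine(resolution)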
19.04.2017 | 14:00 - 15:00 SeMath | Jan Simon, M. Sc. Lehrstuhl II für Mathematik RWTH Aachen University | Reconstructing Colourings of Finite Groups
|
30.03.2017 | 13:00 - 14:00 SeMath | Stefano Coniglio Department of Mathematical Sciences University of Southampton | Network routing through the Internet as a Stackelberg game
The ability of traffic flows to adapt their rate and fairly use all available resources is one of the Internet's pillars. However, this traffic characteristic, often referred to as elasticity, has so far not been considered in network routing methodologies.
We propose a new approach to network routing for elastic traffic that models the interaction between the network operator and the congestion control mechanism as a Stackelberg game, whose equilibrium computation entails the solution of a bilevel programming problem. In the first level, the network operator plays choosing a set of routing paths, while, in the second level, the congestion control scheme determines a bandwidth allocation to the flows by maximizing a measure of fairness.
We propose a bilevel programming formulation and derive a compact set of optimality conditions for the case of max-min and proportional fairness which allow for the derivation of single-level mathematical programming formulations, solvable with state-of-the-art mathematical programming solvers. We compare the two formulations and prove some key properties of them and their respective problems. Numerical experiments on different network topologies and instance sizes are reported and illustrated.
|
28.03.2017 | 13:30 - 14:15 B037 | Stephen Maher Department of Management Science Lancaster University | A column generation approach for the recursive circle packing problem
Packing rings into a minimum number of rectangles is an optimization problem that appears naturally in the logistics operations of the tube industry. Considering each rectangle as a transportation container, minimal transportation costs are given by recursively packing rings into the smallest number of rectangles. No exact solution methods exist for the recursive circle packing problem (RCPP), a more difficult variant of the circle packing problem, and the best heuristic algorithms are only able to find solutions for small instances. A cutting stock formulation of the RCPP is described that reduces the difficulty of the problem arising from its recursive nature. An exact column generation algorithm is developed by applying a Dantzig-Wolfe reformulation to the cutting stock formulation of the RCPP. The exact column generation algorithm is demonstrated to outperform previous heuristic approaches by providing improved upper bound solutions and strong lower bounds for a large collection of test instances. |
15.02.2017 | 11:00 - 12:00 SeMath | Sebastian Milz Lehrstuhl II für Mathematik RWTH Aachen University | Degree Complete Graphs |
2016 | |
---|
10.11.2016 | 13:00 - 14:00 SeMath | Sascha Kuhnke Lehrstuhl II für Mathematik RWTH Aachen University | An Adaptive Discretization Algorithm for the Waste Water Network Design and Operation Problem
The Waste Water Network Design and Operation Problem deals with a flow network where water is polluted by different contaminants. The objective is to install and operate water treatment units to remove contaminants from the water at minimum cost. These regeneration facilities have to ensure that certain industrial processing units receive water that is clean enough and that environmental regulations for effluent streams are met. Due to many bilinear terms, this water allocation problem is a non-convex MINLP and, therefore, nonlinear solvers have difficulties even finding feasible solutions for larger instances.
In this talk, we present an adaptive discretization algorithm for this problem, which approximates a solution by iteratively solving discretized MILPs and finally computes a feasible solution for the original nonlinear problem. The discretization grid of the MILPs is adapted after each iteration based on the previously calculated solution, thus allowing us to generate a suitable superstructure of the network. Afterwards, by fixing all previously calculated design decisions, the original non-convex program is solved. In many cases where general nonlinear solvers failed, this approach leads to feasible solutions of good quality within short running times. |
21.07.2016 | 13:00 - 14:00 SeMath | Matthias Walter Lehrstuhl für Operations Research RWTH Aachen University | Complete Description of Matching Polytopes with One Linearized Quadratic Term for Bipartite Graphs
We consider, for complete bipartite graphs, the convex hulls of characteristic vectors of matchings M, extended by a binary number indicating whether the matching contains two specific edges or not. This polytope is associated to the quadratic matching problem with a single linearized quadratic term. We provide a complete irredundant inequality description of this polytope, which settles a conjecture by Klein (Ph.D. thesis, TU Dortmund, 2015).
In the talk I will motivate the one-term linearization approach, give a rough idea of the proof technique and discuss properties of the relevant polytopes. |
30.06.2016 | 13:00 - 14:00 SeMath | Bernard Fortz Université libre de Bruxelles | Stackelberg games: A comparison of MIP formulations and a border patrol application
Stackelberg Games confront contenders with opposed objectives, each wanting to optimize their rewards. Decision-making parties involve a party with the capacity of committing to a given action or strategy, referred to as the leader, and a party responding to the leader's action, called the follower. The objective of the game is for the leader to commit to a reward-maximizing strategy anticipating that the follower will best respond.
Finding the optimal mixed strategy of the leader in a Stackelberg Game is NP-hard when the leader faces one out of several followers and polynomial when there exists a single follower. Additionally, games in which the strategies of the leader consist in covering a subset of at most K targets and the strategies of the followers consist in attacking some target are called Stackelberg security games and involve an exponential number of pure strategies for the leader.
A Stackelberg game can be modeled as a bilevel bilinear optimization problem, which can be reformulated as a single-level MINLP. We present different reformulations of this MINLP and compare their LP relaxations from both theoretical and computational points of view.
We will conclude with some snippets from a border control application we are developing for Carabineros de Chile. The goal is to provide Carabineros with a monthly patrol schedule along the northern border of that country exploiting our expertise on Stackelberg games such that available resources are optimally deployed.
(joint work with Carlos Casorrán-Amilburu, Martine Labbé, Fernando Ordonez, Victor Bucarey, Hugo Navarrete, Karla Rosas) |
23.06.2016 | 13:00 - 14:00 SeMath | Nils Spiekermann RWTH Aachen University | Robustifying Combined Heat-and-Power Operation using Affine Decision Rules
Considering the change in the energy supply systems in Europe, the potential of running small-sized and variable energy generators attracts a great amount of interest, especially for private investors.
In this talk we will consider combined heat and power (CHP) production units with a fixed heat to power ratio, coupled to a heat storage.
On the power side we assume market participation, where power can be bought as well as sold day-ahead, and our power balance can be momentarily restored at a cost according to the deviation.
Since, under these assumptions, only heat uncertainties can result in physical infeasibilities while other uncertainties solely affect the costs, we will focus on an uncertain heat demand.
To compensate for the heat uncertainty, we rely on a limited heat storage which suffers exponential losses over time.
The aim is to find a robust production plan, consisting of the energy production as well as the day-ahead market activities, which is feasible for all realizations inside an uncertainty set built around a given heat forecast using historical data.
Here, the problem is formulated as a two stage mixed integer linear program (MILP) for both the discrete and gamma-robust uncertainty sets.
A comparison between a static solution approach and one based on affine rules is drawn considering realized robustness as well as computational time. |
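For readers unfamiliar with affine decision rules: instead of optimizing over arbitrary recourse functions, the adjustable second-stage decisions are restricted to affine functions of the revealed uncertainty, e.g. of the heat-demand deviations xi (a generic sketch of the concept, not necessarily the exact rule used in the talk):

    y_t(\xi) \;=\; y_t^0 + \sum_{s \le t} Y_{t,s}\, \xi_s ,

where the coefficients y_t^0 and Y_{t,s} become decision variables of a single, larger MILP, and the restriction to past periods s ≤ t keeps the rule non-anticipative.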
15.06.2016 | 12:15 - 13:15 B201 | Vera Weil RWTH Aachen / Universität Köln | Bounding the difference between the maximum degree and the clique number
We consider the hereditary graph class where the difference between the maximum degree and the clique number is bounded by a constant k. We prove that for every k, the number of minimal forbidden induced subgraphs is finite.
Hence, for every k, deciding if a graph is a member of this class lies in P (joint work with O. Schaudt, University of Cologne).
We further consider two generalizations: On the one hand, we look at those hereditary graph classes where the difference between the maximum degree and a monotone graph parameter is bounded by a constant (a graph parameter p is called monotone if for every induced subgraph H of a graph G, p(H) is less than or equal to p(G); examples are the clique number, the chromatic number, and the independence number).
On the other hand, observe that the class considered at the beginning can also be described as the class of graphs where, hereditarily, the maximum degree is at most the clique number plus a constant. We ask whether the finiteness of the set of minimal forbidden induced subgraphs can still be guaranteed if we allow other functions of the clique number to bound the maximum degree. |
TBD | 13:00 - 14:00 SeMath | Sigrid Knust Universität Osnabrück | Synchronous flow shop problems
A synchronous flow shop is a variant of a non-preemptive permutation flow shop where transfers of jobs from one machine to the next take place at the same time. The processing is organized in synchronized cycles which means that in a cycle all current jobs start at the same time on the corresponding machines. Then all jobs are processed and have to wait until the last one is finished. Afterwards, all jobs are moved to the next machine simultaneously. As a consequence, the processing time of a cycle is determined by the maximum processing time of the operations contained in it. Furthermore, only permutation schedules are feasible, i.e. the jobs have to be processed in the same order on all machines. The goal is to find a permutation of the jobs such that the makespan is minimized. Motivated by a practical application we investigate special cases where the processing times of the cycles are determined by a subset of so-called dominating machines. Besides complexity results we present exact MIP formulations and heuristic solution algorithms. Furthermore, we consider problems with additional resources and setup times.
|
TBD | 14:15 - 15:15 SG 23 | Martin Weibelzahl Universität Erlangen | A Bilevel Optimization Model for Fair and Efficient Church Policy: An Assessment of Alternatives
Despite being one of the most important aspects of religious services, the preaching of faith becomes more and more challenging in times of priest shortage. In this paper we are the first to present a bilevel model for the planning of religious services in the church, where we focus on holy mass planning. We assume different priests of a parish cluster that decide on their optimal mass schedule on the first level. On the second level, optimal mass attendance behavior of believers is modeled. We give conditions under which uniqueness of the lower level problem is guaranteed and reformulate the bilevel problem as a mixed-integer single level problem. We also propose an integrated planner model as an optimal planning benchmark. Our bilevel model can be seen as a valuable tool for assessing different church policy regulations in the context of religious service production. In this paper we (i) analyze different objectives that priests may pursue when they decide on their mass plan, (ii) study different degrees of planning cooperation among priests of the parish cluster, and (iii) investigate mobility programs that enable older or immobile believers to attend a mass. As expected, mass plans that maximize fairness or priest utility may be accompanied by a welfare loss. We also show that increased cooperation among priests need not always be welfare-enhancing and that a higher mobility investment may paradoxically decrease welfare in some cases. Investment programs may additionally be accompanied by severe incentive problems. |
28.04.2016 | 13:00 - 14:00 SeMath | Sergei Chubanov Universität Siegen | Polynomial Projection Algorithms with Applications in Integer and Infinite-Dimensional Linear Optimization
This talk presents a new class of projection algorithms that leads to a polynomial algorithm for linear programming. The algorithms of this class are also applicable to integer problems and to linear optimization problems in infinite-dimensional spaces. In all of these cases, the running times of the algorithms are bounded by a polynomial in the problem size and in some further parameters. |
12.04.2016 | 16:00 - 17:00 SeMath | Stefano Coniglio University of Southampton | Computing Leader-Follower Nash Equilibria in games with many followers via optimistic and pessimistic bilevel programming
We investigate the problem of computing a Leader-Follower Nash Equilibrium. Given a normal-form game with a leader and two or more followers, a strategy profile is a Leader-Follower Nash Equilibrium if i) given the leader's strategy, the followers' strategies yield a Nash Equilibrium in the corresponding subgame and ii) the leader's strategy is such that his utility is maximized. To cope with the presence of multiple Nash Equilibria in the followers' subgame, we consider two natural (extreme) cases: the optimistic case, where the followers select a Nash Equilibrium maximizing the leader's utility, and the pessimistic case, where a Nash Equilibrium minimizing the leader's utility is chosen.
We first illustrate that computing a Leader-Follower Nash Equilibrium is NP-hard and not in Poly-APX unless P=NP, and that even deciding whether one of the leader's actions will be played with a strictly positive probability in an equilibrium is NP-hard. We also show that, in the general case, pessimistic equilibria may not be finite.
After showing that the optimistic problem can be cast as a single level mathematical program (whereas the pessimistic one is intrinsically bilevel), we propose some (mixed-integer) nonconvex formulations for the optimistic case, which we then evaluate computationally. We also illustrate a black box (heuristic) algorithm suitable for both the optimistic and pessimistic cases. For exact (up to a finite precision) solutions to the pessimistic case, we present a branch-and-bound-like algorithm based on disjunctive programming, suitable for the case where the followers are restricted to pure strategies, and illustrate its extension to the general case of mixed strategies.
This is joint work with Nicola Basilico and Nicola Gatti. |
29.02.2016 | 15:30 - 16:30 B227 | Marc Pfetsch TU Darmstadt | Compressed Sensing and Discrete Optimization
The goal of this talk is to give an overview of compressed sensing from a discrete optimization/geometry point of view. The main problem in
compressed sensing - the sparse representation problem - is to find the
sparsest solutions of underdetermined linear equation systems. This
problem is computationally hard, but can be efficiently solved by a
linear program if certain conditions are satisfied. The talk will review some well-known conditions and show that they are computationally hard
to check. The conditions can be approximated using semi-definite or
linear programs. Finally, some steps towards an exact solution of the
original sparsest representation problem will be reviewed. |
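For reference, the sparse representation problem mentioned above and its linear programming relaxation are usually written as follows (standard formulations, included here only as a reminder):

    \min \|x\|_0 \ \text{ s.t. } Ax = b
    \qquad\text{relaxed to}\qquad
    \min \|x\|_1 \ \text{ s.t. } Ax = b ,

where ||x||_0 counts the nonzero entries of x. The l1 problem (basis pursuit) becomes a linear program after splitting x into its positive and negative parts, and it recovers the sparsest solution exactly whenever conditions of the kind reviewed in the talk hold.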
22.02.2016 | 14:00 - 15:00 B037 | Dorothea Baumeister Universität Düsseldorf | Interference in Judgment Aggregation
The aim of a judgment aggregation process is to aggregate individual judgment sets of judges over possibly interconnected propositions to reach a collective outcome. In computational social choice, the complexity of changing the outcome of elections via manipulation, bribery, and various control actions, such as adding or deleting candidates or voters, has been studied intensely. In this talk I will give an overview of different forms of interference in judgment aggregation, namely manipulation, bribery, and control. A judgment aggregation scenario is said to be manipulable if one judge has an incentive to report an untruthful judgment set because this yields a more favorable outcome for him, where the distance between the outcome and the preferences of the manipulator can be measured by the Hamming distance. Besides the manipulation problem, we also investigate different forms of bribery in judgment aggregation, where an external actor seeks to change the outcome by bribing some of the judges. Furthermore, we introduce different types of control specific to judgment aggregation. In addition to classical complexity (NP-hardness) results we also obtain W[2]-hardness with respect to a natural parameter, and membership in P for restricted problem instances.
This is joint work with Gabor Erdelyi, Olivia Erdelyi, Jörg Rothe, and Ann-Kathrin Selker. |
27.01.2016 | 10:00 - 11:00 B037 | Andreas Tillmann TU Darmstadt | Machine Learning for Signal Reconstruction from Phaseless Measurements
Some signals admit an (exact or approximate) representation as a linear combination of only a few elements of a so-called dictionary. The resulting sparsity of the representation coefficient vectors can be exploited in many ways, in particular when linear measurements are available, e.g. for efficient algorithmic signal reconstruction and denoising, or for reducing the number of measurements required for reconstructability. In recent years it has moreover been shown that sparsity can also be exploited to qualitatively improve reconstructions from nonlinear measurements.
As a rule, however, dictionaries enabling sparse representations are not known in advance. For linear measurements, suitable dictionaries for specific signal classes can be obtained by machine learning from training databases. We present a new algorithm that learns a dictionary for sparse coding from certain *nonlinear* measurements and, at the same time, exploits it for signal reconstruction. We focus on the so-called phase retrieval problem, i.e. the recovery of an unknown signal from (possibly noisy) squared absolute values of a complex-valued linear transformation of the signal. Numerical results for image processing problems demonstrate that our approach can achieve significantly better reconstructions than methods that cannot exploit hidden sparsity (with respect to an a priori unknown dictionary). |
26.01.2016 | 16:15 - 17:15 B037 | Imke Joormann TU Darmstadt | Infeasibility in Flow Networks and Stationary Gas Networks
In the mathematical description of networks, infeasibilities can arise for various reasons, among them data errors, physical unsatisfiability and modelling errors. For linear systems, the two main standard approaches to isolating infeasibility are minimal infeasible subsystems (irreducible infeasible subsystem, IIS) and covers of IISs (IIS cover, IISC). An IIS is an infeasible subsystem that has a solution as soon as any one of its constraints is removed. An IISC is a subsystem of constraints whose removal from the original system ensures feasibility.
We start with a theoretical study of the basic form of a network, flow problems with supplies and demands, and establish structural results on IISs and IISCs in flow networks.
Finally, for stationary gas networks, which are modelled as mixed-integer (nonlinear) programs (MI(N)LP), we show where infeasibility plays a role and how our results for the flow case can be generalized. |
13.01.2016 | 15:00 - 16:00 B037 | Matthias Walter Universität Magdeburg | Investigating Polyhedra by Oracles
The software IPO is presented which investigates a polyhedron P given by
means of an optimization oracle, e.g., a mixed-integer hull and a MIP
solver. It detects all equations, can check adjacency of vertices, and computes some facets valid for P in exact arithmetic. The facets are produced in such a way that they are helpful in optimizing a given objective function, using the target cuts introduced by Buchheim, Liers, and Oswald in 2008. In contrast to usual convex-hull algorithms, which produce the entire description but already run out of resources in small dimensions, IPO can handle much larger dimensions. |
11.01.2016 | 13:00 - 14:00 B037 | Stefan Waldherr Universität Osnabrück | Scheduling Synchronous Flow Shops
A synchronous flow shop is a variant of a permutation flow shop in which all jobs being processed at a given time are transported simultaneously (synchronized) from one machine to the next by an unpaced transport system. The operations currently being processed start at the same time on their machines, and the jobs are only transported to the next machine once the longest operation has finished. Motivated by a cooperation with an industrial partner, we study synchronous flow shops with additional side constraints. In the talk, synchronous flow shops are introduced using the practical application as an example, and solution approaches for synchronous flow shops with resources and setup costs are presented. |
2015 | |
---|
14.12.2015 | 12:00 - 13:00 B056 | Frank Fischer Universität Kassel | A new dual relaxation approach for the train timetabling problem
In the train timetabling problem (TTP) one is given an infrastructure
network and a set of trains with predefined routes running in this
network. The goal is to find schedules for the trains so that certain
operational constraints like station capacities and headway times are
satisfied.
One of the most widely used integer programming models is based on
time expanded networks. Each train is associated with a network and
the schedule is represented by a path in this network. Coupling
constraints ensure the operational restrictions, in particular headway
and capacity constraints. For large scale instances solution
approaches are typically based on Lagrangian relaxation of the
coupling constraints. We will discuss two different dualization
approaches based on different representations of the polytope defined
by the headway constraints. The classical Lagrangian dual approach
leads to weaker bounds but has very good convergence properties when
solved by a subgradient or bundle method. In contrast, a Lagrangian
decomposition approach applied to a lifted formulation yields stronger
bounds but suffers from very bad convergence of the solution
algorithm.
In this talk we present a combined approach. It uses both descriptions
of the headway constraints in order to obtain the good bounds of the
decomposition approach and to provide the faster convergence of the
classical relaxation at the same time. However, this new model is
extremely large and cannot be handled directly. We show that this
approach can be interpreted as applying an automatically scaling
bundle method to the decomposed model, leading to an implementable
algorithm. Finally, we present some computational results illustrating
superior convergence properties of the new approach while preserving
the good bounds of the decomposition approach. |
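As background for the two dualization approaches compared above: relaxing the coupling constraints Ax ≤ b of a model min{c^T x : Ax ≤ b, x ∈ X}, where X separates into one easy subproblem per train, gives the standard Lagrangian bound (a textbook sketch, not the speaker's specific model):

    L(\lambda) \;=\; \min_{x \in X} \; c^\top x + \lambda^\top (Ax - b),
    \qquad
    \max_{\lambda \ge 0} L(\lambda) \;\le\; \min\{c^\top x : Ax \le b,\ x \in X\}.

Lagrangian decomposition instead duplicates variables and dualizes the copy constraints, which yields a bound that is at least as strong but, as described in the abstract, comes with slower convergence of the dual solver; the combined approach of the talk aims to get the best of both.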
09.07.2015 | 17:00 - 17:30 SeMath 008 | Nils Spiekermann Lehrstuhl II für Mathematik | The Firefighter-Problem with Multiple Fires - On the Survival Rate of Trees
This talk deals with the firefighter problem, a game of bounding fire outbreaks on the vertices of a graph. The aim is to maximize the surviving rate, the percentage of vertices that can be protected on average if fires break out randomly. We extend the asymptotic constant lower bound by Cai and Wang (SIAM J. Discrete Math., 2009) for the case of one fire and one firefighter on trees to an arbitrary number of fires. |
09.07.2015 | 16:15 - 17:00 SeMath 008 | Jan Simon Lehrstuhl II für Mathematik | On the Intersection of G-set colourings
Let G be a finite group and X a finite G-set. For a set of colours F we consider the set F^X of colourings c : X → F. The G-action on X induces another G-action on F^X. Let m ≥ 2. We ask for the average number of elements in X on which all colourings g_i.c (for i = 1, ..., m) coincide if (g_1, ..., g_m ) ranges over G^m. We investigate several special cases (including a generalization
of Burnside’s lemma) and discuss various applications including variations on a theorem by Jordan and bounds on the mth-power sum of the k-deck of a subset in G. |
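For completeness, the classical statement that the talk generalizes is Burnside's lemma: for a finite group G acting on a finite set X,

    |X/G| \;=\; \frac{1}{|G|} \sum_{g \in G} \bigl|\{\, x \in X : g.x = x \,\}\bigr| ,

i.e. the number of orbits equals the average number of fixed points of a group element.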
18.06.2015 | 12:15 - 13:15 B201 | Hans Mittelmann School of Mathematical and Statistical Sciences, Arizona State University | On Solving Hard Quadratic 3-Dimensional Assignment Problems from Wireless Communications
We address the exact solution of a very challenging (and previously unsolved)
instance of the quadratic 3-dimensional assignment problem, arising in digital
wireless communications. The paper describes the techniques developed to solve
this instance to proven optimality, from the choice of an appropriate mixed
integer programming formulation, to cutting planes and symmetry handling. Using
these techniques we were able to solve the target instance with moderate
computational effort (2.5 million nodes and one week of computations on a
standard PC). Further such problems with different modulations, and thus reduced symmetry, are near-optimally solved using metaheuristic and ad hoc engineering approaches. |
27.05.2015 | 14:15 - 15:15 SeMath 008 | Andreas Wierz Lehrstuhl für Management Science | Maximum Γ-robust flows over time |
22.05.2015 | 12:15 - 13:15 B057 | Kazuo Murota University of Tokyo | Auction Theory and Discrete Convex Analysis
We discuss iterative auctions where there are multiple differentiated items in multiple units and each bidder has a gross substitutes valuation. On the basis of the well-known equivalence between the gross substitutes property and M-natural concavity, we indicate a further connection between iterative auctions and discrete convex analysis. In particular, it is pointed out that the Lyapunov function is an L-natural convex function and that the behavior of iterative auctions can be analyzed systematically in terms of L-convex function minimization algorithms in discrete convex analysis.
This is joint work with Akiyoshi Shioura and Zaifu Yang. |
12.05.2015 | 16:15 - 17:15 B201 | Annika Thome Lehrstuhl für Operations Research | The Budgeted Minimum Cost Flow Problem with unit Cost
In this talk we present a bilevel minimum cost flow optimization problem. The leader problem represents the government, which intends to improve its infrastructure by reducing the travelling time on a fixed number of roads. The follower problem represents the users of the network, who want to minimize their individual travelling time.
More formally, we are given a directed graph with several sources and sinks, each having a supply or demand, respectively. There is a capacity as well as a regular and a lower cost associated with each arc of the graph. The objective in the leader problem is to find a selection of at most K arcs for which the lower cost applies to the flow sent via those arcs. The regular cost still applies on all other arcs. The follower problem consists of finding a minimum-cost flow satisfying the demand of all sinks.
In this talk, we consider the special case where there is exactly one source and no capacities on the arcs. We show that this problem is strongly NP-complete when both K and the number of sinks are part of the input. For special graphs, such as trees or graphs with a tree-like structure, we present polynomial algorithms. |
26.03.2015 | 14:00 - 15:00 B201 | Samuel Fiorini Université libre de Bruxelles | No Small Linear Program Approximates Vertex Cover within a Factor 2-epsilon
The vertex cover problem is one of the most important and intensively studied
combinatorial optimization problems. Khot and Regev (2003) proved that the problem is
NP-hard to approximate within a factor 2-epsilon, assuming the Unique Games Conjecture
(UGC). This is tight because the problem has an easy 2-approximation algorithm. Without
resorting to the UGC, the best inapproximability result for the problem is due to Dinur and
Safra (2002): vertex cover is NP-hard to approximate within a factor 1.3606.
We prove the following unconditional result about linear programming (LP) relaxations of
the problem: every LP relaxation that approximates vertex cover within a factor of 2-epsilon
has super-polynomially many inequalities. As a direct consequence of our methods, we also
establish that LP relaxations that approximate the independent set problem within any constant
factor have super-polynomially many inequalities.
This is joint work with Abbas Bazzi (EPFL), Sebastian Pokutta (GaTech) and Ola Svensson (EPFL) |
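The LP relaxation referred to above is the natural one, recalled here for orientation:

    \min \sum_{v \in V} x_v \quad \text{s.t. } x_u + x_v \ge 1 \ \text{ for all } uv \in E, \qquad 0 \le x_v \le 1 \ \text{ for all } v \in V.

Its integrality gap already approaches 2: on the complete graph the LP may set every x_v = 1/2 for a value of n/2, while every vertex cover needs n-1 vertices. The result of the talk says that no relaxation of this kind with polynomially many inequalities can do better than a factor 2-epsilon.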
24.03.2015 | 10:00 - 11:00 SeMath 008 | Leonardo Taccari Politecnico di Milano | Short-term operational planning of cogeneration systems via Mixed-Integer Programming |
18.03.2015 | 10:00 - 11:00 Pontdriesch 14/16, Raum 305 | Tom McCormick Sauder School of Business, University of British Columbia | Parametric Network Flows
We review the parametric optimization framework of Topkis, and how the parametric min cut results of Gallo, Grigoriadis, and Tarjan fit into the framework. This framework gives two main results: a Structural Property that min cuts are monotone in the parameter, and an Algorithmic Property, that one can compute all min cuts in the same time as solving the non-parametric problem. Most applications of this framework in combinatorial optimization are to 0-1 problems such as Min Cut.
We extend these results to parametric capacity and parametric rewards versions of Max Reward Flow, a max version of Min Cost Flow. We prove that the Structural Property again holds, and we adapt the Relax algorithm of Bertsekas to also get the Algorithmic Property. We further indicate how to extend the results to parametric Abstract Min Cut, and to other problems. (joint work with Britta Peis and Jannik Matuschke) |
29.01.2015 | 10:15 - 11:15 SG 23 | Julia Buwaya Lehr- und Forschungsgebiet Advanced Analytics | A flow model for the analysis of acquisition patterns of strategic agents interacting in a social network
In this presentation, we investigate a min-cost-multi-commodity-flow formulation modeling the acquisition of products by strategic agents over time. The agents communicate and influence each other in a social network.
The classical min-cost-flow problem is complicated by edge-weights representing agents' utilities. These have to be computed iteratively over time, as agents communicate and adapt their utility. Inspired by models by Krumke et al. that study the influence of agents' communication on the resulting revenue from a product seller's point of view, we investigate three base cases. Agents only positively influence each other (i.e., agents that bought a product promote buying it), only negatively influence each other, or arbitrarily influence each other. These base cases each evoke different complexities on the problem. The underlying research goal is to determine the utilities of the involved agents at the beginning of a time horizon from sales information collected at the end of the horizon. In providing this information, the model may provide a useful starting point to calibrate agent-based simulations. |
26.01.2015 | 16:00 - 17:00 Raum 203 | Isabel Beckenbach Zuse Institute Berlin | A Hall condition for normal hypergraphs
We investigate a necessary and sufficient condition for the existence of a perfect matching in normal hypergraphs. The class of normal hypergraphs strictly contains all balanced hypergraphs, for which Conforti et al. proved a Hall-type condition for the existence of a perfect matching. We show that this condition can be generalized to normal hypergraphs by multiplying vertices and give a tight upper bound on the number of times a vertex has to be multiplied. |
22.01.2015 | 10:15 - 11:15 SG 23 | Stefan Hougardy Forschungsinstitut für Diskrete Mathematik, Universität Bonn | Edge Elimination in TSP Instances
The Traveling Salesman Problem (TSP) is one of the best studied NP-hard problems in combinatorial optimization. We present results that allow us to prove that some edges of a TSP instance cannot occur in any optimum TSP tour, and we propose a combinatorial algorithm to identify such edges. The runtime of the main part of our algorithm is O(n^2 log n) for an n-vertex TSP instance. For large TSP instances our algorithm allows us to speed up the computation of exact solutions. We also present results on the integrality ratio of the subtour LP and on the approximation ratio of heuristic algorithms for the TSP. |
15.01.2015 | 10:15 - 11:15 B057 | Martin Grohe Lehrstuhl für Informatik 7 | Colour Refinement: A Simple Partitioning Algorithm with Applications From Graph Isomorphism Testing to Machine Learning
Colour refinement is a simple algorithm that partitions the vertices of a graph according to their iterated degree sequences. It has very efficient implementations, running in quasilinear time, and a surprisingly wide range of applications. The algorithm was designed in the context of graph isomorphism testing, and it is used as an important subroutine in almost all practical graph isomorphism tools. Somewhat surprisingly, other applications in machine learning, probabilistic inference, and linear programming have surfaced recently.
In the first part of my talk, I will introduce the basic algorithm as well as higher dimensional extensions known as the k-dimensional Weisfeiler-Lehman algorithm. I will also discuss an unexpected connection between colour refinement and a natural linear programming approach to graph isomorphism testing. In the second part of my talk, I will discuss various applications of colour refinement.
|
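A minimal implementation of the basic colour refinement procedure described above might look as follows (a compact Python sketch written for this listing; practical tools use far more efficient, quasilinear implementations):

    def colour_refinement(adj):
        """Iteratively refine vertex colours until they are stable.

        adj maps each vertex to a list of its neighbours. Two vertices keep
        the same colour only if no amount of iterated neighbourhood-degree
        information distinguishes them.
        """
        colour = {v: 0 for v in adj}                      # start uniformly
        while True:
            # new signature: own colour plus the multiset of neighbour colours
            signature = {v: (colour[v],
                             tuple(sorted(colour[u] for u in adj[v])))
                         for v in adj}
            ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
            new_colour = {v: ids[signature[v]] for v in adj}
            if new_colour == colour:                      # partition is stable
                return colour
            colour = new_colour

    # example: a path on four vertices -- the two endpoints end up in one
    # colour class, the two inner vertices in another
    print(colour_refinement({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))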
2014 | |
---|
18.12.2014 | 10:15 - 11:15 SG 23 | Heiko Röglin Institut für Informatik, Universität Bonn | Smoothed Analysis of the Successive Shortest Path Algorithm
The minimum-cost flow problem is a classic problem in combinatorial
optimization with various applications. Several pseudo-polynomial,
polynomial, and strongly polynomial algorithms have been developed in
the past decades, and it seems that both the problem and the algorithms
are well understood. However, some of the algorithms' running times observed in empirical studies contrast with the running times obtained by worst-case analysis, not only in the order of magnitude but also in the ranking when compared to each other. For example, the Successive
Shortest Path (SSP) algorithm, which has an exponential worst-case
running time, seems to outperform the strongly polynomial Minimum-Mean
Cycle Canceling algorithm. To explain this discrepancy, we study the SSP
algorithm in the framework of smoothed analysis and establish a
polynomial bound for its smoothed running time. This shows that
worst-case instances of the SSP algorithm are fragile and
unlikely to be encountered in practice.
We will also discuss how an extension of this result can be used to find
a short path between two given vertices of a polyhedron.
(joint work with Tobias Brunsch, Kamiel Cornelissen, and Bodo Manthey) |
11.12.2014 | 10:15 - 11:15 SG 23 | Stephanie Houben Universität zu Köln | Fleet Assignment in Air Traffic via Minimum-Cost Flow Techniques |
13.11.2014 | 10:15 - 11:15 B057 | Dirk Degel & Pascal Lutter Lehrstuhl für Operations Research | Optimizing Location Planning for Rescue Stations |
06.11.2014 | 10:15 - 11:15 SG 23 | Matthias Lampe Lehrstuhl für Technische Thermodynamik | Computer-Aided Molecular Design for Organic Rankine Cycle (ORC) Working Fluids |
30.10.2014 | 10:15 - 11:15 SG 23 | Stefano Coniglio & Martin Tieves Lehrstuhl II für Mathematik | (Gamma-Robustness for) Virtual Network Embedding |
27.10.2014 | 16:00 - 17:00 SeMath | Oliver Schaudt Institut für Informatik, Universität zu Köln | 3-Colouring graphs without triangles or induced paths on seven vertices
The computational complexity of the k-colouring problem is among the most prominent topics in
algorithmic graph theory. A notoriously difficult case is that of 3-colouring Pt-free graphs, that
is, graphs that do not contain the t-vertex path as an induced subgraph. It is not known whether
or not there exists any t such that 3-colouring is NP-complete on Pt-free graphs. Randerath
and Schiermeyer gave a polynomial time algorithm for 3-colouring P6-free graphs. Recently,
Chudnovsky, Maceli, and Zhong extended this result to 3-colouring P7-free graphs. They divide
the algorithm into two cases: graphs containing a triangle, and triangle-free graphs. Their
algorithm for the triangle-free case is quite involved and has a running time of O(n^18).
We describe an algorithm that solves the 3-colouring problem for {P7, triangle}-free graphs in O(n^7), developed independently of the algorithm by Chudnovsky et al. Joint work with Flavia Bonomo and Maya Stein. |
16.07.2014 | 11:00 - 12:00 SG 203 | Jan Simon Lehrstuhl II für Mathematik | Reconstructing Colourings of Finite Groups |
10.07.2014 | 13:00 - 14:00 SG 23 | Pascal Schweitzer Lehrstuhl für Informatik 7 | The Graph Isomorphism Problem - An Overview |
03.07.2014 | 13:00 - 14:00 SG 23 | Rudolf Müller Universiteit Maastricht | Multi-item auctions with exclusivity margin
We study the problem of finding the profit-maximizing multi-item mechanism in a setting where bidders hold two-dimensional private information: one for the value of the item being
sold, the other one for the added value margin they exhibit in case of exclusive allocation. We
require the mechanism to be deterministic, individually rational, and implementable in dominant
strategies. Due to the great demand from practitioners for simple and speedy solutions, instead
of going for optimality, we focus on heuristics that provide good revenue relative to the optimal
mechanism. Notably, we demonstrate that even the simplest mechanisms, such as selling always
exclusively or always non-exclusively, produce revenue within a constant factor approximation of
the optimal revenue. We identify two different single-dimensional relaxations of the problem, for
which we determine the optimal auction using well-known techniques. The relaxations provide
revenue bounds that can be used to evaluate the quality of heuristic auctions. We also devise a
heuristic mechanism from the class of affine maximizers and demonstrate by means of simulation
that it yields revenue very close to the upper bounds, and thus very close to optimality. |
18.06.2014 | 15:00 - 16:00 SG 202 | Karthik Chandrasekaran Harvard University | Finding a most biased coin with fewest flips
The multi-armed bandit problem is a well-studied problem with applications in bioinformatics, clinical trials, etc. A variation of the problem is to find the most rewarding arm in the fewest possible number of steps. When the rewards are Bernoulli, this is equivalent to the problem of finding the most biased coin among a set of coins by tossing them adaptively. The goal here is to devise a strategy that minimizes the expected number of tosses until there exists a coin whose posterior probability of being most biased is at least 1-delta, for a given delta. In this talk, I will present an optimal adaptive strategy for a particular probabilistic setting of the problem. I will show that the strategy is decision-theoretically optimal, i.e., it performs the best possible action in each step. This is based on joint work with Richard Karp. To appear in COLT, 2014. |
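For readers who want to experiment with the stopping rule described in the abstract, the following Python sketch tosses coins adaptively and halts once some coin's posterior probability of being the most biased reaches 1-delta. It is only an illustration under simple assumptions (independent Beta(1,1) priors, a greedy choice of which coin to toss next, and a Monte Carlo estimate of the posterior); it is not the optimal strategy from the talk.

    import numpy as np

    def most_biased_coin(true_biases, delta=0.05, mc_samples=20000, seed=0):
        """Adaptively toss coins until some coin is the most biased with
        posterior probability >= 1 - delta (Beta(1,1) priors).
        Returns (index of that coin, number of tosses used)."""
        rng = np.random.default_rng(seed)
        n = len(true_biases)
        heads = np.zeros(n)
        tails = np.zeros(n)
        tosses = 0
        while True:
            # Monte Carlo estimate of P(coin i has the largest bias | data).
            samples = rng.beta(heads + 1, tails + 1, size=(mc_samples, n))
            p_best = np.bincount(samples.argmax(axis=1), minlength=n) / mc_samples
            if p_best.max() >= 1 - delta:
                return int(p_best.argmax()), tosses
            # Simple heuristic: toss the coin with the highest posterior mean.
            i = int(((heads + 1) / (heads + tails + 2)).argmax())
            if rng.random() < true_biases[i]:
                heads[i] += 1
            else:
                tails[i] += 1
            tosses += 1

    # Example: three coins; the procedure should usually report coin 2.
    print(most_biased_coin([0.3, 0.5, 0.7], delta=0.05))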
18.06.2014 | 14:00 - 15:00 SG 202 | Frits Spieksma ORSTAT, KU Leuven | Scheduling a Soccer League |
05.06.2014 | 13:00 - 14:00 B201 | Corinna Gottschalk Lehrstuhl für Management Science | Properties of Graph ATSP |
22.05.2014 | 13:00 - 14:00 B201 | Elisabeth Lübbecke Institut für Mathematik, TU Berlin | Bidirectional Scheduling on a Path |
24.04.2014 | 13:00 - 14:00 SG 23 | Daniel Schmand Lehrstuhl für Management Science | Sharing costs for good equilibria in set-dependent congestion games |
14.04.2014 | 16:00 - 16:45 B201 | Jannik Matuschke Departamento de Ingenieria Industrial de Universidad de Chile | Strong LP formulations for scheduling splittable jobs on unrelated machines
We study a natural generalization of the problem of minimizing makespan on unrelated machines in which jobs may be split into parts. The different parts of a job can be simultaneously processed on different machines, but each part requires a setup time before it can be processed. First we show that a natural adaptation of the seminal approximation algorithm for unrelated machine scheduling by Lenstra, Shmoys, and Tardos yields a 3-approximation algorithm, equal to the integrality gap of the corresponding LP relaxation. Through a stronger LP relaxation, obtained by applying a lift-and-project procedure, we are able to improve both the integrality gap and the implied approximation factor to 1 + φ, where φ ≈ 1.618 is the golden ratio. This ratio decreases to 2 in the restricted assignment setting, matching the result for the classic version. Interestingly, we show that our problem cannot be approximated within a factor better than e/(e-1) ≈ 1.582 (unless P = NP). This provides some evidence that it is harder than the classic version, which is only known to be inapproximable within any factor strictly better than 1.5. Since our 1 + φ bound remains tight when considering the seemingly stronger machine configuration LP, we finally propose a new job based configuration LP that has an infinite number of variables, one for each possible way a job may be split and processed on the machines. Using convex duality we show that this infinite LP has a finite representation and can be solved in polynomial time to any accuracy, rendering it a promising relaxation for obtaining better algorithms.
This is joint work with José R. Correa, Alberto Marchetti-Spaccamela, Leen Stougie, Ola Svensson, Victor Verdugo, and José Verschae. |
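As a rough orientation only (not the formulation analyzed in the talk), an assignment-style LP relaxation for this problem could look as follows, assuming p_{ij} and s_{ij} denote the processing and setup time of job j on machine i, x_{ij} the fraction of job j processed on machine i, and y_{ij} a (relaxed) indicator that some part of j is placed on i:

    \[
      \min T \quad \text{s.t.} \quad
      \sum_i x_{ij} = 1 \;\; \forall j, \qquad
      x_{ij} \le y_{ij} \;\; \forall i,j, \qquad
      \sum_j \big( p_{ij} x_{ij} + s_{ij} y_{ij} \big) \le T \;\; \forall i, \qquad
      x_{ij} \ge 0, \;\; 0 \le y_{ij} \le 1 .
    \]

The stronger relaxations discussed in the abstract (lift-and-project, configuration LPs) refine this basic shape.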
17.03.2014 | 14:00 - 15:00 SG 12 | Frauke Liers Universität Erlangen-Nürnberg | Verallgemeinertes quadratisches Assignment - Strukturanalyse und Lösungsmethoden |
05.02.2014 | 13:00 - 14:00 SG 12 | Di Yuan Linköping University | Optimizing Load-Coupled Heterogeneous LTE Networks
In this presentation, we first review a load coupling model that has been proposed and used for performance engineering of LTE networks. The model characterizes the coupling relation between cell loads due to mutual interference, where the load of a cell refers to the resource consumption of the cell. Solving the model enables evaluating resource efficiency at the network level for arbitrary topology and non-uniform demand. The key properties of the model as well as its connection to classical interference functions will be reviewed and examined. Next, we consider LTE heterogeneous networks (HetNets), for which load coupling occurs among macro cell, small cells, as well as between the two cell layers. We illustrate planning and optimization of HetNets by range selection of small cells to maximize the demand to be accommodated. We characterize problem complexity, and present optimization algorithms that allow for close-to-optimal solutions. Finally, some related topics under study are discussed.
Short Bio: Di Yuan received his MSc degree in computer science and engineering, and PhD degree in operations research at Linköping Institute of Technology in 1996 and 2001, respectively. At present he is full professor in Telecommunications at the Department of Science and Technology, Linköping University, and head of a research group in Mobile Telecommunications. His research interests span design, analysis, and resource optimization of telecommunication systems, with over 110 publications within the research area.
Dr Yuan was a guest professor at the Technical University of Milan (Politecnico di Milano), Italy, in 2008. In 2011 and 2013 he was part time with Ericsson Research, Sweden, funded by the Swedish Foundation for Strategic Research (SSF). He is an area editor of the Computer Networks journal. He has been on the management committee of four European Cooperation in the field of Scientific and Technical Research (COST) actions, an invited lecturer of the European Network of Excellence EuroNF, and Principal Investigator of five European FP7 projects. He is a co-recipient of the IEEE ICC'12 Best Paper Award. |
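For orientation, the load-coupling relation referred to in the abstract is often written in roughly the following fixed-point form (a generic textbook-style version with assumed symbols, not necessarily the exact model of the talk): the load rho_i of cell i is the fraction of its resources needed to serve its users j with demands d_j, and the interference a user sees grows with the loads of the other cells,

    \[
      \rho_i = f_i(\boldsymbol{\rho}) = \sum_{j \in \mathcal{J}_i}
      \frac{d_j}{B \, \log_2\!\Big(1 + \frac{p_i g_{ij}}{\sum_{k \ne i} p_k g_{kj} \rho_k + \sigma^2}\Big)} ,
    \]

where B is the bandwidth per resource unit, p_i the transmit power, g_{ij} the channel gain, and sigma^2 the noise; network performance is then evaluated by solving the system rho = f(rho).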
03.02.2014 | 11:00 - 12:00 SG 512 | Jens-P. Bode AG Algebra und Diskrete Mathematik, TU Braunschweig | Achievement Games
A polyomino is a simply edge-connected set of polygons of a tessellation,
that is, the set and its complement are edge-connected. Two sets of
polygons are considered to be the same polyomino if there is a mapping
(generated by translations, rotations, and reflections) of the
tessellation onto itself which also maps one of the sets of polygons onto
the other one.
For a given polyomino P the following achievement game will be
considered. Two players A (first move) and B alternatingly color the
polygons (cells) of the corresponding tessellation. Player A wins if he
achieves a copy of P in his color and B wins otherwise. The polyomino
P is called a winner if there exists a winning strategy for A.
Otherwise there exists a strategy for B to prevent A from winning and
then P is called a loser.
|
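The identification of polyominoes up to translations, rotations, and reflections used in the abstract can be made concrete for the square grid with a small canonical-form routine; the following Python sketch (an illustration for the square tessellation only) maps every cell set to a unique representative of its symmetry class:

    def canonical_form(cells):
        """Return a canonical representative of a polyomino, given as an
        iterable of (x, y) cells, under translations, the four rotations,
        and the reflections of the square grid."""
        def normalize(cs):
            # Translate so the minimum coordinates become (0, 0).
            min_x = min(x for x, y in cs)
            min_y = min(y for x, y in cs)
            return tuple(sorted((x - min_x, y - min_y) for x, y in cs))

        cells = list(cells)
        variants = []
        for reflect in (False, True):
            pts = [(-x, y) if reflect else (x, y) for x, y in cells]
            for _ in range(4):
                pts = [(-y, x) for x, y in pts]   # rotate by 90 degrees
                variants.append(normalize(pts))
        return min(variants)                      # lexicographically smallest variant

    # The L-tromino and any rotated/reflected copy get the same canonical form:
    assert canonical_form([(0, 0), (1, 0), (1, 1)]) == \
           canonical_form([(0, 0), (0, 1), (1, 1)])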
30.01.2014 | 14:15 (TBC) B201 (TBC) | Sebastian Stiller Institut für Mathematik, TU Berlin | How to pack a bag without knowing its size?
The availability of free seats on a flight is usually not known until last minute. Meanwhile airline staff at the gate maintain a waiting list for the groups of passengers on stand-by. How shall they order that list, given that the groups have different sizes and denial of service causes different opportunity costs to the airline?
This - and similar situations of managing stand-by capacity in logistics or services - is the problem of packing a knapsack without knowing its capacity. In this problem we are given a finite set of items each with specific size and value. We want to pack as much value as possible into a knapsack of which we do not know the capacity. Whenever we attempt to pack an item that does not fit, the item is discarded; if the item fits, we have to include it in the packing. We show that there is always a policy that packs a value within factor 2 of the optimum packing, irrespective of the actual capacity. If all items have unit density, we achieve a factor equal to the golden ratio φ ≈ 1.618. Both factors are shown to be best possible.
In fact, we obtain the above factors using packing policies that are universal in the sense that they fix a particular order of the items and try to pack the items in this order, independent of the observations made while packing. We give efficient algorithms computing these policies. On the other hand, we show that, for any α > 1, the problem of deciding whether a given universal policy achieves a factor of α is coNP-complete. If α is part of the input, the same problem is shown to be coNP-complete for items with unit densities. Finally, we show that it is coNP-hard to decide, for given α, whether a set of items admits a universal policy with factor α, even if all items have unit densities. |
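The notion of a universal policy described above is easy to simulate: fix an order of the items, attempt them one by one, and keep exactly those that still fit. The following Python sketch shows the mechanics only; the density ordering used in the example is a plausible heuristic, not the factor-2 policy of the talk.

    def pack_universal(order, capacity):
        """Simulate a universal knapsack policy: try the items (size, value)
        in the given fixed order; an item that fits must be packed, an item
        that does not fit is discarded. Returns the total packed value."""
        remaining, total_value = capacity, 0
        for size, value in order:
            if size <= remaining:      # fits, so we are forced to include it
                remaining -= size
                total_value += value
            # otherwise the item is discarded and we move on
        return total_value

    items = [(4, 8), (3, 5), (2, 2), (1, 1)]
    # One fixed order, evaluated against several capacities unknown in advance:
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)  # by density
    for capacity in (3, 5, 7, 10):
        print(capacity, pack_universal(order, capacity))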
29.01.2014 | 13:00 - 14:00 SG 12 | Ulrich Faigle Mathematisches Institut, Universität zu Köln | Vector space methods in cooperative game theory |
22.01.2014 | 13:00 - 14:00 SG 12 | Philip Voll Lehrstuhl für Technische Thermodynamik, RWTH Aachen | Optimization for the synthesis of energy systems |
15.01.2014 | 13:00 - 14:00 B057 | Andreas Wierz Lehrstuhl für Management Science, RWTH Aachen | Primal-Dual Algorithms for Precedence Constrained Covering Problems |
2013 | |
---|
04.12.2013 | 13:00 - 14:00 SG 12 | Daniel R. Schmidt Universität zu Köln | Basic Network Design: Single Commodity Flows with uncertain demands
We consider a basic network design problem: Suppose a network is used to transfer a single commodity between its nodes V. Each node can have a certain supply or demand of that commodity; however, that amount is subject to uncertainty. Our aim is to find minimum cost integer capacities for the network's links E such that all possible realizations of supplies and demands can be routed through the network. In her PhD thesis (2009), Sanità gave an O(log |V|) approximation for the case that the number of possible realizations is finite, and it is still unknown whether a better approximation ratio can be achieved. We do, however, not aim for an approximation algorithm but tackle the problem with linear programming methods, building on results by Buchheim, Liers and Sanità (INOC 2011). Finally, we extend the model to interval supplies/demands and solve it with an integer programming formulation that uses only O(|E|) variables. |
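As background for the capacity-space view: for a single commodity, a classical Gale-type cut condition states that installed capacities x support a supply/demand vector b exactly when every cut has enough capacity for the net supply it separates; guarding against an uncertainty set B of realizations then amounts, at least conceptually, to requiring

    \[
      \sum_{e \in \delta(S)} x_e \;\ge\; \max_{b \in B} \Big| \sum_{v \in S} b(v) \Big|
      \qquad \text{for all } \emptyset \ne S \subsetneq V .
    \]

This is only a schematic cut-based reading of the feasibility requirement, not the specific O(|E|)-variable formulation of the talk.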
27.11.2013 | 13:00 - 14:00 SG 12 | Stefano Gualandi IDSIA, Lugano | Constrained Shortest Paths with Superadditive Objective Functions |
20.11.2013 | 13:00 - 14:00 B057 | Christian Dobre Process Systems Engineering, Aachener Verfahrenstechnik RWTH Aachen | Conic programming bounds for structured combinatorial problems
In this presentation we show how one can exploit the symmetry in a
hierarchy of improving semidefinite programming (SDP) relaxations of
NP-hard combinatorial optimization problems. Each level of the
hierarchy consists of a semidefinite part and a system of linear
equations coming from coefficient identification in equalities between
polynomials. Both parts need nontrivial preprocessing in order to
reduce the dimension of the problem, and we show how the symmetry in
the data of the problem can be used to achieve the reduction.
Numerically we exemplify the advantages of this approach by providing
new upper bounds on crossing number instances, one of the most well-known
NP-hard combinatorial optimization problems involving
symmetric data. |
06.11.2013 | 13:00 - 14:00 B057 | Tjark Vredeveld Operations Research Group, Maastricht University | Jumping through the neighborhood: a smoothed analysis approach
We consider performance guarantees of local optima w.r.t. the jump neighborhood for makespan scheduling. Although local search methods exhibit good empirical behavior, they often have a bad worst case performance. In this talk, we consider smoothed performance guarantees on the quality of local optima, trying to explain their empirical behavior.
Smoothed analysis was introduced by Spielman and Teng (JACM 2004) as a hybrid between worst case and average case analysis, to explain the good behavior of algorithms that have a bad worst case performance. Up to now, smoothed analysis has been mainly applied to the running time of algorithms. We will use smoothed analysis to investigate the approximation ratio of an algorithm, that is, the ratio between the value of an approximate solution and the optimal solution value. In the last decade, there has been a strong interest in understanding the worst case behavior of local optimal solutions. We extend this research by investigating whether or not this worst case behavior is robust. We will apply the concept of smoothed performance guarantees to several local optima for makespan scheduling problems. As a by-product, we also get a smoothed price of anarchy for the corresponding scheduling games.
This talk is based on joint work with: Tobias Brunsch, Heiko Röglin, and Cyriel Rutten |
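To make the term "smoothed performance guarantee" concrete: whereas the worst-case ratio of a local optimum takes a maximum over all instances, the smoothed guarantee averages over a small random perturbation of each instance before taking the worst case. Schematically (our paraphrase, not a formula from the talk),

    \[
      \text{worst case: } \max_{I} \frac{C_{\mathrm{loc}}(I)}{C_{\mathrm{opt}}(I)}
      \qquad \text{vs.} \qquad
      \text{smoothed: } \max_{I} \; \mathbb{E}_{\phi}\!\left[ \frac{C_{\mathrm{loc}}(I_\phi)}{C_{\mathrm{opt}}(I_\phi)} \right],
    \]

where I_phi denotes the instance I with its numerical data randomly perturbed according to densities bounded by phi.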
30.10.2013 | 13:00 - 14:00 SG 12 | Tobias Harks Operations Research Group, Maastricht University | Complexity and Approximation of the Continuous Network Design Problem
We revisit a classical problem in transportation, known as the continuous (bilevel) network design problem, CNDP for short. We are given a graph for which the latency of each edge depends on the ratio of the edge flow and the capacity installed. The goal is to find an optimal investment in edge capacities so as to minimize the sum of the routing cost of the induced Wardrop equilibrium and the investment cost. While this problem is considered challenging in the literature, its complexity status was still unknown. We close this gap by showing that CNDP is strongly NP-complete and APX-hard, even for instances with affine latencies.
As for the approximation of the problem, we first provide a detailed analysis for a heuristic studied by Marcotte for the special case of monomial latency functions (Mathematical Programming, Vol. 34, 1986). Specifically, we derive a closed form expression of its approximation guarantee for arbitrary sets S of allowed latency functions. Second, we propose a different approximation algorithm and show that it has the same approximation guarantee. As our final – and arguably most interesting – result regarding approximation, we show that using the better of the two approximation algorithms results in a strictly improved approximation guarantee for which we give a closed form expression. For affine latencies, e.g., this algorithm achieves a 49/41 ≈ 1.195-approximation which improves on the 5/4 that has been shown before by Marcotte. |
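For orientation, the bilevel structure described above can be written schematically as follows (a sketch under the assumption that z_e denotes the capacity installed on edge e at unit cost c_e, and f(z) the Wardrop equilibrium flow induced by the latencies):

    \[
      \min_{z \ge 0} \;\; \sum_{e} \ell_e\!\big(f_e(z)/z_e\big)\, f_e(z) \;+\; \sum_e c_e z_e,
      \qquad \text{where } f(z) \text{ is a Wardrop equilibrium for capacities } z .
    \]

The inner equilibrium condition is what makes the problem bilevel and drives the hardness results stated in the abstract.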
23.10.2013 | 13:00 - 14:00 B057 | Britta Peis Lehrstuhl für Management Science, RWTH Aachen University | Matchings, Vertex Cover, and Network Bargaining Games |
16.10.2013 | 13:00 - 14:00 SG 12 | Stefano Coniglio Lehrstuhl II für Mathematik, RWTH Aachen University | New valid inequalities and lifting of robust cover inequalities for the 0-1 Gamma-robust knapsack problem |
10.07.2013 | 13:00 - 14:00 SG 23 | Florian Dahms Lehrstuhl für Operations Research, RWTH Aachen University | Variable aggregation with unequal subproblems |
03.07.2013 | 13:00 - 14:00 B201 | Sarah Kirchner Lehrstuhl für Operations Research, RWTH Aachen University | TBA |
26.06.2013 | 13:00 - 14:00 SG 23 | Annika Thome Lehrstuhl für Operations Research, RWTH Aachen University | Evaluating the quality of a Dantzig-Wolfe decomposition via graph modularity |
19.06.2013 | 13:00 - 14:00 B201 | Michael Bastubbe Lehrstuhl für Operations Research, RWTH Aachen University | A Branch-and-Price Algorithm for Rearranging a Matrix to Arrowhead Form |
12.06.2013 | 13:00 - 14:00 SG 23 | Alexander Grigoriev Department of Quantitative Economics, Maastricht University | The Valve Location Problem in Simple Network Topologies
To control possible spills in liquid or gas transporting pipe systems, the systems are usually equipped with shutoff valves. In case of an accidental leak, these valves separate the system into a number of pieces, limiting the spill effect. In this paper, we consider the problem, for a given edge-weighted network representing a pipe system and for a given number of valves, of placing the valves in the network in such a way that the maximum possible spill, i.e., the maximum total weight of a piece, is minimized. We show that the problem is NP-hard even if restricted to any of the following settings: (i) series-parallel graphs, and hence graphs of tree-width two; and (ii) all edge weights equal one. If the network is a simple path, a cycle, or a tree, the problem can be solved in polynomial time. We also give a pseudo-polynomial time algorithm and a fully polynomial-time approximation scheme for networks of bounded tree-width. |
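For the path case mentioned at the end of the abstract, polynomial-time solvability is easy to see via binary search on the spill value combined with a greedy feasibility check. The Python sketch below is an illustration only; it assumes integer edge weights and that the k valves sit at internal nodes of the path, splitting the sequence of edge weights into at most k+1 contiguous segments.

    def min_max_spill_on_path(weights, k):
        """Place k valves on a path with the given integer edge weights so that
        the heaviest resulting segment is as light as possible; returns that weight.
        Binary search on the answer with a greedy feasibility check."""
        def valves_needed(limit):
            # Greedily extend each segment as far as the limit allows.
            needed, current = 0, 0
            for w in weights:
                if w > limit:
                    return float("inf")   # a single edge already exceeds the limit
                if current + w > limit:
                    needed += 1           # close the segment with a valve
                    current = 0
                current += w
            return needed

        lo, hi = max(weights), sum(weights)
        while lo < hi:
            mid = (lo + hi) // 2
            if valves_needed(mid) <= k:
                hi = mid
            else:
                lo = mid + 1
        return lo

    # A path with 6 pipe segments and 2 valves:
    print(min_max_spill_on_path([3, 1, 4, 1, 5, 9], 2))   # -> 9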
03.06.2013 | 12:00 - 13:00 SG 23 | Dieter Rautenbach Institut für Optimierung und Operations Research, Universität Ulm | Covering and Packing of Long(est) Cycles and Paths |
29.05.2013 | 13:00 - 14:00 B201 | Grit Claßen Lehrstuhl II für Mathematik, RWTH Aachen University | Traffic Node Assignment in Wireless Networks: A Multi-Band Robust Knapsack Approach |
15.05.2013 | 13:00 - 14:00 SG 23 | Stefano Coniglio Lehrstuhl II für Mathematik, RWTH Aachen University | Bound-optimal cutting planes |
08.05.2013 | 13:00 - 14:00 B201 | Martin Tieves Lehrstuhl II für Mathematik, RWTH Aachen University | Extended Cutset Inequalities for the Network Power Consumption Problem |
24.04.2013 | 13:00 - 14:00 SG 23 (Wüllnerstraße) | Stephan Lemkens Lehrstuhl II für Mathematik, RWTH Aachen University | Solving the AC Linear Power Flow Problem |
17.04.2013 | 13:00 - 14:00 B201 (Kackertstraße) | Arie Koster Lehrstuhl II für Mathematik, RWTH Aachen University | Robust Optimization: New Thoughts and New Questions |
31.01.2013 | 09:00 - 09:45 SG 23 | Michael Poss Heudiasyc, Universite de Technologie de Compiegne | A new model for robust combinatorial optimization |
17.01.2013 | 09:00 - 09:45 SG 23 | Jan Simon Lehrstuhl II für Mathematik, RWTH Aachen University | Counting under Group Actions |
10.01.2013 | 09:00 - 09:45 SG 23 | Jessica Emonts Lehrstuhl II für Mathematik, RWTH Aachen University | Searching for Many Defective Edges in Hypergraphs |
2012 | |
---|
13.12.2012 | 09:00 - 09:45 SG 23 | Sebastian Milz Lehrstuhl II für Mathematik, RWTH Aachen University | Maximal Strong Connectivity in Tournaments and Their Generalizations |
06.12.2012 | 09:00 - 09:45 SG 23 | Michael Bastubbe Lehrstuhl für Operations Research, RWTH Aachen University | A Branch-and-Price Algorithm for Rearranging a Matrix to Arrowhead Form |
29.11.2012 | 09:00 - 09:45 SG 23 | Florian Dahms Lehrstuhl für Operations Research, RWTH Aachen University | Optimal Freight Train Classification using Column Generation |
22.11.2012 | 09:00 - 09:45 SG 23 | Fabio D'Andreagiovanni Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) | Robust Optimization under Multi-band Uncertainty |
15.11.2012 | 09:00 - 09:45 SG 23 | Christian Puchert Lehrstuhl für Operations Research, RWTH Aachen University | Large Neighborhood Search and Diving Heuristics in Column Generation Algorithms |
08.11.2012 | 09:00 - 09:45 SG 23 | Martin Bergner Lehrstuhl für Operations Research, RWTH Aachen University | An exact algorithm for the cut packing problem |
25.10.2012 | 09:00 - 09:45 SG 23 | Michael Hoschek Lehrstuhl II für Mathematik, RWTH Aachen University | Lower bounds on the independence number |
18.10.2012 | 09:00 - 09:45 SG 23 | Eberhard Triesch Lehrstuhl II für Mathematik, RWTH Aachen University | Combinatorial Group Testing Problems |
11.10.2012 | 09:00 - 09:45 SG 23 | Marco Lübbecke Lehrstuhl für Operations Research, RWTH Aachen University | Rectangle Covers Revisited Computationally
[2004] Given a rectilinear polygon, possibly with holes, we would like to cover the polygon with a minimal number of axis-parallel rectangles. We think of the polygon as a discrete set of pixels, that is we are looking for a smallest cardinality set of rectangles, the union of which is exactly the set of pixels. The dual question is to find a largest cardinality set of pixels, no two of which can be simultaneously covered by any rectangle. The cardinality of this so-called anti-rectangle set clearly gives a lower bound on the size of an optimal rectangle cover. It is known that both problems are NP-hard for general polygons. No constant factor approximation algorithm is available. We first review both problems, some variants, and special cases. We then discuss very promising computational results from an intuitive integer programming formulation, and point to future research induced by our computations. [2012] We still think that experiments were promising, but we may need to re-define future to 202x. |
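An "intuitive integer programming formulation" of the kind referred to above could be a set-covering model over the maximal axis-parallel rectangles contained in the polygon; this is our reading of the abstract, not necessarily the exact model of the talk:

    \[
      \min \sum_{R \in \mathcal{R}} x_R
      \quad \text{s.t.} \quad \sum_{R \ni p} x_R \ge 1 \;\; \text{for every pixel } p,
      \qquad x_R \in \{0,1\},
    \]

where \mathcal{R} is the set of maximal rectangles inside the polygon. The LP dual of its relaxation is a fractional version of the anti-rectangle question, which is why the two problems bound each other.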
27.08.2012 | 10:45 - 11:30 SG 23 | Truong Khoa Phan INRIA Sophia Antipolis | Optimization on energy-aware routing problem in broadband networks |
27.08.2012 | 10:00 - 10:45 SG 23 | Michael Bodewig Lehrstuhl II für Mathematik, RWTH Aachen University | Completeness and Multiplication of Fix-Free Codes Regarding Ahlswede's 3/4-Conjecture |
16.08.2012 | 10:00 - 10:45 SG 23 | Arie Koster Lehrstuhl II für Mathematik, RWTH Aachen University | The recoverable robust knapsack problem |
09.08.2012 | 10:45 - 11:30 SG 23 | Stephan Lemkens Lehrstuhl II für Mathematik, RWTH Aachen University | Structural Properties of Power Grid Design
The problem of designing a cost minimal power grid is often formulated as a mixed integer linear
program using the well known DC power flow linearization. We consider its projection on the integral
space, as every feasible integral point can be considered as a possible power grid design. We
define the DC power grid design polytope as the convex hull of these integral points.
At first, we will consider the case in which the power flow on each line is not restricted in any way.
We will show that, in this setting, the convex hull is described by the connected subgraph polytope
of the topology graph.
In addition, we will discuss the structural properties under the influence of bounded power flows,
as every real world scenario requires bounded flows. Further, we will study the effects on the
convex hull under the assumption of a metric topology.
Finally, we will discuss the impact on the stated results in the case where we use AC linear power
flows instead of the DC power flow linearization. |
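For readers unfamiliar with the DC linearization mentioned above, a grid-design MIP built on it typically looks roughly as follows (a sketch with assumed notation: y_{ij} a build decision for line (i,j) with susceptance B_{ij}, cost c_{ij}, and flow bound F_{ij}; theta_i nodal phase angles; f_{ij} line flows; d_i net injections; M a large constant):

    \[
      \min \sum_{(i,j)} c_{ij} y_{ij} \quad \text{s.t.} \quad
      \sum_{j} f_{ij} = d_i \;\; \forall i, \qquad
      \big| f_{ij} - B_{ij}(\theta_i - \theta_j) \big| \le M (1 - y_{ij}), \qquad
      |f_{ij}| \le F_{ij}\, y_{ij}, \qquad y_{ij} \in \{0,1\}.
    \]

This is a generic textbook form, not necessarily the exact model of the talk; the talk studies the convex hull of the integral (y) part of such formulations.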
09.08.2012 | 10:00 - 10:45 SG 23 | Grit Claßen Lehrstuhl II für Mathematik, RWTH Aachen University | A Branch-and-Price Approach for Robust Wireless Network Planning
In this work, we consider the problem of base station location and traffic node assignment
in wireless networks. The uncertainty caused by user behaviour is modelled by the well-known Gamma-robustness
approach of Bertsimas and Sim.
We compare a straightforward ILP with a column generation approach. The default
ILP is divided into a restricted master problem and one pricing problem per base station.
If a pricing problem finds an assignment variable with negative reduced cost fulfilling the
capacity constraints, this variable is added to the restricted master problem. Since the pricing
problems are robust knapsack problems, we can apply well-known techniques such as cover
inequalities to improve the solving performance. Due to the integrality of all variables, we
develop problem specific branching rules leading to a branch-and-price approach. Finally, we
present computational results comparing the branch-and-price formulation and the default
ILP. |
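The pricing problems mentioned above are Gamma-robust knapsack problems in the Bertsimas-Sim sense; schematically (a generic form with assumed notation: nominal weights w_j, deviations \hat{w}_j, and at most Gamma weights deviating simultaneously), the capacity constraint reads

    \[
      \sum_{j} w_j x_j \;+\; \max_{S \subseteq J,\, |S| \le \Gamma} \sum_{j \in S} \hat{w}_j x_j \;\le\; c ,
    \]

and dualizing the inner maximization yields the linear system

    \[
      \sum_j w_j x_j + \Gamma \pi + \sum_j \rho_j \le c, \qquad
      \pi + \rho_j \ge \hat{w}_j x_j \;\; \forall j, \qquad \pi, \rho_j \ge 0 ,
    \]

which is why classical knapsack machinery such as cover inequalities can still be applied in the pricing step.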
30.07.2012 | 10:45 - 11:30 SG 23 | Martin Tieves Lehrstuhl II für Mathematik, RWTH Aachen University | Creating Synergies between MIP-Solvers |
30.07.2012 | 10:00 - 10:45 SG 23 | Manuel Kutschka Lehrstuhl II für Mathematik, RWTH | Robust Metric Inequalities for Network Design under Demand Uncertainty
In this talk, we generalize the metric inequalities for the (classical) network design problem to its robust
counterpart. Furthermore, we show that they describe the robust network design problem completely in the
capacity space, where a straightforward generalization of the classical metric inequalities is not sufficient.
We present a polynomial algorithm to separate robust metric inequalities as model inequalities for the capacity
space formulation of the robust network design problem. In computational experiments, we analyze the added
value of this new class of valid inequalities within a branch-and-cut approach to solve the robust network
design problem. |
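For context, the classical metric inequalities state that capacities x are routable for a demand matrix d exactly when, for every edge-length (metric) vector mu,

    \[
      \sum_{e \in E} \mu_e \, x_e \;\ge\; \sum_{(s,t)} \mathrm{dist}_\mu(s,t)\, d_{st} .
    \]

A robust counterpart in the spirit of the abstract replaces the right-hand side by its maximum over the demand uncertainty set, \max_{d \in \mathcal{D}} \sum_{(s,t)} \mathrm{dist}_\mu(s,t)\, d_{st}; this is our schematic reading, not the exact inequalities presented in the talk.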
02.07.2012 | 15:45 - 16:15 SG 23 | Robert Scheidweiler Lehrstuhl II für Mathematik, RWTH Aachen University | Vertex Coloring by means of gap colorings - are there easier approaches? |
22.05.2012 | 16:00 - 17:00 SG 23 | Stefano Coniglio Politecnico di Milano | Cut diversity and multi-objective separation for coordinated cutting plane generation
In cutting plane methods, the issue of how to generate the "best possible" set of cuts is a crucial one. Successful implementations typically include a first phase in which a large number of maximally violated cuts are generated and a second phase in which a subset is selected according to different criteria. Among these criteria, cut parallelism is often adopted to favor the selection of diverse cuts.
In this work, leveraging the idea of cut diversity, we propose a lexicographic multi-objective cutting plane generation scheme that generates, among all the maximally violated valid inequalities of a given family, an inequality that is undominated and maximally diverse w.r.t. the cuts that were previously found. Since the cuts generated according to our scheme depend on those that were previously added, our method introduces a form of coordination between successive cuts.
The focus in this work is on valid inequalities with 0-1 coefficients in the left-hand side and a constant right-hand side, adopting, as cut diversity measure, the 1-norm distance between the normal vectors of two cuts. We show that, in this case, our separation problem reduces to that of generating a maximally violated cut, with different values for the objective function coefficients.
The impact of our method is assessed when separating Stable Set and Cut Set inequalities for, respectively, the Max Clique and Min Steiner Tree problems. We consider the context of both a pure cutting plane and a cut-and-branch algorithm, comparing against the standard separation of undominated maximally violated cuts. Computational experiments show that, in the first setting, our method allows us to close the same fraction of the duality gap in a considerably smaller number of rounds and cuts and that, in the second one, we can substantially reduce the number of branch-and-bound nodes and the overall computing time.
We also briefly consider extensions to the case of inequalities with integer coefficients. Since, in this case, we obtain a nonconvex separation problem which cannot be solved as the standard one with different objective function weights, we either solve it as a MIP or try to reduce it to the 0-1 case by applying a simple linear transformation.
We report and discuss computational results for the separation of rank-1 Chvátal-Gomory (CG) cuts for Max Clique. Computationally, we also investigate the trade-off between cut violation, cut diversity, and cut sparsity.
|
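To see why, for 0-1 coefficient cuts, the diversity term keeps the separation problem linear (as claimed in the abstract): if a in {0,1}^n is the cut being generated and \bar{a} is a previously added cut, the 1-norm distance expands as

    \[
      \| a - \bar a \|_1 \;=\; \sum_i |a_i - \bar a_i|
      \;=\; \sum_i \big( a_i (1 - \bar a_i) + \bar a_i (1 - a_i) \big)
      \;=\; \sum_i (1 - 2\bar a_i)\, a_i \;+\; \sum_i \bar a_i ,
    \]

which is affine in a. Optimizing it lexicographically after the violation therefore only changes the objective coefficients of the standard separation problem, exactly as stated above.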
2011 | |
---|
30.11.2011 | 10:15 - 11:15 SG 23 | Michael Bodewig Lehrstuhl II für Mathematik, RWTH Aachen University | The 3/4 Conjecture for Fix-Free Codes |
22.11.2011 | 12:00 - 13:00 SG 23 | Marjan van den Akker Universiteit Utrecht | Recoverable robustness by column generation
Real-life planning problems are often complicated by the occurrence of disturbances, which imply that the original plan cannot be followed anymore and some recovery action must be taken to cope with the disturbance. In such a situation it is worthwhile to arm yourself against common disturbances. Well-known approaches to create plans that take possible, common disturbances into account are robust optimization and stochastic programming. Recently, a new approach has been developed that combines the best of these two: recoverable robustness. In this paper, we apply the technique of column generation to find solutions to recoverable robustness problems. We consider two types of solution approaches: separate recovery and combined recovery. We show our approach on two example problems: the size robust knapsack problem, in which the knapsack size may get reduced, and the demand robust shortest path problem, in which the sink is uncertain and the cost of edges may increase. |
02.11.2011 | 10:15 - 11:15 SG 23 | David Coudert Inria Sophia Antipolis | Shared Risk Resource Group - Polynomial and FPT algorithms |
01.09.2011 | 10:15 - 10:45 SG 23 | Robert Scheidweiler Lehrstuhl II für Mathematik, RWTH Aachen University | Some new estimations of the gap k-coloring number |
08.08.2011 | 10:15 - 11:15 SG 23 | Christina Büsing Combinatorial Optimization and Graph Algorithms group, TU Berlin | Recoverable Robust Knapsacks |
02.08.2011 | 11:00 - 12:00 SG 23 | Peter Damaschke Chalmers University of Technology, Computer Science and Engineering | A Project on Optimal Group Testing |
26.07.2011 | 14:00 - 15:00 SG 23 | Nicolas Bolivar Broadband Communications and Distributed Systems, Universitat de Girona & Christelle Caillouet Lehrstuhl II für Mathematik, RWTH Aachen University | Minimization of the number of Control Channels in Cognitive Radio Networks with Heterogeneous Frequency Devices |
13.07.2011 | 10:15 - 11:15 SG 23 | Niels Kjeldsen University of Southern Denmark | The Boat and Barge Routing Problem |
05.07.2011 | 10:00 - 11:00 SG 13 | Issam Tahiri Inria Sophia Antipolis | Energy Saving in Fixed Wireless Broadband Networks
In this paper, we present a mathematical formulation for saving energy in fixed broadband
wireless networks by selectively turning off idle communication devices in low-demand scenarios.
This problem builds on a fixed-charge capacitated network design (FCCND) model, which is very hard to
optimize. We then propose heuristic algorithms to produce feasible solutions in a short time. |
29.06.2011 | 10:15 - 11:15 SG 23 | Fabio D'Andreagiovanni Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) | Pure 0-1 Linear Programming formulations for Wireless Network Design
Over the last years, wireless communications have experienced explosive
growth, thus leading to a dramatic congestion of scarce radio resources. In
such a complex scenario, the trial-and-error approach commonly adopted by
practitioners to design networks has clearly shown its limitations. Administrators
are thus searching for more effective design approaches, also looking
to optimization (a remarkable example is provided by the recent international
competitive tender that the Italian Authority for Telecommunications
has issued for the provision of a Digital Video Broadcasting simulator).
The Wireless Network Design Problem (WND) consists in localizing and
configuring the transmitters of a wireless network in order to provide service
coverage to as many receivers as possible. Classical optimization models for
the WND represent power emitted by transmitters as continuous decision
variables. This choice typically yields Mixed-Integer Linear Programs that
present ill-conditioned constraint matrices and include the notorious big-M
coefficients. The corresponding relaxations are very weak and the solutions
returned by state-of-the-art MILP solvers are typically far from the optimum
and sometimes even infeasible.
In this talk, we present three new formulations for the WND that are
specifically developed to tackle all the previously highlighted numerical issues
and that just rely upon the use of binary variables. Two of these formulations,
the Power-Indexed formulation and the Dyadic formulation, are based
on replacing the continuous power variable of each transmitter with the linear
combination of a set of boolean variables, multiplied by suitable power
values. This simple operation allows us to replace the resulting knapsack constraints
by families of (strong) valid inequalities. The third formulation is
instead obtained by identifying infeasible subsystems in the so-called single-interferer
formulation, a relevant relaxation of the original problem.
Computational experience on realistic WiMAX and DVB-T instances
shows that the new formulations are stronger than the classical MILP
one and allow us to find solutions of higher value and quality, in which coverage errors
are completely eliminated. |
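The power-discretization step described above can be sketched as follows (schematic, with assumed symbols: p_t the emitted power of transmitter t, P_1 < ... < P_L a fixed set of admissible power values, and binary variables y_{t,l}):

    \[
      p_t \;=\; \sum_{l=1}^{L} P_l \, y_{t,l}, \qquad
      \sum_{l=1}^{L} y_{t,l} \le 1, \qquad y_{t,l} \in \{0,1\},
    \]

so every constraint that was linear in the continuous power p_t becomes a pure 0-1 constraint in the variables y_{t,l}, which can then be strengthened by standard valid inequalities, in line with the Power-Indexed and Dyadic formulations mentioned above.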
22.06.2011 | 10:15 - 11:15 SG 23 | Jessica Emonts Lehrstuhl II für Mathematik, RWTH Aachen University | Defective Edges in Hypergraphs |
25.05.2011 | 10:15 - 11:15 SG 23 | Grit Claßen Lehrstuhl II für Mathematik, RWTH Aachen University | Energy Efficient Wireless Network Planning under Demand Uncertainty |
18.05.2011 | 10:15 - 11:15 SG 23 | Stephan Lemkens Lehrstuhl II für Mathematik, RWTH Aachen University | Designing AC Power Grids using Integer Linear Programming
Recent developments have drawn focus towards the efficient calculation of flows in AC power grids, which requires solving difficult systems of nonlinear equations. The common linearization approach leads to the well-known and often used DC formulation, which has some major drawbacks. To overcome these drawbacks, we revisit an alternative linearization of the AC power flow. Work on this model was already done in the 1990s but was intractable at that time. In view of recent developments in the field of integer programming, we show that this model is computationally tractable. |
03.05.2011 | 18:00 Hörsaal III (Hauptgebäude) | Martin Aigner Arbeitsgruppe Diskrete Mathematik, Freie Universität Berlin | Volumes, Polyhedra, Prime Numbers - a Mathematical Tour |
27.04.2011 | 10:15 - 11:15 SG 23 | Manuel Kutschka Lehrstuhl II für Mathematik, RWTH Aachen University | Cutset Inequalities for Robust Network Design |
06.04.2011 | 18:00 R5 (Schinkelstraße) | Martin Grötschel Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) | Mathematics in Public Transport: Theory and Practice
My cooperation with providers of public transport started about 20 years ago. Since then, my research group at ZIB has carried out numerous traffic, transport and logistics projects. I will report about real success stories, such as designing and implementing algorithms that help reduce the number of vehicles and/or drivers needed for a public transport system without changing the service level, or the very recent support for the design of a new public transport network in Potsdam.
There were, however, a number of cases where our approach was not too successful. In this lecture I will also mention several such "failure cases" that my research group encountered and analyze the reasons for them. The obstacles to a successful implementation of optimization and operations research techniques are often "missing data", "unclear specifications" and "moving goals", but may also be due to psychological reasons and basic misunderstandings of the roles the people involved play. There are also various fears and "business and power games" that one has to understand, something that scientists have not learned and have difficulty coping with. |
16.02.2011 | 13:00 - 14:00 Sammelbau 517/518 | Enrico Malaguti DEIS, Università di Bologna | Fair Routing in Networks
We study how a mesh network should use the energy stored in its nodes. The solution that minimizes the total energy spent by the whole network may be very unfair to some nodes because they bear a disproportionate burden of the traffic. We explicitly aim at the solution that minimizes the total energy but we add a fairness constraint, thus optimizing social welfare and keeping user needs as constraints. We look both at centralized and decentralized algorithms to solve this problem, and show how fairness can be obtained with a limited increase of total energy. |
19.01.2011 | 10:15 - 11:15 SG 202 | Ute Ziegler LuF Mathematik - Kontinuierliche Optimierung, RWTH Aachen University | Linear Mixed Integer Optimization applied to PDE-/ODE-constrained Dynamic Network Flow Models |
12.01.2011 | 10:15 - 11:15 SG 202 | Martin Bergner Lehrstuhl für Operations Research und Supply Chain Management, RWTH Aachen University | Detecting and exploiting structures in MIPs |