Bipartite Matching & Routing
with Congestion Costs
The following is an exposition of Kelly Ho's Master's thesis (advisors: Dan Calderone and Lillian Ratliff).
The results are in submission to LCSS/CDC 2024.
Motivation
We consider a setting where multiple agents traverse a shared environment in the presence of congestion, and both the target for each agent and the path each agent takes must be assigned. The motivating example for this formulation is modeling the interactions of robots on large sorting floors, such as at Amazon distribution centers.
Problem Setup
We consider an environment such as a standard grid world, modeled by a graph \(\mathcal{G} = (\mathcal{V},\mathcal{E})\) with nodes \(\mathcal{V}\) and edges \(\mathcal{E}\). In the grid world context, nodes (or states) usually refer to the cells of the grid and edges are the connections between them. (We give more details on some of the grid world modeling in a section below.) We consider a population of agents \(\mathcal{I}\), each assigned an origin location in the grid world; we will often denote agents by the index \(i\). The possible destination locations (to be assigned to the agents) are denoted by \(\mathcal{J}\) and indexed by \(j\). Each edge \(e \in \mathcal{E}\) is associated with a flow variable \(x_e \in \mathbb{R}_+\) that denotes how much population mass uses that edge and a latency function \(\ell_e(x_e)\) that gives the travel time for taking that edge. Congestion effects are modeled by taking each latency function to be a positive, monotonically increasing function of flow. We also keep track of flows for each population from origin \(i\) to destination \(j\), with edge flows denoted \(x_{ije}\), and compute the total flow on each edge by summing over \(i,j\): \(x_e = \sum_{ij}x_{ije}\). Note that congestion is still a function of the total flow on an edge, i.e. \(\ell_e(x_e)\). We denote the vector forms of the flows as \(x_{ij}\in \mathbb{R}^{|\mathcal{E}|}\) for each population and \(x \in \mathbb{R}^{|\mathcal{E}|}\) for the total flows, and we collect the latency functions into the vector-valued map \(\ell: \mathbb{R}_+^{|\mathcal{E}|} \to \mathbb{R}^{|\mathcal{E}|}\). In the formulations that follow, we use several matrices (from algebraic graph theory) to encode the connectivity structure of the graph \(\mathcal{G}\). Specifically, the traditional node-edge incidence matrix of a graph is denoted \(E \in \mathbb{R}^{|\mathcal{V}|\times|\mathcal{E}|}\) and encodes which edges are connected to which nodes. We give many more details about this matrix and its properties in this blog post: Incidence Matrices
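For concreteness, here is a minimal sketch (our own illustrative Python/NumPy code, not from the thesis) of building the node-edge incidence matrix \(E\) for a small 4-connected grid world. The sign convention (\(+1\) at an edge's tail node, \(-1\) at its head, so that \(Ex\) gives the net outflow at each node) is one common choice, not something forced by the formulation.

```python
import numpy as np

def grid_world_incidence(rows, cols):
    """Node-edge incidence matrix E for a 4-connected grid world.

    Convention (a choice, not forced by the formulation): column e has +1 at
    the edge's tail node and -1 at its head node, so E @ x is the net outflow
    at each node. Each undirected grid connection becomes two directed edges.
    """
    def node(r, c):
        return r * cols + c

    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:  # horizontal neighbors
                edges += [(node(r, c), node(r, c + 1)), (node(r, c + 1), node(r, c))]
            if r + 1 < rows:  # vertical neighbors
                edges += [(node(r, c), node(r + 1, c)), (node(r + 1, c), node(r, c))]

    E = np.zeros((rows * cols, len(edges)))
    for e, (tail, head) in enumerate(edges):
        E[tail, e], E[head, e] = 1.0, -1.0
    return E, edges

E, edges = grid_world_incidence(3, 3)
print(E.shape)  # (9, 24): 9 cells, 24 directed edges
```

With this convention, routing one unit of mass from cell \(i\) to cell \(j\) corresponds to the source-sink vector \(S_{ij} = e_i - e_j\) in the conservation constraint \(Ex_{ij} = S_{ij}M_{ij}\) used below.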
Convex Optimization Formulations
Here we detail several well-known convex optimization formulations for routing games and matching problems. Our contribution in this project is to combine these formulations into a single optimization framework for solving a matching-plus-path-planning problem in the presence of congestion.
Routing Game Formulation
Shortest path problems without congestion can be framed as linear programs. We detailed this linear program, as well as how to solve it with the simplex method, in this blog post: Shortest Path LP. (As a side note, the simplex method used in that post was a visualization and conceptual exercise; in practice Dijkstra, A\(^*\), or other graph-theoretic algorithms are far more efficient.) In the presence of congestion, we can modify the problem by introducing the routing game potential function $$ \begin{aligned} F(x) = \sum_e \int_0^{x_e} \ell_e(s) \ ds = \sum_e \int_0^{\sum_{ij} x_{ije}} \ell_e(s) \ ds \end{aligned} $$ giving the following optimization formulation. $$ \begin{aligned} \min_{x_{ije}} & \quad F(x) \\ \text{s.t.} & \quad Ex_{ij} = S_{ij}M_{ij}, \ x_{ij} \geq 0 \ \ \forall ij \\ & \quad x = \sum_{ij}x_{ij} \end{aligned} $$ The vectors \(S_{ij} \in \mathbb{R}^{|\mathcal{V}|}\) are source-sink vectors that encode information about origin \(i\) and destination \(j\). Again, we refer to the shortest path LP blog post above for more details. \(M_{ij} \in \mathbb{R}_+\) is the population mass starting at origin \(i\) and traveling to destination \(j\).
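As a rough illustration of this formulation (not the thesis code), the following cvxpy sketch solves the potential problem on a tiny hand-built graph with affine latencies \(\ell_e(x_e) = a_e x_e + b_e\), for which the potential specializes to \(F(x) = \sum_e \left(\tfrac{1}{2} a_e x_e^2 + b_e x_e\right)\). The graph, latency coefficients, and masses are made-up illustrative numbers, and here the masses \(M_{ij}\) are fixed since the matching is only introduced below.

```python
import numpy as np
import cvxpy as cp

# Tiny directed graph: edges as (tail, head); E @ x = net outflow per node.
edges = [(0, 1), (1, 2), (0, 2)]
n_nodes, n_edges = 3, len(edges)
E = np.zeros((n_nodes, n_edges))
for e, (t, h) in enumerate(edges):
    E[t, e], E[h, e] = 1.0, -1.0

# Affine latencies l_e(x_e) = a_e * x_e + b_e (illustrative numbers).
a = np.array([1.0, 1.0, 2.0])
b = np.array([1.0, 1.0, 1.5])

# Fixed origin/destination pairs and masses M_ij (no matching yet).
pairs = [(0, 2), (1, 2)]
mass = [1.0, 1.0]

x_pair = [cp.Variable(n_edges, nonneg=True) for _ in pairs]   # per-pair flows x_ij
x_total = sum(x_pair)                                         # x = sum_ij x_ij

constraints = []
for k, (i, j) in enumerate(pairs):
    s = np.zeros(n_nodes)
    s[i], s[j] = 1.0, -1.0            # source-sink vector S_ij
    constraints.append(E @ x_pair[k] == mass[k] * s)

# Potential for affine latencies: sum_e (a_e x_e^2 / 2 + b_e x_e).
F = 0.5 * cp.sum(cp.multiply(a, cp.square(x_total))) + b @ x_total
cp.Problem(cp.Minimize(F), constraints).solve()
print("total edge flows:", np.round(x_total.value, 3))
```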
Linear Assignment Matching Problem
The linear assignment matching problem finds a minimum-weight perfect matching on a complete weighted bipartite graph. Applications include many problems where resources need to be allocated optimally, such as assigning \(n\) workers to perform \(n\) tasks in the most efficient way. A summary of this problem formulation is given in this slide: Linear Assignment Matching Convex Program. Common solution techniques include the Hungarian/Munkres algorithm and linear programming; we detail the linear programming approach here.
This problem can be framed as the following linear program. Consider the scenario where we want to assign \(n\) agents to \(n\) tasks, and assigning agent \(i\) to task \(j\) incurs a cost of \(C_{ij}\). Organizing the costs into a matrix \(C \in \mathbb{R}^{n \times n}\) where \(C_{ij}\) is the cost of agent \(i\) performing task \(j\), we can formulate the following linear program. $$ \begin{aligned} \min_M & \quad \text{Tr}(C^\top M) = \sum_{ij} C_{ij}M_{ij} \\ \text{s.t.} & \quad \mathbf{1}^\top M = \mathbf{1}^\top, \ M \mathbf{1} = \mathbf{1}, \ M \geq 0 \end{aligned} $$ Here \(M \in \mathbb{R}^{n \times n}\) is a matrix where \(M_{ij}\) says how much mass or "effort," from 0 to 1 (100%), agent \(i\) puts toward task \(j\). The constraints ensure that each agent gives 1 unit of effort (\(M\mathbf{1} = \mathbf{1}\)), that each task only gets 1 unit of effort assigned to it (\(\mathbf{1}^\top M = \mathbf{1}^\top \)), and that all efforts are nonnegative (\(M \geq 0\)). Algebraically, \(M\) is constrained to be a doubly stochastic matrix (nonnegative elements, rows and columns each summing to 1). From the Birkhoff-von Neumann theorem, we know that the set of doubly stochastic matrices is the convex hull of the set of permutation matrices. If \(M\) is a permutation matrix, then each agent is assigned to one and only one task and every task gets an agent. By allowing \(M\) to be a doubly stochastic matrix (instead of a permutation matrix), we are solving a convex relaxation of the matching problem in which agents can split effort across multiple tasks. This relaxation is tight in the sense that there is always a vertex (a permutation matrix) that achieves the optimal cost.
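As a quick hedged example (the cost matrix below is made up for illustration), scipy's `linear_sum_assignment` solves exactly this problem with a Hungarian/Munkres-style algorithm and returns a permutation, consistent with the tightness of the LP relaxation discussed above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative costs: C[i, j] = cost of agent i performing task j.
C = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(C)   # Hungarian/Munkres-style solver
M = np.zeros_like(C)
M[rows, cols] = 1.0                     # optimal permutation matrix

print("assignment (agent, task):", list(zip(rows.tolist(), cols.tolist())))
print("total cost:", C[rows, cols].sum())
```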
Combined Matching and Routing Formulation
In our work, we combine the above routing game formulation and the matching formulation to solve a task assignment problem in which the cost of each task is a shortest path cost through a congested environment. The joint formulation is as follows. $$ \begin{aligned} \min_{x_{ije}} & \quad F(x) \\ \text{s.t.} & \quad Ex_{ij} = S_{ij}M_{ij}, \ x_{ij} \geq 0 \ \ \forall ij \\ & \quad x = \sum_{ij}x_{ij} \\ & \quad \mathbf{1}^\top M = \mathbf{1}^\top, \ M \mathbf{1} = \mathbf{1}, \ M \geq 0 \end{aligned} $$ Note that we have simply organized the masses \(M_{ij}\) on each origin-destination pair from the routing game into the assignment matrix \(M\) of the linear assignment convex program introduced above. By analyzing the KKT conditions of this problem, we can show that at the optimum this problem finds an optimal matching between agent starting locations and task destinations, where the costs are given by shortest path costs with congestion.
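A minimal cvxpy sketch of the joint problem is below, again assuming affine latencies on a made-up toy graph. The only change from the routing-only program above is that \(M\) becomes a doubly stochastic decision variable coupling the flow conservation constraints; the instance data are illustrative assumptions, not from the thesis.

```python
import numpy as np
import cvxpy as cp

# Small directed cycle graph: edges as (tail, head).
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 0), (0, 3)]
n_nodes, n_edges = 4, len(edges)
E = np.zeros((n_nodes, n_edges))
for e, (t, h) in enumerate(edges):
    E[t, e], E[h, e] = 1.0, -1.0

a, b = np.ones(n_edges), np.ones(n_edges)   # latencies l_e(x_e) = a_e x_e + b_e
origins, dests = [0, 1], [2, 3]             # agent starts and candidate tasks
n = len(origins)

M = cp.Variable((n, n), nonneg=True)                    # relaxed assignment matrix
x = {(i, j): cp.Variable(n_edges, nonneg=True)          # per-pair flows x_ij
     for i in range(n) for j in range(n)}
x_total = sum(x.values())

constraints = [cp.sum(M, axis=0) == 1, cp.sum(M, axis=1) == 1]   # doubly stochastic
for i in range(n):
    for j in range(n):
        s = np.zeros(n_nodes)
        s[origins[i]], s[dests[j]] = 1.0, -1.0                   # source-sink S_ij
        constraints.append(E @ x[(i, j)] == M[i, j] * s)

# Potential for affine latencies: sum_e (a_e x_e^2 / 2 + b_e x_e).
F = 0.5 * cp.sum(cp.multiply(a, cp.square(x_total))) + b @ x_total
cp.Problem(cp.Minimize(F), constraints).solve()

print("assignment M:\n", np.round(M.value, 3))
print("total edge flows:", np.round(x_total.value, 3))
```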
KKT Conditions and Dual Variable Description
Noting that \(\frac{\partial F}{\partial x} = \ell(x)^\top\), the stationarity portion of the KKT conditions for the joint optimization problem is as follows. $$ \begin{aligned} \ell(x)^\top - v_{ij}^\top E - \mu_{ij}^\top & = 0, \ \ \mu_{ij} \geq 0, \ \ \mu_{ij}^\top x_{ij} = 0 \quad \forall ij \\ C - \mathbf{1}u^\top - v \mathbf{1}^\top - U & = 0, \ \ U \geq 0, \ \ \text{Tr}(U^\top M)=0, \quad C_{ij} = v_{ij}^\top S_{ij} \end{aligned} $$ Here we have introduced the following dual variables. \(v_{ij} \in \mathbb{R}^{|\mathcal{V}|}\) are the dual variables for the incidence matrix constraint of each population and have the interpretation of the cost-to-go to the destination for that population. \(\mu_{ij}\) comes from the nonnegativity of \(x_{ij}\) and can be interpreted as the inefficiency of traveling on each edge. From the double stochasticity constraints, we get dual variables \(u \in \mathbb{R}^{|\mathcal{J}|}\) and \(v \in \mathbb{R}^{|\mathcal{I}|}\): \(u\) is a "column potential" variable that gives a lower bound on the cost of performing each task, and \(v\) is a "row potential" variable that gives a lower bound on the cost incurred by each agent. (See explanations of the Hungarian/Munkres algorithm for more details.) \(U \in \mathbb{R}^{|\mathcal{I}|\times|\mathcal{J}|}\) comes from the nonnegativity of \(M\), and \(U_{ij}\) can be interpreted as the inefficiency of agent \(i\) performing task \(j\). The interpretation of the dual variables, as well as the point of the optimization, becomes clear from the following arguments.
Consider any path from origin \(i\) to destination \(j\), represented by an indicator vector \(\xi_{ij} \in \mathbb{R}^{|\mathcal{E}|}\) where \((\xi_{ij})_e = 1 \) if edge \(e\) is in the path and \((\xi_{ij})_e = 0 \) otherwise. Note that by construction \(E\xi_{ij} = S_{ij}\). Right-multiplying the first KKT condition above by this indicator vector, we get $$ \begin{aligned} \ell(x)^\top \xi_{ij} & = v_{ij}^\top E\xi_{ij} + \mu_{ij}^\top \xi_{ij} = v_{ij}^\top S_{ij} + \mu_{ij}^\top \xi_{ij} \end{aligned} $$ Here the left-hand side is the cost of the path (with congestion). The term \(v_{ij}^\top S_{ij}\) is independent of the path and only depends on the origin/destination encoded in the source-sink vector \(S_{ij}\). Since \(\mu_{ij}^\top \xi_{ij} \geq 0\), the term \(v_{ij}^\top S_{ij}\) is a lower bound on the optimal path cost; and since \(\mu_{ij}^\top x_{ij}=0\), if the support of \(\xi_{ij}\) is contained in the support of \(x_{ij}\) then \(\mu_{ij}^\top \xi_{ij}=0\) and the path is in fact an optimal path with cost \(v_{ij}^\top S_{ij}\), which we define as \(C_{ij}\). We now have that the cost matrix \(C\) in the second KKT condition contains the minimal path cost for each origin-destination pair. We can now evaluate the cost of a particular matching by right-multiplying the second KKT condition by the appropriate permutation matrix \(P\) and taking a trace. Since \(P\) is a permutation, we have \(P \geq 0 \), \(\mathbf{1}^\top P = \mathbf{1}^\top \), and \(P\mathbf{1}=\mathbf{1}\). Applying this we get $$ \begin{aligned} \text{Tr}(C^\top P) = \text{Tr}(u \mathbf{1}^\top P) + \text{Tr}(\mathbf{1}v^\top P) + \text{Tr}(U^\top P) = \mathbf{1}^\top u + v^\top \mathbf{1} + \text{Tr}(U^\top P) \end{aligned} $$ Note again that \(\mathbf{1}^\top u + v^\top \mathbf{1}\) is independent of the matching and is a lower bound on the optimal matching cost since \(\text{Tr}(U^\top P) \geq 0\). Since \(\text{Tr}(U^\top M) = 0\), if the support of \(P\) is contained in the support of \(M\) then \(\text{Tr}(U^\top P) = 0\) and \(P\) is an optimal matching.
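To make the potential/reduced-cost part of this argument concrete, the hedged sketch below solves the assignment LP for a made-up cost matrix with scipy's HiGHS backend, reads the row and column potentials off the equality-constraint marginals (the sign convention here is our assumption about the solver, not part of the formulation), and checks numerically that \(U = C - \mathbf{1}u^\top - v\mathbf{1}^\top \geq 0\) with \(\text{Tr}(U^\top M) = 0\) and that \(\mathbf{1}^\top u + v^\top \mathbf{1}\) equals the optimal matching cost.

```python
import numpy as np
from scipy.optimize import linprog

n = 3
C = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])        # illustrative costs

# Assignment LP over M.ravel(): min <C, M> s.t. row sums = 1, column sums = 1, M >= 0.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # row-sum constraints (duals -> v)
for j in range(n):
    A_eq[n + j, j::n] = 1.0            # column-sum constraints (duals -> u)
b_eq = np.ones(2 * n)

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
M = res.x.reshape(n, n)

# Assumed sign convention: the HiGHS equality marginals are the usual LP duals y,
# so that the reduced costs C - A_eq^T y are nonnegative at the optimum.
v = res.eqlin.marginals[:n]            # row potentials (one per agent)
u = res.eqlin.marginals[n:]            # column potentials (one per task)

U = C - np.outer(np.ones(n), u) - np.outer(v, np.ones(n))   # reduced costs
print("U >= 0:", bool(np.all(U >= -1e-8)))
print("Tr(U^T M) = 0:", abs(np.sum(U * M)) < 1e-8)
print("lower bound 1^T u + v^T 1:", u.sum() + v.sum(), " optimal cost:", res.fun)
```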
Frank-Wolfe Algorithm Implementation: Combining Dijkstra/A\(^*\) and Munkres
In the thesis referenced above, we give an algorithm for implementing the Frank-Wolfe algorithm to solve this joint routing and matching problem with congestion. The solution to each LP subproblem can be computed using efficient shortest path algorithms such as Dijkstra/A\(^*\), and the matching portion can be solved using the Munkres algorithm. For further details, the reader is referred to Kelly's master's thesis (linked above).
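The thesis gives the precise algorithm; as a rough, self-contained sketch of the general pattern, the code below runs a Frank-Wolfe loop on a made-up toy instance with affine latencies. At each iteration it computes the current edge latencies, gets shortest-path costs and paths with scipy's Dijkstra, picks a matching with `linear_sum_assignment` (Munkres), routes the matched mass all-or-nothing along those paths, and takes the standard \(2/(k+2)\) step. The instance data, unit masses, and step rule are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

# Toy instance: a 4-node cycle, two agents (unit mass each), two candidate tasks.
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 0), (0, 3)]
n_nodes, n_edges = 4, len(edges)
a, b = np.ones(n_edges), np.ones(n_edges)   # latencies l_e(x_e) = a_e x_e + b_e
origins, dests = [0, 1], [2, 3]
tails = np.array([t for t, _ in edges])
heads = np.array([h for _, h in edges])
edge_index = {e: k for k, e in enumerate(edges)}

def matched_shortest_path_flows(latency):
    """Frank-Wolfe subproblem: Dijkstra shortest paths + Munkres matching,
    then route each matched pair's unit mass all-or-nothing on its path."""
    w = csr_matrix((latency, (tails, heads)), shape=(n_nodes, n_nodes))
    dist, pred = dijkstra(w, indices=origins, return_predecessors=True)
    C = dist[:, dests]                           # congested shortest-path costs
    rows, cols = linear_sum_assignment(C)        # optimal matching on current C
    y = np.zeros(n_edges)
    for i, j in zip(rows, cols):
        node = dests[j]
        while node != origins[i]:                # walk back the predecessor tree
            prev = int(pred[i, node])
            y[edge_index[(prev, node)]] += 1.0
            node = prev
    return y

x = matched_shortest_path_flows(b)               # initialize at free-flow latencies
for k in range(1, 100):
    y = matched_shortest_path_flows(a * x + b)   # linearized subproblem vertex
    x += (2.0 / (k + 2.0)) * (y - x)             # standard Frank-Wolfe step size

print("edge flows:", np.round(x, 3))
```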