Capacity Scaling | Network Flow

Graph Theory Algorithms
10 minutes

Transcript

Hello, and welcome back. My name is William, and today we're still talking about network flow. In particular, we're going to cover something called capacity scaling, which is really more of a heuristic than it is an algorithm. Capacity scaling says that we should attempt to push flow only through the largest edges first, and only then allow using edges which have smaller capacities, in order to achieve the maximum flow more rapidly. This video builds off the max flow tutorial I made a few videos back, so I highly recommend you go back and watch that before proceeding.

Just before we dive into capacity scaling, I want to quickly revisit finding the max flow using a depth first search and the issues surrounding it. I keep coming back to this because I think it's important that we understand the intuition behind why all these newer max flow algorithms were developed and why they came about. When we're looking for augmenting paths, the worst case is when we can only augment the flow by one unit, and it goes something like this: we start at the source node, take any edge with a remaining capacity greater than zero, and just keep going until we reach the sink.
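To make that failure mode concrete, here is a minimal sketch of the classic worst-case graph; the node names and capacity values are my own hypothetical example, not the exact graph from the video:

```python
# Hypothetical worst case for a DFS-based Ford-Fulkerson: every
# augmenting path that crosses the middle edge has a bottleneck of 1,
# so a naive DFS may need ~200 iterations instead of just 2.
capacity = {
    ('s', 'a'): 100, ('s', 'b'): 100,
    ('a', 'b'): 1,                     # the troublesome middle edge
    ('a', 't'): 100, ('b', 't'): 100,
}
# A DFS can alternate between s->a->b->t and s->b->a->t (the latter
# taking the residual of a->b), pushing only one unit per iteration.
```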

And once we've reached the sink, we find the bottleneck value, that is, the edge with the smallest remaining capacity on our augmenting path, which in this case happens to be one. Then we augment, or update, the flow by adding the bottleneck value to the flow along forward edges and subtracting the bottleneck value from the flow along the residual edges. However, we're not done, so we're going to start once again at the source and find another path. Suppose this time we take the edge going down, then take the residual edge going up and sideways, and then down again. Now we have found another augmenting path, and we can find its bottleneck value.

Recall that the remaining capacity of an edge is calculated as the capacity minus the flow. This allows residual edges with a negative flow to have a positive remaining capacity. Notice that yet again, the bottleneck value for this path is only one unit of flow. Now, update or augment the flow: add the bottleneck value to the flow along the forward edges and subtract the bottleneck value from the flow along the residual edges. You could imagine the depth first search algorithm repeatedly taking an edge with a remaining capacity of one each time, which would ultimately limit how much flow we can push through the network in each iteration, as shown in the next few slides. So it would look like this: we just keep alternating between the forward and the residual edge with a capacity of one.
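In code, that bookkeeping might look like this small sketch (the class and method names are my own, not from this series' source code):

```python
class Edge:
    """A directed edge paired with its residual (reverse) edge."""
    def __init__(self, to, capacity):
        self.to = to
        self.capacity = capacity
        self.flow = 0
        self.residual = None  # set when the paired reverse edge is created

    def remaining_capacity(self):
        # capacity - flow: a residual edge has capacity 0, so once flow
        # is pushed its flow goes negative and 0 - (-f) = f is positive.
        return self.capacity - self.flow

    def augment(self, bottleneck):
        self.flow += bottleneck           # add along the forward edge
        self.residual.flow -= bottleneck  # subtract along the residual edge
```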

Capacity scaling is the idea that we should prioritize taking edges with larger capacities first, to avoid ending up with a path with a small bottleneck. If we adjust the size of each edge based on its capacity value, then we can more easily visualize which edges we should give more attention to. The capacity scaling algorithm is pretty straightforward, but first we need to define two variables. Let U equal the value of the largest edge capacity in the initial flow graph, and let delta be the largest power of two which is less than or equal to U. The capacity scaling heuristic says that we should only take edges whose remaining capacity is greater than or equal to delta, in order to achieve a better runtime.
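As a quick sketch, computing these two values might look like this (the helper name is my own):

```python
def starting_delta(capacities):
    # U = value of the largest edge capacity in the initial flow graph.
    U = max(capacities)
    # delta = largest power of two <= U; assumes at least one
    # capacity is positive, so U.bit_length() >= 1.
    return 1 << (U.bit_length() - 1)
```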

But that's not all there is to the algorithm. The algorithm repeatedly finds augmenting paths through the flow graph whose edges all have a remaining capacity greater than or equal to delta, until no more paths satisfy this criterion. Once the criterion is no longer met, we decrease the value of delta by dividing it by two, and we repeat this process while delta is greater than zero. The reason you would want to implement capacity scaling is that it's very easy to code up, and it works very, very well in practice. In terms of time complexity, capacity scaling with a depth first search runs in O(E² log U), and in O(E·V·log U) if the shortest augmenting path is found each iteration, which is basically Edmonds-Karp but with capacity scaling. Although, I have found the latter to be much slower in practice, so I would recommend the depth first search if you are going to implement this.
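Putting it all together, here is a minimal sketch of capacity scaling on top of a DFS-based Ford-Fulkerson. The class layout and names are my own illustration of the idea described above, not the series' actual source code:

```python
class Edge:
    def __init__(self, to, capacity):
        self.to, self.capacity, self.flow = to, capacity, 0
        self.residual = None  # paired reverse edge, set in add_edge

    def remaining(self):
        # Residual edges have capacity 0 and negative flow, so their
        # remaining capacity becomes positive once flow is pushed.
        return self.capacity - self.flow

class CapacityScaling:
    def __init__(self, n, s, t):
        self.n, self.s, self.t = n, s, t  # assumes s != t
        self.adj = [[] for _ in range(n)]
        self.U = 0  # largest initial edge capacity seen so far

    def add_edge(self, u, v, cap):
        fwd, rev = Edge(v, cap), Edge(u, 0)
        fwd.residual, rev.residual = rev, fwd
        self.adj[u].append(fwd)
        self.adj[v].append(rev)
        self.U = max(self.U, cap)

    def max_flow(self):
        flow = 0
        delta = 1 << (self.U.bit_length() - 1) if self.U > 0 else 0
        while delta > 0:
            # Repeatedly find augmenting paths whose edges all have
            # remaining capacity >= delta, until none are left ...
            bottleneck = 1
            while bottleneck > 0:
                visited = [False] * self.n
                bottleneck = self._dfs(self.s, float('inf'), delta, visited)
                flow += bottleneck
            delta //= 2  # ... then halve delta and repeat.
        return flow

    def _dfs(self, node, flow, delta, visited):
        if node == self.t:
            return flow
        visited[node] = True
        for edge in self.adj[node]:
            # The capacity scaling heuristic: only take edges whose
            # remaining capacity is >= delta.
            if edge.remaining() >= delta and not visited[edge.to]:
                b = self._dfs(edge.to, min(flow, edge.remaining()), delta, visited)
                if b > 0:
                    edge.flow += b            # add along the forward edge
                    edge.residual.flow -= b   # subtract along the residual
                    return b
        return 0
```

Calling max_flow() returns the sum of all the bottleneck values found across every delta phase; once delta reaches one, the final phase is just ordinary Ford-Fulkerson, so the result is the true maximum flow.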

Let's do an example: let's find the maximum flow of the following flow graph using capacity scaling. First, compute U as the maximum of all initial capacity values. In this example, U is the maximum of 6, 14, 1, 5, 7, 10, 1, 11, and 12, which happens to be 14. Next, compute the starting value for delta, which is the largest power of two less than or equal to U, which we know is 14. Therefore the starting value of delta is eight, since the next power of two after 8 is 16, but 16 is larger than 14. Now that we have delta, we can start finding paths from s to t which have a remaining capacity greater than or equal to eight.
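Plugging those numbers into the earlier sketch confirms the starting values (the capacity list is my reading of the example graph):

```python
>>> capacities = [6, 14, 1, 5, 7, 10, 1, 11, 12]
>>> U = max(capacities)
>>> U
14
>>> 1 << (U.bit_length() - 1)   # largest power of two <= 14
8
```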

Start our search at the source. From the source, there's only one edge which has a remaining capacity of eight or more, which is the edge with a capacity of 14 going downwards. Then there's the edge sideways with a remaining capacity of 10 we can take, and finally an edge with a remaining capacity of 12 going upwards, which we can also take. Now we've reached the sink, so we can find the bottleneck value, which is 10, because 10 is the smallest remaining capacity along the found path. Next, augment the flow along the path. I scaled down the size of each edge to reflect how much remaining capacity it has left.

If you analyze the flow graph, there are no more augmenting paths from s to t which have a remaining capacity greater than or equal to eight, so delta is halved and is now equal to four. One path we can take with all remaining capacities of four or more is the following: start at the source, go up, sideways, and sideways again, then do the usual: find the bottleneck, which here is five, and augment the flow. There is also another path with all remaining capacities of four or more which we can take from s to t: down to node one, diagonally up to node two, and on to the sink. Again, find the bottleneck value, which we know to be four, because four is the smallest remaining capacity along the path, and augment the flow. If you now inspect the flow graph, there are no more paths from s to t with all remaining capacities greater than or equal to four, so halve the value of delta to two.

However, there are also no paths whose remaining capacities are all two or more, so we need to halve the value of delta again. So now delta is equal to one. I believe there is one remaining path we can take before the graph is fully saturated; let's start at the source and find it. Alright, now we've found the path, and we can also find the bottleneck, which has a value of one. Now the last step is to augment the flow.

And now there are no more paths from s to t which have a remaining capacity greater than or equal to one, so the new value of delta is zero, which terminates the algorithm. We can compute the maximum flow by summing up all the bottleneck values we found in each iteration, which we know to be 10, 5, 4, and 1, for a total of 20. We can also compute the maximum flow by summing the flow values going into the sink, highlighted in red. So, in summary: we know that Ford-Fulkerson implemented with a depth first search can result in a bottleneck value of one in each iteration, which kills the time complexity. Capacity scaling is when we push flow only through larger edges first, to try and achieve a better runtime. One approach to capacity scaling is to maintain a decreasing parameter delta, which acts as a threshold for which edges should be accepted and which should be rejected, based on their remaining capacity.

This is a pretty simple but extremely powerful idea that greatly speeds up finding the maximum flow. Awesome, that's all I have on capacity scaling. Thank you for watching. In the next video, we'll take a quick look at some source code, so stick around. Until then, please like this video and subscribe for more mathematics and computer science videos. Thank you.
