# How to understand the dynamic programming solution in linear partitioning?

Be aware that there’s a small mistake in the explanation of the algorithm in the book; look in the errata for the text “(*) Page 297”.

1. No, the items don’t need to be sorted; they only need to be contiguous (that is, you can’t rearrange them)
2. I believe the easiest way to visualize the algorithm is by tracing the `reconstruct_partition` procedure by hand, using the rightmost table in figure 8.8 as a guide
3. The book states that m[i][j] is “the minimum possible cost over all partitionings of {s1, s2, …, si}” into j ranges, where the cost of a partition is the largest sum of elements in one of its parts. In other words, it’s the “smallest maximum of sums”, if you pardon the abuse of terminology. On the other hand, d[i][j] stores the index of the split position that was used to achieve that minimum cost for a given pair i, j as defined before
4. For the meaning of “cost”, see the previous answer
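To make points 3 and 4 concrete, here’s a minimal sketch of just the recurrence that fills m and d. The function and variable names (`partition_tables`, `prefix`) are my own, and it uses 0-based indices rather than the book’s 1-based notation:

```python
def partition_tables(s, k):
    """m[i][j]: minimum, over all ways to split s[0..i] into j+1 contiguous
    ranges, of the largest range sum; d[i][j]: the split index achieving it.
    (My own sketch of the recurrence; 0-based, unlike the book.)"""
    n = len(s)
    prefix = [0]                         # prefix[i] = s[0] + ... + s[i-1]
    for x in s:
        prefix.append(prefix[-1] + x)
    m = [[0] * k for _ in range(n)]
    d = [[0] * k for _ in range(n)]
    for i in range(n):
        m[i][0] = prefix[i + 1]          # a single range costs the total sum
    for j in range(1, k):
        m[0][j] = s[0]                   # a single element costs s[0]
        for i in range(1, n):
            # try every last split point x: the left part s[0..x] uses j
            # ranges, the final range s[x+1..i] sums to prefix[i+1] - prefix[x+1]
            m[i][j], d[i][j] = min(
                (max(m[x][j - 1], prefix[i + 1] - prefix[x + 1]), x)
                for x in range(i))
    return m, d
```

For example, with s = [1, 2, 3, 4, 5] and k = 3, m[4][2] comes out as 6, matching the optimal split [1, 2, 3] | [4] | [5].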

Edit:

Here’s my implementation of the linear partitioning algorithm. It’s based on Skiena’s algorithm, but written in a more Pythonic style, and it returns a list of the partitions.

```python
from operator import itemgetter

def linear_partition(seq, k):
    """Partition seq into k contiguous ranges, minimizing the largest range sum."""
    if k <= 0:
        return []
    n = len(seq) - 1
    if k > n:
        return [[x] for x in seq]
    table, solution = linear_partition_table(seq, k)
    k, ans = k - 2, []
    # Walk the solution table backwards, peeling off one range at a time
    while k >= 0:
        ans = [[seq[i] for i in range(solution[n - 1][k] + 1, n + 1)]] + ans
        n, k = solution[n - 1][k], k - 1
    return [[seq[i] for i in range(0, n + 1)]] + ans

def linear_partition_table(seq, k):
    n = len(seq)
    table = [[0] * k for x in range(n)]
    solution = [[0] * (k - 1) for x in range(n - 1)]
    # First column: prefix sums (cost of putting seq[0..i] in a single range)
    for i in range(n):
        table[i][0] = seq[i] + (table[i - 1][0] if i else 0)
    # First row: a one-element prefix always costs seq[0]
    for j in range(k):
        table[0][j] = seq[0]
    for i in range(1, n):
        for j in range(1, k):
            # Try every split point x; the cost is the worse of the left part
            # (table[x][j-1]) and the sum of the final range
            table[i][j], solution[i - 1][j - 1] = min(
                ((max(table[x][j - 1], table[i][0] - table[x][0]), x)
                 for x in range(i)),
                key=itemgetter(0))
    return (table, solution)
```
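A cheap way to sanity-check any implementation on small inputs is to compare against an exhaustive search over cut positions. This helper (`brute_force_min_cost` is my own name, not part of the algorithm) tries every way to cut the sequence into k contiguous parts:

```python
from itertools import combinations

def brute_force_min_cost(seq, k):
    """Exhaustively try every way to cut seq into k contiguous parts and
    return the smallest possible largest-part sum (small inputs only)."""
    n = len(seq)
    best = sum(seq)
    # choose k-1 cut positions among the n-1 gaps between elements
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        cost = max(sum(seq[a:b]) for a, b in zip(bounds, bounds[1:]))
        best = min(best, cost)
    return best
```

For [1, 2, 3, 4, 5] with k = 3 this returns 6, agreeing with the cost of the partition [1, 2, 3] | [4] | [5] that the DP finds.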