TCO14 R2B

A couple of little mistakes on my part cost me several hundred placings on the final scoreboard, but I don’t think I was ever going to get a t-shirt this round.  A really fast solve of the first problem was probably enough for a t-shirt this time, with the second problem as well to make it safe.  Solving those 2 in a decently fast time was needed to advance; solving all 3 was only done by a few.

Q1) Given N steps, each with M conditions (true, false or don’t care), determine the minimum number of times values have to be changed from true to false (in a batch) or vice versa to get past all N steps in order, starting from M values which are all false.

A1) This problem boils down to realizing that it can be solved using a lazy greedy.  If it wasn’t for the ‘don’t care’ states, the problem would be trivial, as every change is forced and you simply never make an unneeded change.  The lazy greedy approach is to treat a ‘don’t care’ as if it were the condition of the next step where that value is something other than ‘don’t care’, but only if you were already going to make a change in that direction for other reasons.  This works because either the value gets changed for free before you reach that later step, or you avoid incurring the cost of doing it early when there was going to be a batch at that later step anyway.

So the simple way to do this is with an O(M*N^2) algorithm where at each step you determine what ‘has’ to be done to satisfy the true and false conditions, and also perform any changes of state for ‘don’t care’ conditions where the future desired value can be folded into a batch which is already going to be performed, scanning ahead each time you see a question mark.  I wrote an O(NM) algorithm by pre-calculating the results of the scan ahead, working backwards through the steps.  This is where my first mistake happened: I missed a line of code in my pre-calculation loop, so it contained garbage data, and I wasted a bunch of time debugging.  The second mistake was that I wrote a conditional of the form if (A and !B) B = true – which could have just been if (A) B = true.  Then I copy-pasted that conditional and changed it to B = false, but didn’t change the !B to B.  I had to take a time penalty to resubmit because I only thought to double-check my copy-pastes after I pushed submit…
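Purely as an illustration of the O(NM) shape, here is a rough sketch.  The input encoding ('T', 'F', '?' per condition) and the cost model (one unit for each step that needs a false-to-true batch, plus one for each step that needs a true-to-false batch) are my assumptions rather than the original statement, so treat it as an outline of the backwards pre-calculation plus lazy greedy, not a verified solution.

#include <bits/stdc++.h>
using namespace std;

// Rough sketch only: 'T', 'F', '?' per condition, all values start false,
// and (my assumption, not the problem statement) each step that needs a
// false->true batch costs 1, as does each step that needs a true->false batch.
int minBatches(const vector<string>& conditions) {
    int n = conditions.size();
    if (n == 0) return 0;
    int m = conditions[0].size();

    // Backwards pre-calculation: nxt[j][i] is the next non-'?' condition for
    // value i at or after step j ('?' if there is none).  This replaces the
    // repeated forward scan of the naive O(M*N^2) version.
    vector<string> nxt(n, string(m, '?'));
    for (int j = n - 1; j >= 0; --j)
        for (int i = 0; i < m; ++i) {
            if (conditions[j][i] != '?') nxt[j][i] = conditions[j][i];
            else if (j + 1 < n) nxt[j][i] = nxt[j + 1][i];
        }

    vector<bool> value(m, false);
    int batches = 0;
    for (int j = 0; j < n; ++j) {
        bool needUp = false, needDown = false;  // forced flips at this step
        for (int i = 0; i < m; ++i) {
            if (conditions[j][i] == 'T' && !value[i]) needUp = true;
            if (conditions[j][i] == 'F' && value[i]) needDown = true;
        }
        // Lazy greedy: piggyback 'don't care' values onto a batch that is
        // already happening, aiming at their next concrete requirement.
        for (int i = 0; i < m; ++i) {
            if (conditions[j][i] == 'T') value[i] = true;
            else if (conditions[j][i] == 'F') value[i] = false;
            else if (needUp && nxt[j][i] == 'T') value[i] = true;
            else if (needDown && nxt[j][i] == 'F') value[i] = false;
        }
        batches += (needUp ? 1 : 0) + (needDown ? 1 : 0);
    }
    return batches;
}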

Q2) Calculate the sum of the values within a range which satisfy a criterion.  The criterion is that the value is not one more than a prime number, and that there exists exactly one pair of positive integers summing to that value such that, among all the pairs which multiply to give the product of that first pair, only one has a sum which is not 1 more than a prime.  (Caveat: the value 2 does not satisfy the criterion either.)

A2) Believe it or not, the above description is actually a simplified version of the original description…  With the range being up to 5 million, O(N^(3/2)) is going to be too slow, so the algorithm basically needs to be O(N log N) or better.

A solution which passed in my room seemingly presumes that if the pairing of 1 and n-1 works, it is a distinct pairing, and no other scenario works.  It then creates a vector of the prime decomposition of n-1 and uses a bit mask to create every pair that multiplies to give n-1 (excluding 1 and n-1), and if any of those pairs sums to give something not 1 more than a prime, it fails.  Given the initial assumption, this matches the criterion, because n-1 is not prime by the initial check, so the sum 1 + (n-1) = n is not 1 more than a prime, so we already have 1 pair which is not 1 more than a prime, and finding any more would be bad.  The weird bit however is that this algorithm is not obviously fast enough.  Non-primes are dense, so the initial check doesn’t prune much, and the bit mask loop means O(2^(log N)) operations worst case per number.  So you might be tempted to say this is O(N^2).  However the bit mask loop’s worst case is for powers of 2, which are not dense.  The average number of prime factors is in fact O(log log N), so the bit mask loop averages around 2^O(log log N) = O(log N) iterations, which means that this algorithm does indeed kind of run in O(N log N) time after all (generating the prime factors is also an O(log N) component if you precompute a composites table with smallest prime factor – which can itself be done in O(N log log N) using the sieve).
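A sketch of that room solution, under its assumption that only the 1 and n-1 pairing can be the distinct one (argued in the next paragraph).  The function name, the inclusive range handling, and the explicit exclusion of 2 are my own framing of the simplified description above.

#include <bits/stdc++.h>
using namespace std;

// Sketch of the room solution described above.
long long sumOfGoodNumbers(int lo, int hi) {
    int N = hi;
    // Sieve storing the smallest prime factor of every number up to N,
    // so factorising any n-1 later takes O(log n).
    vector<int> spf(N + 1, 0);
    for (int i = 2; i <= N; ++i)
        if (spf[i] == 0)
            for (long long j = i; j <= N; j += i)
                if (spf[j] == 0) spf[j] = i;
    auto isPrime = [&](long long x) { return x >= 2 && x <= N && spf[x] == x; };

    long long total = 0;
    for (int n = max(lo, 3); n <= hi; ++n) {       // 2 never qualifies
        if (isPrime(n - 1)) continue;              // must not be 1 more than a prime
        // Prime decomposition of n-1 as a flat list (with multiplicity).
        vector<int> fac;
        for (int x = n - 1; x > 1; x /= spf[x]) fac.push_back(spf[x]);
        // Bit mask over the prime factors: each subset gives a divisor a and
        // its complement gives b = (n-1)/a, covering every pair multiplying
        // to n-1.  The trivial pair {1, n-1} (all-zero / all-one mask) is skipped.
        bool ok = true;
        int k = fac.size();
        for (int mask = 1; mask + 1 < (1 << k) && ok; ++mask) {
            long long a = 1;
            for (int bit = 0; bit < k; ++bit)
                if (mask & (1 << bit)) a *= fac[bit];
            long long b = (n - 1) / a;
            if (!isPrime(a + b - 1)) ok = false;   // a second bad pair -> n fails
        }
        if (ok) total += n;
    }
    return total;
}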

What needs more thought is why the assumption that the distinct pair is always 1 and n-1 holds.  Why does 2 and n-2 always fail, for instance?  We know that n-1 is not prime.  2 and n-2 added together is n, so that sum is 1 more than a non-prime.  All we need now is one more example.  1 and 2n-4 sum to 2n-3, which is one more than 2n-4, which is obviously not a prime as it has 2 as a factor.  Presto, it fails.  More generally, consider k and n-k.  The precondition is that n-1 is not a prime, so the direct sum n is 1 more than a non-prime.  1 and kn-k^2 sum to one more than kn-k^2 – which is k(n-k), which is not a prime if k >= 2 – and again it fails.  The only case that could work is k=1.

Q3) Given a large range, determine how many numbers can be recursively mapped using the following function: if x is not an integer, there is no mapping; if x mod W is 0, there is no mapping; otherwise f(x) is x/(x mod W).

A3) The range is indeed huge; there is no way enumeration will work.  First off, every positive number less than W maps straight to 1 (x mod W is x, so f(x) = x/x = 1).  Every value x where x mod W is 1 maps to itself.  The number of these scenarios in the range can easily be calculated.
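Counting just those two easy families might look like the following, in an assumed inclusive range [lo, hi] of positive integers; the naming and range convention are mine, and the harder cases discussed next are not handled.

#include <algorithm>

// Values below W (which map to 1 in one step) plus values >= W with
// x mod W == 1 (which map to themselves).
long long countBaseCases(long long lo, long long hi, long long W) {
    // 1 <= x < W: x mod W is x, so f(x) = x/x = 1.
    long long below = std::max(0LL, std::min(hi, W - 1) - std::max(lo, 1LL) + 1);
    // x = kW + 1 with k >= 1: f(x) = x/1 = x, a fixed point.
    auto upTo = [&](long long x) { return x >= W + 1 ? (x - 1) / W : 0LL; };
    long long selfMaps = upTo(hi) - upTo(lo - 1);
    return below + selfMaps;
}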

Beyond that it is much more complicated.  A value x where x mod W is 2 is mappable if x is even and x/2 is mappable, obviously.  2kW+2 works fully recursively because it maps down to kW+1.  And if W is even, W+2 works, since (W+2)/2 is less than W.  But 3W+2 maps to (3W+2)/2 = W + W/2 + 1, which looks like it won’t work…  So it looks like the factors of W need to be special-cased in some fancy fashion which is non-obvious.  Maybe I’ll look at this more deeply another day – but not today.

GCJ14 R2

As usual round 2 was pretty competitive: to advance required all 4 small inputs and the large inputs of the 2 easiest problems, and even then time was a minor factor…  A t-shirt required the 2 easiest large inputs plus one of the remaining small inputs – the harder one if you didn’t want to have to worry about time.

Q1) Given a set of containers of equal size, a list of item sizes, and a maximum of 2 items per container, determine the minimum number of containers needed to pack all the items.

A1) This can be solved by a greedy approach.  Take the largest item; if there is another item which fits in the container with it, take that too.  If you want, you can choose to take the largest of such other possible items for the ‘most greedy’ approach.  But I am pretty sure you can just take the smallest and it makes no difference.  This latter scenario makes the problem trivial, as you just walk inwards from the two ends of a sorted version of the list of items, advancing the pointers as items are packed and counting containers as you go.  This works well for the large input since it is an O(N) approach (after the O(N log N) of sorting).  But even with 10^4 items the simple O(N^2) implementation of the ‘most greedy’ approach will run in time with a decent compiler.  Or it could be implemented with a sorted dictionary (specifically one with a ‘find value or prior’ method) for O(N log N), but in this case you have to be careful how you handle multiple items of the same size, since they will have the same key.
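A sketch of that two-pointer version.  The function name and the ‘capacity’ parameter for the common container size are just my labels.

#include <bits/stdc++.h>
using namespace std;

// Two-pointer greedy: sort the item sizes, then repeatedly open a container
// for the largest remaining item and add the smallest remaining item
// alongside it if the two fit together.
int minContainers(vector<int> items, int capacity) {
    sort(items.begin(), items.end());
    int lo = 0, hi = (int)items.size() - 1, containers = 0;
    while (lo <= hi) {
        if (lo < hi && items[lo] + items[hi] <= capacity) ++lo;  // pair up
        --hi;           // largest item always consumes a container
        ++containers;
    }
    return containers;
}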

Q2) Determine the minimum number of neighbour swaps needed to rearrange a sequence so that it has a single maximum, that is, it is either all ascending, all descending, or ascending followed by descending.

A2) I had to go remind myself about minimum neighbour swaps to work out how to do this question.  Even the small input looked tough; brute forcing all possible swap locations and orderings was way, way too slow.

So this question boils down to two parts: counting the minimum neighbour swaps to reach a specific permutation, and deciding which valid permutation takes the fewest neighbour swaps to reach.

Minimum neighbour swaps to make a permutation is pretty straightforward.  But it requires knowing things which aren’t exactly obvious.  For instance, bubble sort is optimal in the number of neighbour swaps, so for the small input you can just use bubble sort to sort the items, relabelled by the desired index to move them to.  For the large input you need to know that a merge sort can be used to count the number of swaps a bubble sort would have done, but in batches rather than 1 by 1, allowing it to run in O(N log N) even though the number of swaps is O(N^2).  Some Google searching found me the algorithm, which is not that complicated.  Every time an item in the merge is taken from the top half instead of the bottom half, you add the distance it would have had to move assuming the halves were joined in sequence rather than being merged into a destination array.
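That swap count is just standard inversion counting, so a sketch of it looks like this; for this problem the array passed in would be the desired destination indices as described above.

#include <bits/stdc++.h>
using namespace std;

// Merge-sort inversion counting: the O(N log N) way to count how many
// neighbour swaps a bubble sort would need to sort 'a' ascending.
long long countSwaps(vector<int> a) {
    int n = a.size();
    vector<int> tmp(n);
    // Top-down merge sort; every time an element is taken from the right
    // half ahead of remaining left-half elements it jumps over all of them,
    // and each jump corresponds to one neighbour swap.
    function<long long(int, int)> go = [&](int lo, int hi) -> long long {
        if (hi - lo <= 1) return 0;
        int mid = (lo + hi) / 2;
        long long inv = go(lo, mid) + go(mid, hi);
        int i = lo, j = mid, k = lo;
        while (i < mid || j < hi) {
            if (j == hi || (i < mid && a[i] <= a[j])) tmp[k++] = a[i++];
            else { inv += mid - i; tmp[k++] = a[j++]; }
        }
        copy(tmp.begin() + lo, tmp.begin() + hi, a.begin() + lo);
        return inv;
    };
    return go(0, n);
}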

Deciding the permutation had me stumped for a while.  Obviously the largest value in the input is the pivot point, but which elements should be on its left, and which should be on the right?  In hindsight the small input brute force is obvious here: just try every combination of assigning each value other than the largest to the left or the right.  Once they are labelled, sort the lefts ascending and the rights descending, and that determines the desired permutation to calculate the minimum swaps for.  The large input, on the other hand, is I think far from obvious.  I think it can be done just by using a hill climb: for each value, see if the minimum swap count decreases by switching it to the other side, and repeat until you can’t make it any better.  I can’t prove this however…  This hill climb could be O(N^3 log N) (which is quite slow for N=1000) if switching one value’s side made it better to switch others back, but I think it is much closer to O(N^2 log N) in practice.

Looking at a solution on the other hand shows another option which is much simpler, and is intuitive, but not obvious.  The idea seems to be that you select the smallest value in the sequence and swap it to the closest end.  Lock that one in, and repeat the process.  It can be made to run in O(N log N), although given the large input is still only at most 1000 items, the simpler O(N^2) option is fast enough.
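A rough sketch of that idea in its O(N^2) form, assuming distinct values; the counting here is just a direct simulation of the swaps.

#include <bits/stdc++.h>
using namespace std;

// Repeatedly take the smallest remaining value and bubble it out to
// whichever end of the still-unlocked range is closer, counting the
// neighbour swaps performed.
long long minSwapsToUnimodal(vector<int> a) {
    long long swaps = 0;
    int lo = 0, hi = (int)a.size() - 1;
    while (lo < hi) {
        int best = lo;                       // index of smallest remaining value
        for (int i = lo + 1; i <= hi; ++i)
            if (a[i] < a[best]) best = i;
        if (best - lo <= hi - best) {        // closer to the left end
            swaps += best - lo;
            for (int i = best; i > lo; --i) swap(a[i], a[i - 1]);
            ++lo;                            // lock it in on the left
        } else {                             // closer to the right end
            swaps += hi - best;
            for (int i = best; i < hi; ++i) swap(a[i], a[i + 1]);
            --hi;                            // lock it in on the right
        }
    }
    return swaps;
}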

Q3) Given a grid of squares, determine the maximum flow from the bottom edge to the top edge, where each square has a flow capacity of 1 and flow can pass through its edges, but some cells have 0 flow capacity.  Every bottom-edge square is an input with flow 1, and the flow out of all the top-edge squares is accumulated for the maximum flow.

A3)  So if you have a maximum flow implementation at the ready, the small input here is probably not too crazy.  However, with 100000 nodes in the graph, each node having 4 connections, and a potential maximum flow of only 100, it will need to be the Ford-Fulkerson algorithm with its O(E * max_flow) running time, as anything O(VE) or worse will run way, way too slow.
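If you don’t have one handy, a minimal Ford-Fulkerson (DFS augmenting paths) sketch looks something like the following.  Modelling the per-square capacity of 1 would be done by splitting each open square into an ‘in’ and an ‘out’ node joined by a capacity-1 edge, with a super source feeding the bottom row and a super sink collecting the top row; that grid construction is assumed rather than shown.

#include <bits/stdc++.h>
using namespace std;

// Minimal Ford-Fulkerson with DFS augmenting paths, O(E * max_flow).
struct MaxFlow {
    struct Edge { int to, rev, cap; };
    vector<vector<Edge>> g;
    vector<bool> seen;
    MaxFlow(int n) : g(n), seen(n) {}
    void addEdge(int u, int v, int cap) {
        g[u].push_back({v, (int)g[v].size(), cap});
        g[v].push_back({u, (int)g[u].size() - 1, 0});   // residual edge
    }
    int dfs(int u, int t, int pushed) {
        if (u == t) return pushed;
        seen[u] = true;
        for (auto& e : g[u])
            if (!seen[e.to] && e.cap > 0)
                if (int d = dfs(e.to, t, min(pushed, e.cap))) {
                    e.cap -= d;
                    g[e.to][e.rev].cap += d;
                    return d;
                }
        return 0;
    }
    int maxflow(int s, int t) {
        int flow = 0;
        while (true) {
            fill(seen.begin(), seen.end(), false);
            int d = dfs(s, t, INT_MAX);
            if (d == 0) break;
            flow += d;
        }
        return flow;
    }
};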

The large input however has a grid height of 10^8, which makes the O(W^2 * H) Ford-Fulkerson approach implausible.  There is a limit of 1000 blocked-out areas though, which means there are at most around 2000 different interesting positions in the height.  Even so, O(W^2 * count_interesting(H)) is pretty slow, and determining what the flow graph looks like even for that is tricky.  I am not clear on how horizontal runs of cells can be safely coalesced to give edges with capacity greater than 1, because there is a limit to how fast a given flow can migrate left or right, and so you would have to ensure that everything that was coalesced can make it to the next area in time, rather than just blindly coalescing everything which is horizontally connected.

The bit I think is interesting (but probably not sufficient to solve the problem) is how flows can migrate left or right across a distance.  If there are 4 adjacent flows entering an ‘open’ area, one of them can migrate anywhere after one cell down, as it can flow across all the empty spaces.  Then after the second cell down another can join it, a 3rd can move after 3 cells, and so on.  These would probably be represented with cross links in the graph, with a capacity dependent on the height difference between the current position and the next.  If there are 2 ‘streams’ coming down into an open area, these links would be set up so that flow can’t go in both directions.

Q4) Determine the worst case for the number of elements in the tries created by splitting the input words into k groups, then determine how many ways this worst case can be achieved.

A4)  Small input is pretty easy here: only 4 groups and 8 items gives only 4^8 = 2^16 base scenarios to consider in a pure brute force.  From there, eliminate the ones where any group has no elements, then build the tries and sum the node counts; if the total is worse than the best so far, reset the counter to 1, if it is equal increment the counter, otherwise ignore it.
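A direct version of that brute force might look like this.  Whether the empty (root) prefix counts as a trie node is an assumption on my part about the scoring; the empty-group elimination is as described above.

#include <bits/stdc++.h>
using namespace std;

// Small-input brute force: try every assignment of words to the k groups,
// score each one by total trie size (counted here as distinct prefixes,
// including the empty root prefix), and track the worst total plus how many
// assignments hit it.
pair<long long, long long> worstCaseBruteForce(const vector<string>& words, int k) {
    int n = words.size();
    long long best = -1, ways = 0;
    vector<int> group(n, 0);             // mixed-radix counter over k^n assignments
    while (true) {
        vector<set<string>> prefixes(k);
        vector<bool> used(k, false);
        for (int i = 0; i < n; ++i) {
            used[group[i]] = true;
            for (int len = 0; len <= (int)words[i].size(); ++len)
                prefixes[group[i]].insert(words[i].substr(0, len));
        }
        // Only score assignments where every group received at least one word.
        if (find(used.begin(), used.end(), false) == used.end()) {
            long long total = 0;
            for (auto& p : prefixes) total += p.size();
            if (total > best) { best = total; ways = 1; }
            else if (total == best) ++ways;
        }
        // Advance the counter; stop once every position has wrapped around.
        int pos = 0;
        while (pos < n && ++group[pos] == k) group[pos++] = 0;
        if (pos == n) break;
    }
    return {best, ways};
}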

The first part of this question is actually quite easy: for each distinct prefix, count how many of the words it appears in, take the smaller of that and the number of servers, and sum all those values – that is the worst case.  Basically this just states that any given prefix can always be distributed amongst the servers so that it is potentially counted a number of times up to the number of servers.
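In code form, that worst-case total is just the following (again assuming the root prefix counts as a node, and using the prefixes of each word as the trie nodes).

#include <bits/stdc++.h>
using namespace std;

// Each distinct prefix can be counted once per server it ends up on, and at
// worst it is spread over min(number of words carrying it, number of
// servers) of them.
long long worstCaseTotal(const vector<string>& words, int k) {
    map<string, int> prefixCount;        // prefix -> number of words with that prefix
    for (const auto& w : words)
        for (int len = 0; len <= (int)w.size(); ++len)
            ++prefixCount[w.substr(0, len)];
    long long total = 0;
    for (const auto& kv : prefixCount)
        total += min(kv.second, k);
    return total;
}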

The number of ways it can be done is a much more difficult problem.  How many ways a given prefix can be distributed amongst the servers so that it is spread out as much as possible is a fairly simple combinatorics question, but you can’t just multiply these together, because they are correlated: the shorter prefixes have to appear on the servers of their corresponding longer prefixes.

Trying to reverse engineer the logic from reading a solution is beyond me right now, so I think I will leave this question for another day…