A couple of little mistakes on my part cost me several hundred places on the final scoreboard, but I don’t think I was ever going to get a t-shirt this round. A really fast first problem was probably enough for a t-shirt, with the second problem solved as well to make it safe. Solving those two decently fast was needed to advance; solving all three was only done by a few.
Q1) Given N steps of M conditions (true, false or don’t care), determine the minimum number of batches in which true values have to be changed to false, or vice versa, to get past all N steps in order, starting from M values which are all false.
A1) This problem boils down to realizing that it can be solved using a lazy greedy. If it weren’t for the ‘don’t care’ states, the problem would be trivial, as every change is forced and you just don’t make any unneeded ones. The lazy greedy approach is to pretend a ‘don’t care’ means the condition of the next step where that condition is something other than ‘don’t care’ – but only if you were already going to make a change from true to false, or vice versa, for other reasons. This works because either the value gets changed for free before you reach that later step, or you avoid incurring the cost of doing it early when there was going to be a batch at that final step anyway.
So the simple way to do this is an O(M*N^2) algorithm: at each step, determine what has to be done to satisfy the true and false conditions, and also perform any change of state for a ‘don’t care’ condition whose future desired value can be folded into a batch which is already going to be performed, scanning ahead each time you see a question mark. I wrote an O(NM) algorithm by pre-calculating the results of the scan-ahead, working backwards through the steps. This was my first mistake: I missed a line of code in my pre-calculation loop, so it contained garbage data, and I wasted a bunch of time debugging. The second mistake was that I wrote a conditional of the form if (A and !B) B = true – which could have just been if (A) B = true. I then copy-pasted that conditional and changed it to B = false, but didn’t change !B to B. I had to take a time penalty to resubmit, because I only thought to double check my copy-pastes after I pushed submit…
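For what it’s worth, the backward pre-calculation I botched can be sketched like this (a hypothetical Python sketch – the ‘T’/‘F’/‘?’ encoding and the names are mine, not the contest’s):

```python
def next_concrete(steps):
    """For each step i and condition j, the next non-'?' requirement
    at or after step i in column j (None if there is none).

    Computed in O(N*M) by a single backward pass over the steps,
    replacing the per-'?' forward scan of the O(M*N^2) version."""
    n, m = len(steps), len(steps[0])
    nxt = [[None] * m for _ in range(n)]
    future = [None] * m          # next concrete value seen so far, per column
    for i in range(n - 1, -1, -1):
        for j in range(m):
            if steps[i][j] != '?':
                future[j] = steps[i][j]   # the line I originally missed
            nxt[i][j] = future[j]
    return nxt
```

With this table in hand, deciding whether a ‘don’t care’ can ride along with an already-planned batch is a constant-time lookup instead of a scan.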
Q2) Calculate the sum of the values in a range which satisfy a criterion. The criterion is that the value is not one more than a prime number, and that there exists exactly one pair of positive integers summing to that value such that, of all the pairs that multiply to give the product of that pair, exactly one has a sum which is not 1 more than a prime. (Caveat: the value 2 does not satisfy the criterion either.)
A2) Believe it or not, the above description is actually a simplified version of the original… With the range going up to 5 million, O(N^(3/2)) is going to be too slow, so the algorithm basically needs to be O(N log N) or better.
A solution which passed in my room seemingly presumes that if the pairing of 1 and n-1 works, it is the distinct pairing, and that no other pairing ever works. It creates a vector of the prime decomposition of n-1 and uses a bit mask to generate every pair that multiplies to give n-1 (excluding 1 and n-1); if any of those pairs sums to something that is not 1 more than a prime, n fails. Given the initial assumption, this matches the condition: n-1 is not prime by the initial check, so 1 + (n-1) - 1 is not prime, meaning we already have one pair whose sum is not 1 more than a prime – finding any more would be bad. The weird bit, however, is that this algorithm is not obviously fast enough. Non-primes are dense, so the initial check doesn’t prune much, and the bit mask loop means O(2^(log N)) operations worst case per number, so you might be tempted to call this O(N^2). However, the bit mask loop’s worst case occurs for powers of 2, which are not dense. The average number of prime factors is in fact O(log log N), which means this algorithm does indeed kind of run in O(N log N) time after all. (Generating the prime factors is also an O(log N) component if you precompute a composites table with smallest prime factor – which can itself be built in O(N log log N) using the sieve.)
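The two pieces of machinery involved – the smallest-prime-factor sieve and the bit mask enumeration of factor pairs – could look like this (a Python sketch with my own naming; unlike the contest solution I keep the 1 and n-1 pair, for clarity):

```python
def spf_sieve(limit):
    """Smallest prime factor of every number up to limit, via a sieve."""
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                      # p is prime
            for q in range(p * p, limit + 1, p):
                if spf[q] == q:
                    spf[q] = p
    return spf

def prime_factors(x, spf):
    """Prime decomposition of x (with multiplicity) in O(log x) lookups."""
    fs = []
    while x > 1:
        fs.append(spf[x])
        x //= spf[x]
    return fs

def factor_pairs(x, spf):
    """Every unordered pair (a, b) with a*b == x, generated by a bit
    mask over the multiset of prime factors; duplicates filtered."""
    fs = prime_factors(x, spf)
    pairs = set()
    for mask in range(1 << len(fs)):
        a = 1
        for i, p in enumerate(fs):
            if mask >> i & 1:
                a *= p
        pairs.add((min(a, x // a), max(a, x // a)))
    return sorted(pairs)
```

The bit mask loop is 2^(number of prime factors) iterations, which is where the power-of-2 worst case and the O(log log N) average come from.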
What needs more thought is why the distinct pair is always 1 and n-1. Why does 2 and n-2 always fail, for instance? We know that n-1 is not prime. 2 and n-2 added together give n, which is 1 more than the non-prime n-1. All we need now is one more bad pair. 1 and 2n-4 sum to 2n-3, which is one more than 2n-4 – obviously not a prime, as it has 2 as a factor. Presto, it fails. More generally, take k and n-k. The precondition is that n-1 is not prime, so the direct sum n is 1 more than a non-prime. 1 and kn-k^2 sum to one more than kn-k^2 – which is k(n-k), not a prime if k >= 2 – and again it fails. The only case that could work is k=1.
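The argument can be brute-force checked for small n. In this hypothetical sketch, working_splits lists every k for which the pair (k, n-k) satisfies the “exactly one sum not 1 more than a prime” condition:

```python
def is_prime(x):
    """Trial-division primality check, fine for small brute forcing."""
    if x < 2:
        return False
    d = 2
    while d * d <= x:
        if x % d == 0:
            return False
        d += 1
    return True

def bad_sums(product):
    """Count divisor pairs (a, b) of product whose sum a+b is NOT
    1 more than a prime."""
    count = 0
    a = 1
    while a * a <= product:
        if product % a == 0 and not is_prime(a + product // a - 1):
            count += 1
        a += 1
    return count

def working_splits(n):
    """All k with 1 <= k <= n-k for which the pair (k, n-k) has
    exactly one 'bad' divisor-pair sum."""
    return [k for k in range(1, n // 2 + 1) if bad_sums(k * (n - k)) == 1]
```

Running this over every n up to a few hundred with n-1 composite, only k=1 ever appears, matching the argument above.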
Q3) Given a large range, determine how many numbers can be recursively mapped using the following function: if x is not an integer, there is no mapping; if x mod W is 0, there is no mapping; otherwise f(x) is x/(x mod W).
A3) The range is indeed huge – no way enumerating will work. First off, every positive number less than W maps recursively to 1 (x mod W is x itself, so f(x) = x/x = 1). Every value x where x mod W is 1 maps to itself (f(x) = x/1 = x). The number of these scenarios in the range can easily be calculated.
Beyond that it is much more complicated. An x where x mod W is 2 is mappable if x is even and x/2 is mappable, obviously. 2kW+2 works fully recursively because it maps down to kW+1. But if W is even, W+2 works too, while 3W+2 maps to (3W+2)/2 = W + W/2 + 1, which looks like it won’t work… So it looks like the factors of W need to be special-cased in some fancy fashion which is non-obvious. Maybe I’ll look at this more deeply another day – but not today.
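A small brute-force chase of the map makes these cases easy to poke at (a Python sketch under my reading of the statement – a value is ‘mappable’ if repeatedly applying f reaches a fixed point where x mod W is 1):

```python
def chase(x, w):
    """Follow f(x) = x / (x mod W) until the chain either reaches a
    fixed point (x mod W == 1) or dies; return the fixed point, or
    None if the mapping fails at some step."""
    while True:
        r = x % w
        if r == 0:
            return None          # x mod W is 0: no mapping
        if r == 1:
            return x             # f(x) = x/1 = x: fixed point
        if x % r:
            return None          # x/r is not an integer: no mapping
        x //= r                  # x strictly shrinks, so this terminates
```

For W = 5 this confirms the observations above: chase(3, 5) gives 1, and chase(12, 5) – that is 2kW+2 with k=1 – gives kW+1 = 6. For an even W = 4, chase(6, 4) (W+2) succeeds while chase(14, 4) (3W+2) dies.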