GCJ15 R3

Last round before the finals, only 25 to advance.  Looks like there was 1 Australian in the top 25, which is an improvement I think…

5 questions in only 2.5 hours was a tough ask. No one got a perfect score, but the top 4 solved all but the last question.

I think I could have solved the first 2 and the small of the 4th on a really good day had I been competing, which would have put me somewhere around 120th, which I would consider a personal best.  More likely I would have come ~250th solving just the first problem.  The small of the 3rd isn’t too bad either, but I doubt even on my best day I would write solutions for 4 round 3 problems correctly, even if 2 of them were just smalls.

Q1) Given a rooted tree where each node has a value, determine the minimum number of nodes to remove from the tree to ensure the difference between the minimum and maximum remaining values is no more than D.  When removing a node, you must also remove its entire subtree.

A1) So I found this question to fall into my personal skill set.  If you sort the nodes by value, you can walk from smallest to largest, and then again in reverse, to determine which nodes have to be removed in order to enforce any specific minimum or maximum value.  These passes are both O(N) assuming you keep a hashtable or boolean array lookup of which nodes you have already removed.
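
As a concrete illustration of the removal rule behind those passes, here is a minimal sketch (my own code, not taken from any contest solution) of counting how many nodes must go to enforce a single maximum allowed value.  It recomputes from scratch for one cap, whereas the passes described above process the sorted candidate values incrementally; countRemovals, values and children are made-up names.

#include <vector>
using namespace std;

// Count how many nodes must be removed so every remaining value is <= cap.
// A node whose value exceeds cap is removed together with its entire subtree.
// children[v] lists the children of node v; values[v] is node v's value.
int countRemovals(int root, long long cap,
                  const vector<long long>& values,
                  const vector<vector<int>>& children) {
  int removed = 0;
  vector<pair<int, bool>> todo = {{root, false}};  // (node, ancestor already cut?)
  while (!todo.empty()) {
    auto [node, forced] = todo.back();
    todo.pop_back();
    bool cut = forced || values[node] > cap;
    if (cut) removed++;
    for (int c : children[node]) todo.push_back({c, cut});
  }
  return removed;
}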

The problem is that for any given combination of minimum and maximum that comes closest to D (which can be found in linear time by walking up either the minimum or the maximum, depending on whether the current range is less than or equal to D or not), the two sets of removed nodes might overlap, and so you would be double counting if you just added the two numbers together.

To do the large in time, we need to resolve this double counting without slowing things down.  The trick here is that when constructing the nodes to be removed in the first two passes, don’t just keep the counts, keep the actual lists of nodes removed.  This way, when walking up the maximum and ‘undoing’ a batch of nodes that had been removed, you keep linear time because you can just ‘replay’ that list of nodes.  As you replay adding and removing nodes, you check the boolean lookups for minimum and maximum: if the node you are removing no longer exists in either list, you decrement the count; if it newly exists in only the minimum list, you increment the count; otherwise the count stays the same.  Then whenever the current range is less than or equal to D, check whether you have a new minimum removal count.  Note that while you start the loop having removed everything, you will at some point reach the value of the root, and since a single node trivially has 0 range, you will never return ‘remove everything’ as the result.

Q2) Given a sequence of numbers which are the running sums of length K taken from an original sequence of length N, determine the minimal possible difference between the minimum and maximum of the original sequence, assuming the original sequence consisted entirely of integers.

A2) I tried my hand at this for a while, but made a major mistake in thinking early on which left me stumped.  The difference between consecutive sums gives you the difference between two values K apart.  The difference between the next two sums does the same.  Combining those two formulas doesn’t seem to let you derive anything, and I mistakenly assumed that meant combining formulas wasn’t the way forward.

The trick is to see that if you have a difference which tells you the relationship between position 0 and K, there is another which tells you the relationship between K and 2K.  Hence you know the relationship between 0 and 2K.  Thus the problem can be reduced to K pairs (if N >= 2K; otherwise the problem gets some slightly different corner cases I’ll discuss later).  Each pair consists of the relative maximum and relative minimum: how high or low that chain of the sequence will go assuming its first value was 0.  Now the question becomes how to align these so that they have minimal range, while their first values are integers and add up to give the first sum.
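
Here is a minimal sketch of how those pairs can be built from the window sums (my own code and names, not a contest solution).  A residue whose chain has only one position just comes out as a (0, 0) pair, matching the ‘free’ values mentioned below.

#include <vector>
#include <algorithm>
using namespace std;

// windowSums[i] = a[i] + a[i+1] + ... + a[i+K-1], so
// windowSums[i+1] - windowSums[i] = a[i+K] - a[i].
// For each residue r (mod K), walk positions r, r+K, r+2K, ... assuming a[r] = 0
// and record how low and how high that chain goes relative to its start.
vector<pair<long long, long long>> chainRanges(const vector<long long>& windowSums,
                                               int N, int K) {
  vector<pair<long long, long long>> ranges;  // (relative min, relative max) per residue
  for (int r = 0; r < K; r++) {
    long long cur = 0, lo = 0, hi = 0;
    for (int i = r; i + K < N; i += K) {
      cur += windowSums[i + 1] - windowSums[i];  // = a[i+K] - a[i]
      lo = min(lo, cur);
      hi = max(hi, cur);
    }
    ranges.push_back({lo, hi});
  }
  return ranges;
}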

So extend these pairs to triples by adding 0, the imaginary starting value.  Then align them to have the same maximum, which could be 0, or the first maximum, it doesn’t matter.  The sum of the starting values will now be X.  Calculate (X – first_sum)/K using integer division and subtract that from all values of each triple.  Now you need to reduce the sum of starting values by (X – first_sum) % K.  Sort the triples by their minimums in descending order.  The aim is to avoid moving any of them past the last one.  If we can do that, the result is the range of the last one; otherwise it is the range of the last one + 1.  This can be done with a simple greedy strategy: shift each one down to match the last minimum until you either reach the required mod value, or you don’t.  If you do, success; if not, add one to your return value.

If N < 2K there is a slight difference.  You get fewer than K pairs to start with, so effectively you have some ‘free’ starting values.  These can be represented as pairs with 0 maximum and 0 minimum.  They make it a lot easier to avoid having to add 1, but they don’t eliminate it.  To see this clearly, consider the ultimate degenerate case N = K.  Here you only have 0,0 pairs.  But if first_sum % K != 0, you can’t just give all the values an equal starting value, and hence the result will sometimes be 1.

For this problem the small input really wasn’t much simpler than the large, and it showed: only 5% of those who solved the small failed to solve the large.

Q3) Given a set of points on a line which all run away from you at their own individual speeds, and given that you move at speed Y, what is the soonest you can visit all the points?

A3) First a trivial simplification: you don’t care about points which you have already visited, so running away from you can be treated as running away from the starting point.  If all the points were on one side, the problem would be trivial, you just run in that direction until your last intercept time, determined by the speed differences and starting positions.
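
As a tiny sketch of that one-sided case (my own illustration, assuming points at positive positions pos[i] fleeing right at speeds v[i] < Y from a chaser starting at 0):

#include <vector>
#include <algorithm>
using namespace std;

// All points start to the right of the origin and flee right at speed v[i] < Y.
// Running right forever, point i is caught at time pos[i] / (Y - v[i]);
// the answer is simply the latest of those intercept times.
double oneSidedTime(const vector<double>& pos, const vector<double>& v, double Y) {
  double best = 0.0;
  for (size_t i = 0; i < pos.size(); i++)
    best = max(best, pos[i] / (Y - v[i]));
  return best;
}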

The small data set is 25 points.  That gives 25! orderings if you ignore that you might visit something on the way to somewhere else.  This is too large, and even the basic dynamic programming solution is 2^25 * 25 * 25 (cheapest way to visit subset S ending at point T, where each state has to consider continuing on to any point not yet visited).  If you happen to have a small supercomputer, this might be fast enough if you parallelize the test cases and optimize well, but otherwise it is going to be too slow.  Ignoring whether you reach something on the way to something else is okay, because you can guarantee that ignoring it only results in a worse time than visiting things in the right order, but it is expensive.

Looking at a solution which solved just the small, you can take advantage of the fact that you do visit points on the way to each other.  You cannot possibly turn around more than 25 times.  At any given point in time you will either go left or right and visit the first unvisited point you catch up to in that direction.  This gives you a brute force over 2^25 direction sequences, with each step in the brute force having a search cost of 25 to determine the earliest unvisited left/right catch-up.  Overall cost O(N * 2^N), which is okay if you have a fast machine…

Looking at a solution for the large, it’s a DP on having caught the fastest i points going left, the fastest j points going right, and currently going to catch point k.  First set every value in the DP to infinite time.  Then it’s initialized with the time to reach each point having not previously caught anything (i.e. starting from 0), which seems like you are ignoring visiting things sooner vs later, but that is not the case.  From each state you consider catching the next fastest left or right point.  If your current position implies you have already gone past that point, you can improve the time for that state to the current time, assuming you haven’t already found a better time to get there.  If not, you can calculate a new intercept time, since given the time of the current state and which point you just caught, you know where you are.  If that gives an earlier arrival time for the new state, update it too.

It all seems clean and simple.  The question is why sorting them from fastest to slowest works, even though you might catch a slower one on the way to a faster one.  I think the proof of that is somewhat involved, maybe the official contest analysis will have details… It does seem somewhat intuitive though: fast ones are hard to catch, so you should focus on them first.

Q4) Given the values and frequencies of the subset sums of a set (the sum of the elements of each subset in its power set), determine the original set that created the power set.  If that is ambiguous, determine the ‘smallest’ original set.

A4) The small and the large here are two very different problems, which is shown by the vastly different solve rates.

In the small problem there are up to 20 values in the original set, but they are all non-negative.  This helps a lot.  You can determine how many 0’s there are by taking log base 2 of the count of 0 sums.  The rest of the frequencies can then be divided by that count of 0 sums.  The smallest non-zero value is then in the original set, and its frequency is how many times it occurs.  Anything less than double this value is also in the original set.  As you add more and more values, you can calculate how frequently the sums they generate should occur.  Basically you get a loop: what is the smallest value whose frequency isn’t fully accounted for?  Add that to the set, and for each existing result and its frequency, also add result + value with the new frequency.  The loop runs at most 20 times, and each time it runs its cost is proportional to the number of different sums, which the problem caps at 10k.
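
Here is a minimal sketch of that loop as I understand it (my own code and names, not a contest solution, and it assumes the input is consistent as the problem guarantees).  freq maps each subset-sum value to how many subsets produce it.

#include <map>
#include <vector>
using namespace std;

// freq: subset-sum value -> how many subsets produce it (all set members >= 0).
vector<long long> recoverSet(map<long long, long long> freq) {
  vector<long long> result;
  // freq[0] is 2^(number of zeros); strip that factor from every count.
  long long zeroSubsets = freq[0];
  int zeros = 0;
  while ((1LL << (zeros + 1)) <= zeroSubsets) zeros++;
  for (auto& kv : freq) kv.second /= zeroSubsets;
  for (int i = 0; i < zeros; i++) result.push_back(0);

  map<long long, long long> made;  // sums explained by the values found so far
  made[0] = 1;                     // the empty subset
  while (true) {
    long long next = -1;
    for (auto& kv : freq)
      if (kv.second > made[kv.first]) { next = kv.first; break; }
    if (next < 0) break;           // every frequency is accounted for
    result.push_back(next);
    // Every already-explained sum can now also include one copy of `next`.
    map<long long, long long> updated = made;
    for (auto& kv : made) updated[kv.first + next] += kv.second;
    made = updated;
  }
  return result;
}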

The large introduces negative numbers, which complicate things enormously.  You no longer know how many 0’s there are by looking at the frequency of the 0 sum, because values might add together to give 0.  Starting from the smallest sum doesn’t work, it’s the sum of all of the negative values.  Starting from the sum closest to zero doesn’t work either, since it could be a mix of positive and negative values…

Looking at a large solution: first, you can easily tell how many numbers you are trying to find, just add up the frequencies and take log base 2.  It turns out you can also work out how many 0’s there are, just take log base 2 of the frequency of the smallest sum (not sure why I didn’t think of that…).  Assuming now that there are no 0’s, you can determine one value by taking the difference between the two smallest sums (also obvious in hindsight…).  Then you just have to update the frequencies for the removal of that value and repeat the process.  You might think you are done here having generated N numbers this way… but each of those differences is either a positive number in the set or the absolute value of a negative number in the set.  So generate all the possible sums (using a set to avoid duplicates, since 2^60 is large but we know the distinct sums are much smaller), and while each number removed is on a sum path to the smallest sum, consider those as negatives; the rest are positive.  Consider the numbers in the reverse order of discovery: since the smallest absolute values are found first and the question asks for the ‘smallest’ set under ambiguity, large negative values are good for satisfying that criterion.

Q5) Given a small section of an infinite sequence of values, determine whether it can be the sum of some number of streams of 0s and 1s, where each stream’s 0-to-1 transitions happen every 2^K steps for some K between 1 and D (each stream can have a different transition frequency, there is no restriction on starting offset, and there are also potentially streams which never change and are always 1).  If it can, determine the minimum number of changing streams.

A5) Getting late, I’ll have to look at this another time.

GCJ15 Distributed Practice Round

So, I think everyone who qualified for round 3 was eligible to do the practice round.  202 positive scores, 14 perfect scores.  I hope everyone who intends to participate in the distributed online round had a go in the practice round, as the distributed format is new and quite different – I think people who spent a good long time on the practice round will have a big advantage.

Even with 50 hours to think things over, and a test-run feature which lets you check whether your solution is too slow on large inputs of your own design, pass rates for the large inputs were quite low.  2 of the questions had less than 50% pass rates.

The 14 perfect scores were 12 C++ and 2 Java.  No Python.  Small sample size, but given the ratio of language use in the group that advanced to round 3, its absence is slightly suspicious.

Q1) Find the largest combined prefix + postfix sum for a list of numbers, where the prefix and postfix are not allowed to overlap.  5×10^8 members in the list.

A1) On a single machine this problem boils down to finding the largest drop.  Once you’ve found the largest drop (by calculating the running sum and keeping track of the previous maximum vs the current value), the answer is the sum of all numbers minus the largest drop (which will be negative, giving you a bigger total).
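
A minimal single-machine sketch of that (my own names, ignoring the distributed API entirely):

#include <vector>
#include <algorithm>
using namespace std;

// Best prefix + postfix sum = total - (most negative contiguous middle segment).
// Track the running prefix sum and the best prefix seen so far; the candidate
// drop at each step is (current prefix) - (best earlier prefix).
long long bestPrefixPlusPostfix(const vector<long long>& a) {
  long long prefix = 0, maxPrefix = 0, drop = 0, total = 0;
  for (long long x : a) {
    prefix += x;
    total += x;
    drop = min(drop, prefix - maxPrefix);  // drop <= 0; an empty middle is allowed
    maxPrefix = max(maxPrefix, prefix);
  }
  return total - drop;
}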

Splitting this across multiple machines isn’t too difficult.  Give each machine a subsequence of approximately equal length and have it calculate the minimum and maximum of its running sums, the ‘local’ biggest drop, and the total.  These can all be calculated in a single pass of O(N) time.  Send all these values back to the primary machine and exit.

The global biggest drop is either the largest of the local biggest drops, or it is the difference between one of the maxima and a subsequent minimum.  First the maxima and minima need to be corrected since they are local, so add the running total of the previous chunks’ totals to each local minimum and maximum.  Then calculate the biggest drop, and finally subtract that from the overall total of totals.  These steps are O(Nodes) so will easily run in time.  Remember to use 64 bit integers everywhere, as the numbers can sum to 5*10^17.
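
A sketch of that central combining step, with my own field names (the per-chunk values are assumed to be taken over the chunk’s running sums, relative to the start of the chunk):

#include <vector>
#include <algorithm>
using namespace std;

struct ChunkSummary {
  long long minPrefix;   // minimum running sum within the chunk (relative to its start)
  long long maxPrefix;   // maximum running sum within the chunk
  long long localDrop;   // most negative drop entirely within the chunk (<= 0)
  long long total;       // sum of the chunk
};

long long combine(const vector<ChunkSummary>& chunks) {
  long long offset = 0;    // sum of all earlier chunks
  long long bestMax = 0;   // best global running sum seen so far (empty prefix = 0)
  long long drop = 0;      // most negative middle segment found so far
  for (const ChunkSummary& c : chunks) {
    drop = min(drop, offset + c.minPrefix - bestMax);  // peak earlier, trough in this chunk
    drop = min(drop, c.localDrop);                     // peak and trough both in this chunk
    bestMax = max(bestMax, offset + c.maxPrefix);
    offset += c.total;
  }
  return offset - drop;    // offset is now the grand total
}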

Q2) Calculate who, if anyone, has a majority in a vote.  10^9 voters and potentially 10^9 candidates.

A2) So I have 2 approaches for this problem, which have different scaling functions, but are surprisingly similar in run time for the problem size.

The ‘cool’ solution is to count the frequency of each of the 32 bits in the candidate vote identifier over all votes.  If there is a majority, the set bits of the majority candidate will each appear in over half the votes, and the unset bits will not.  So simply send the totals to the central server, accumulate them, and compare each to half the number of votes.  This lets you create the bit pattern of the majority candidate, if there is a majority.  Send this candidate back out to everyone, who then count the frequency of exactly that candidate.  Send these new counts to the central server, accumulate, and check if it’s a majority.
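
A single-machine sketch of the bit trick (my own names; the real thing splits the first counting loop across machines and ships the 32 counters back through the contest’s message API, which I won’t try to reproduce here):

#include <cstdint>
#include <vector>
using namespace std;

// If some candidate has a strict majority, then every bit set in its id is set
// in more than half of all votes, and every unset bit is not.  Reconstruct the
// only possible majority id from the bit counts, then verify it with a recount.
uint32_t findMajority(const vector<uint32_t>& votes, bool& hasMajority) {
  int64_t n = votes.size();
  vector<int64_t> bitCount(32, 0);
  for (uint32_t v : votes)
    for (int b = 0; b < 32; b++)
      if ((v >> b) & 1) bitCount[b]++;
  uint32_t candidate = 0;
  for (int b = 0; b < 32; b++)
    if (2 * bitCount[b] > n) candidate |= (1u << b);
  int64_t count = 0;
  for (uint32_t v : votes)
    if (v == candidate) count++;
  hasMajority = (2 * count > n);
  return candidate;
}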

This solution is linear, but requires two passes.  Given how long it takes to call GetVote, you will lose 500ms of your time just doing that if you don’t store the values, and if you do store the values you should be careful you don’t run low on memory, but it will save you 250ms…  Each value also needs 32 bit-extracts performed to potentially increment the 32 counters.

The alternative solution is to sort the segments you assign to each server.  This is O(N log N), but with at most 10 million entries per server, the log N is only a factor of about 24.  You must be sure your sort is in place, because otherwise you will run out of memory.  Having sorted them, you take every entry which got more than 20% of the vote (easily found now that they are sorted).  There are at most 4 of these; send their identifiers to the central server.  Also send the size of the threshold you used to select them, which is 20% of the number of votes assigned to this node.
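
A sketch of that local phase (again with my own names):

#include <cstdint>
#include <vector>
#include <algorithm>
using namespace std;

// Local phase on one machine: sort this machine's share of the votes and report
// every candidate holding strictly more than 20% of the chunk (at most 4 can).
vector<pair<uint32_t, int64_t>> heavyHitters(vector<uint32_t>& chunk) {
  sort(chunk.begin(), chunk.end());           // in place, to stay within memory
  int64_t threshold = (int64_t)chunk.size() / 5;
  vector<pair<uint32_t, int64_t>> heavy;      // (candidate id, local count)
  for (size_t i = 0; i < chunk.size(); ) {
    size_t j = i;
    while (j < chunk.size() && chunk[j] == chunk[i]) j++;
    if ((int64_t)(j - i) > threshold) heavy.push_back({chunk[i], (int64_t)(j - i)});
    i = j;
  }
  return heavy;                               // sent to the central server along with `threshold`
}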

The central server now has up to 400 candidates, which it can easily total and sort.  Any total which, when added to the sum of the thresholds sent back (which should be close to 20% of the overall vote, but might be slightly off due to rounding), reaches a majority is a candidate.  There are at most 3 of these, as each has to have at least 30% of the vote.  Send these back and do a second pass counting totals for each of the 3.  Finally the majority holder either appears, or does not.

This solution has the advantage that rather than doing 32 operations per entry to start and 1 more to find the majority candidate total, it does about 24 (for the sort), 1 to count run lengths and then up to 3 more for the totals.  In practice there is more overhead: sorting operations are compare-and-swap, whereas bit extract and increment is cheaper, and the memory locality of the operations is also much better in the first solution.  It also has the advantage of potentially not needing the second pass.  If you get a straight majority, or nobody gets to 30%, the second phase can be skipped.  This bonus has an interesting correlation: it is more difficult to construct a fast-running input which has slow sorting time and also has 30% of the votes going to one person.

Q3) Find the shortest distance between two specific nodes in a circular doubly linked list.  10^9 nodes.

A3) So the single machine version is: just walk left from the start until you find the other target, compare the number of steps to the total number of nodes, and you are done.  The distributed version is a bit more awkward… you need to break the input set into sections, but given the nodes are in random order, how do you do such a thing?

One option is to select evenly spaced node ids and call your section done whenever you see another node whose id is x mod k.  This is however fragile: it’s not hard to construct inputs which mean that, regardless of your choice of x and k, one of your machines is going to do a lot more work than the rest.

The solution is to use random nodes instead, which keeps you safer.  To be extra safe, choose 10 random nodes per server; that way if one of its segments is double length, that only adds 10% extra running time.  However, you now have to compare every value you get while walking against 1k potential terminals, which is way too slow.  One option is to sort the selected nodes and binary search, which has the potential to run fast enough.  Another option is to not use truly randomly selected nodes.  Instead divide the node ids into groups of k, and for each group select a random terminal position.  Then you can check whether you are at a terminal by checking node id mod k against random_nodes[node id / k] mod k.

Potential little trick for avoiding overhead: assuming each machine runs the same version of libc, rather than communicating the random node selection from the central server to each server, use a hard-coded seed to rand and calculate the selection on each server.
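
A sketch of that combination, the grouped terminals plus the shared seed (names are mine; the real thing would read node ids through the problem’s API, which I’m not reproducing):

#include <cstdlib>
#include <vector>
using namespace std;

// Every machine seeds rand() identically, so they all agree on which node of
// each group of k is a terminal without any extra communication.
struct Terminals {
  long long k;                // group size
  vector<long long> pick;     // pick[g] = chosen offset within group g

  Terminals(long long numNodes, long long groupSize) : k(groupSize) {
    srand(12345);             // same hard-coded seed on every machine
    long long groups = (numNodes + k - 1) / k;
    pick.resize(groups);
    for (long long g = 0; g < groups; g++) pick[g] = rand() % k;
  }

  // O(1) check while walking the linked list.
  bool isTerminal(long long id) const { return id % k == pick[id / k]; }
};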

In any case, each machine returns the length of its segment, its start and stop nodes, and whether it saw either or both of the two query nodes.  The central server can then easily and quickly stitch the segments together and compute the distance from the start node to the end node.

Q4) Determine whether it is possible to partition a set of numbers into two halves of equal sum.  52 numbers, each of which has a huge range of possible values.

A4) Unlike all the other questions, this doesn’t bombard us with lots of data.  But the question is NP-complete, so it doesn’t really have to.  Wikipedia (see the subset-sum problem) tells us that it can be solved in O(2^(N/2)) time, which is actually almost fast enough for a single machine, but it would use more than the allotted memory limit.
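
For reference, a minimal meet-in-the-middle sketch of that O(2^(N/2)) subset-sum check (my own code, single machine).  An equal-sum partition exists iff the total is even and some subset sums to half of it; with 52 numbers, storing one half’s 2^26 sums is exactly the memory pressure mentioned above.

#include <vector>
#include <algorithm>
using namespace std;

// Meet in the middle: can some subset of `a` sum to `target`?
// Enumerate all subset sums of each half (2^(N/2) each), sort one side, then
// binary search it for target minus each sum from the other side.
bool subsetSumExists(const vector<long long>& a, long long target) {
  int n = a.size(), h = n / 2;
  vector<long long> left, right;
  for (int mask = 0; mask < (1 << h); mask++) {
    long long s = 0;
    for (int i = 0; i < h; i++) if ((mask >> i) & 1) s += a[i];
    left.push_back(s);
  }
  for (int mask = 0; mask < (1 << (n - h)); mask++) {
    long long s = 0;
    for (int i = 0; i < n - h; i++) if ((mask >> i) & 1) s += a[h + i];
    right.push_back(s);
  }
  sort(right.begin(), right.end());
  for (long long s : left)
    if (binary_search(right.begin(), right.end(), target - s)) return true;
  return false;
}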

The simple fix is to use 64 of the 100 machines, and tell each to ‘fix’ 6 of the numbers as either left or right, then run subset sum on the remaining numbers with an adjusted target based on the ‘fixed’ values.  This saves a factor of 8 in memory usage and running time, which is just about enough.  Each machine returns whether it finds a valid partition, and the central server thus has the answer.

This only has Sqrt(Nodes) scaling, unlike the other problems which each have linear scaling – but getting linear scaling for this problem seems likely to be quite a bit more difficult…