Last round before the finals, only 25 to advance. Looks like 1 Australian in the top 25 which is an improvement I think…

5 questions in only 2.5 hours was a tough ask. No one got a perfect score, but the top 4 solved all but the last question.

I think I could have solved the first 2 and the small of the 4th on a really good day had I been competing, which would have put me in the 120th range, which I would consider a personal best. More likely I would have come ~250th, solving just the first problem. The small of the 3rd isn’t too bad either, but I doubt even on my best day I would write correct solutions for 4 round 3 problems, even if 2 of them were just smalls.

**Q1) Given a rooted tree where each node has a value, determine the minimum number of nodes to remove from the tree to ensure the difference between the minimum and maximum values of the remaining nodes is no more than D. When removing a node, you must remove all of its child nodes as well.**

A1) I found this question to fall into my personal skill set. If you sort the nodes by value, you can walk from smallest to largest, and again in reverse, to determine which nodes have to be removed in order to enforce any specific minimum or maximum value. These passes are both O(N) assuming you keep a hashtable or boolean-array lookup of which nodes you have already removed.

The problem is that any given combination of minimum and maximum that gets closest to D (which can be found in linear time by advancing either the minimum or the maximum, depending on whether the current range is at most D or not) might have an overlap between the two removed-node sets, so you would be double counting if you just added the two counts together.

To do the large in time, we need to resolve this double counting without slowing things down. The trick is that when constructing the nodes to be removed in the first two passes, you don’t just keep the counts, you keep the actual lists of removed nodes. Then, when walking the maximum upward and ‘undoing’ a batch of removals, you stay linear because you can simply ‘replay’ that list of nodes. As you replay adding and removing nodes, you check the boolean lists for the minimum and maximum sides: if the node you are replaying no longer exists in either list, you decrement the count; if it newly exists in only the minimum list, you increment the count; otherwise the count stays the same. Then, whenever the current range is at most D, you check whether you have a new minimum removal count. Note that while you start the loop having removed everything, you will at some point reach the value of the root, and since a single node trivially has a range of 0, you will never return removing all values as a result.
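As a concrete reference point, here is a brute-force sketch of the core idea in Python (names and input format are my own invention, and it skips the linear replay trick, so it costs O(V·N) over V distinct values rather than being linear after the sort): for each candidate minimum value, fix the window [lo, lo + D] and count the nodes that survive, where any out-of-range node drags its whole subtree out with it.

```python
def min_removals(parent, value, D):
    # parent[i] is the index of node i's parent, or -1 for the root.
    n = len(value)
    children = [[] for _ in range(n)]
    root = 0
    for i, p in enumerate(parent):
        if p == -1:
            root = i
        else:
            children[p].append(i)
    best = n
    for lo in sorted(set(value)):  # optimal window's minimum is some node value
        hi = lo + D
        # A node survives only if it and all its ancestors are in [lo, hi].
        # The DFS never descends past an out-of-range node, which models
        # "removing a node removes its whole subtree".
        survivors = 0
        stack = [root]
        while stack:
            u = stack.pop()
            if lo <= value[u] <= hi:
                survivors += 1
                stack.extend(children[u])
        best = min(best, n - survivors)
    return best
```

The window whose minimum is the root’s own value always keeps at least the root, so the function never answers “remove everything”, matching the observation above.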

**Q2) Given a sequence of numbers which are the running sums of windows of length K from an original sequence of length N, determine the minimal difference between the minimum and maximum of the original sequence, assuming the original sequence consisted entirely of integers.**

A2) I tried my hand at this for a while, but made a major mistake in thinking early on which left me stumped. The difference between consecutive sums gives you the difference between two values K apart. The difference between the next two sums does the same. Combining these two formulas together didn’t seem to let me derive anything, and I mistakenly assumed that combining formulas wasn’t the way forward.

The trick is to see that if you have a difference which tells you the relationship between positions 0 and K, there is another which tells you the relationship between positions K and 2K. Hence you know the relationship between 0 and 2K. Thus the problem can be reduced to K chains, one per residue class modulo K (if N >= 2K; otherwise the problem gets some slightly different corner cases I’ll discuss later). Each chain yields a pair consisting of the relative maximum and relative minimum, i.e. how high or low that chain goes assuming its first value was 0. Now the question becomes how to align these chains such that they have minimal overall range, their first values are integers, and those first values add up to the first given sum.
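The chain reduction can be sketched in a few lines of Python (a sketch under my own naming assumptions): each residue class modulo K becomes one chain of relative values, and we record its minimum and maximum relative to a starting value of 0.

```python
def chain_profiles(sums, K):
    # sums[i] is the sum of the length-K window starting at i,
    # so sums[i + 1] - sums[i] = a[i + K] - a[i].
    d = [sums[i + 1] - sums[i] for i in range(len(sums) - 1)]
    profiles = []
    for r in range(K):  # one chain per residue class mod K
        cur = lo = hi = 0  # relative value, assuming the chain starts at 0
        for i in range(r, len(d), K):
            cur += d[i]
            lo = min(lo, cur)
            hi = max(hi, cur)
        profiles.append((lo, hi))
    return profiles
```

For example, the original sequence 1, 3, 2, 5, 4, 6 with K = 2 has window sums 4, 5, 7, 9, 10, and both residue chains come out with a relative (min, max) of (0, 3).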

So extend these pairs to triples by adding 0, the imaginary starting value. Then align them all to have the same maximum, which could be 0 or the first maximum; it doesn’t matter. The sum of the starting values will now be some X. Calculate (X − first_sum)/K using integer division and subtract that from all values of each triple. Now you need to reduce the sum of starting values by (X − first_sum) mod K. Sort the triples by their minima in descending order. The aim is to avoid pushing any triple past the last one: if we can do that, the result is the range of the last triple, otherwise it is that range + 1. This can be done with a simple greedy strategy: shift each triple down to match the last minimum, until you have used up the mod value or you run out of triples. If you use it up, success; otherwise add one to your return value.

If N < 2K there is a slight difference. You get fewer than K chains to start with, so effectively you have some ‘free’ starting values. These can be represented as pairs with 0 maximum and 0 minimum. They make it a lot easier to avoid having to add 1, but they don’t eliminate it. To demonstrate this clearly, consider the ultimate degenerate case N = K. Here you only have (0, 0) pairs, but if first_sum % K != 0, you can’t make all the starting values equal, so the result will sometimes be 1.
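The N = K degenerate case reduces to a one-liner (my own phrasing of the observation above):

```python
def degenerate_range(first_sum, K):
    # With N == K there is a single window sum and every starting value is free.
    # If K divides first_sum, all K values can be equal, giving range 0;
    # otherwise at least two values must differ by 1, giving range 1.
    return 0 if first_sum % K == 0 else 1
```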

For this problem the small input really wasn’t much simpler than the large, and it showed: only 5% of those who solved the small failed to solve the large.

**Q3) Given a set of points on a line, each of which runs away from you at its own individual speed, and given that you have speed Y, what is the soonest you can visit all of the points?**

A3) First, a trivial simplification: you don’t care about points which you have already visited, so ‘running away from you’ can be treated as running away from the starting point. If all the points were on one side, the problem would be trivial: you just run in that direction until the last intercept time, determined by the speed differences and starting positions.
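The one-sided case is just a maximum over individual intercept times. A minimal sketch, assuming all points start to the right of you at positions p > 0 with speeds v < Y (names are my own):

```python
def one_side_time(points, Y):
    # points: list of (p, v); each point starts at p > 0 moving right at speed v < Y.
    # You start at 0 with speed Y. Running right, you catch a point when
    # Y * t = p + v * t, i.e. at t = p / (Y - v); the answer is the latest
    # of these intercept times, since earlier points are caught on the way.
    return max(p / (Y - v) for p, v in points)
```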

The small data set is 25 points. That gives 25! orderings if you ignore that you might visit something on the way to somewhere else. This is too large; even the basic dynamic programming solution is 2^25 * 25 * 25 (cheapest way to visit subset S ending at point T, where each state considers continuing on to any not-yet-visited point). If you happen to have a small supercomputer, this might be fast enough if you parallelize the test cases and optimize well, but otherwise it is going to be too slow. Ignoring whether you reach something on the way to something else is fine for correctness, because ignoring it can only produce a worse time than visiting things in the right order, but it is expensive.

Looking at a solution which solved just the small: you can take advantage of the fact that you visit points on the way to each other, so you will never turn around more than 25 times. At any given point in time you either go left or right and visit the first unvisited point you catch up to in that direction. This gives a brute force over 2^25 direction choices, with each step costing 25 to determine the earliest unvisited left/right catch-up. Overall O(N * 2^N), which is okay if you have a fast machine…

Looking at a solution for the large: it’s a DP on having caught the fastest i points going left, the fastest j points going right, and currently going to catch point k. First set every value in the DP to infinite time. Then initialize it with the time to reach each point having previously caught nothing (i.e. starting from 0). This seems to ignore visiting things sooner versus later, but that turns out not to be the case. From each state you consider catching the next fastest left or right point. Given the time of the current state and which point you are at, you know your position; if that position implies you have already passed the next point, you can improve its state to the current time, assuming you haven’t already found a better time to get there. Otherwise you calculate a new intercept time from your current position and time, and if that is an earlier arrival, you update again.

It all seems clean and simple. The question is why sorting them from fastest to slowest works even though you might catch a slower point on the way to a faster one. I think the proof of that is somewhat involved; maybe the official contest analysis will have details… It does seem somewhat intuitive though: fast points are hard to catch, so you should focus on them first.

**Q4) Given the values and frequencies of the sums of the elements of each subset in a power set, determine the original set that created the power set. If that is ambiguous, determine the ‘smallest’ original set.**

A4) The small and the large are two very different problems, as shown by the vastly different solve rates.

In the small problem there are up to 20 values in the original set, but they are all non-negative, which helps a lot. You can determine how many 0’s there are by taking log base 2 of the count of 0 sums. The rest of the frequencies can then be divided by that count of 0 sums. The smallest non-zero sum must then be in the original set, and its frequency tells you how many times it occurs. Anything less than double this value must also be in the original set. As you accumulate values, you can calculate how frequently the sums they generate should occur. Basically you get a loop: find the smallest value whose frequencies aren’t all accounted for, add it to the set, and for each existing sum and frequency, also add sum + value with the same frequency. The loop runs at most 20 times, and each iteration costs time proportional to the number of distinct sums, which the problem caps at 10k.
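That loop can be sketched as follows (my own naming; the input is assumed to be a dict mapping each subset sum to its frequency, with all original values non-negative):

```python
from collections import Counter
from math import log2

def reconstruct(sum_freq):
    # sum_freq: dict mapping subset sum -> frequency over all 2^n subsets.
    # With non-negative values, only subsets of the zero elements sum to 0,
    # so freq[0] = 2^(number of zeros).
    zeros = int(log2(sum_freq[0]))
    result = [0] * zeros
    # Factor the zeros out of every frequency.
    freq = {s: c >> zeros for s, c in sum_freq.items()}
    gen = Counter({0: 1})  # sums generated by the non-zero elements found so far
    while sum(gen.values()) < sum(freq.values()):
        # Smallest sum whose frequency isn't yet fully accounted for
        # must itself be an element of the original set.
        v = min(s for s in freq if freq[s] > gen[s])
        result.append(v)
        # Adding v pairs every existing sum with sum + v.
        for s, c in list(gen.items()):
            gen[s + v] += c
    return sorted(result)
```

For example, the set {0, 1, 1, 3} produces sum frequencies {0: 2, 1: 4, 2: 2, 3: 2, 4: 4, 5: 2} over its 16 subsets, and the loop recovers [0, 1, 1, 3].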

The large introduces negative numbers, which complicate things enormously. You can no longer tell how many 0’s there are just by looking at the frequency of 0, because other values might add together to give 0. Starting from the smallest sum doesn’t work: it’s the sum of all of the negative values. Starting from the sum closest to zero doesn’t work either, since it could be the sum of positive and negative values…

Looking at a large solution: first, you can easily tell how many numbers you are trying to find, just take log base 2 of the total of the frequencies. It turns out you can also work out how many 0’s there are: take log base 2 of the frequency of the smallest sum (not sure why I didn’t think of that…). With the 0’s accounted for, you can determine one value by taking the difference between the two smallest sums (also obvious in hindsight…). Then you update the frequencies for the removal of that value and repeat the process. You might think you are done here having generated N numbers this way… but each of those gaps is either a positive number being added to the set or a negative number being removed. So generate all the possible sums (using a set to avoid duplicates, since 2^60 is large but we know the distinct sums are far fewer), and while each removed number lies on a sum path for the smallest value, treat those as negatives; the rest are positive. Consider the numbers in reverse order of discovery: since the smallest absolute values are found first and the question asks for the ‘smallest’ set under ambiguity, large negative values are good for satisfying that criterion.
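The first few deductions are mechanical enough to sketch (a sketch under my own naming; the input is assumed to be a list of (sum, frequency) pairs sorted by sum):

```python
from math import log2

def first_deductions(sum_freq):
    # sum_freq: list of (sum, frequency) pairs sorted by sum, smallest first.
    # All 2^n subsets are counted, so n falls out of the total frequency.
    n = int(log2(sum(c for _, c in sum_freq)))
    # The smallest sum is "all the negative elements", achieved once per
    # subset of the zero elements, so its frequency is 2^(number of zeros).
    zeros = int(log2(sum_freq[0][1]))
    # The gap to the second-smallest sum is the smallest non-zero |value|.
    smallest_abs = sum_freq[1][0] - sum_freq[0][0]
    return n, zeros, smallest_abs
```

For instance, the set {-2, 1, 0} has sum frequencies [(-2, 2), (-1, 2), (0, 2), (1, 2)], from which this recovers n = 3, one zero, and 1 as the smallest absolute value.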

**Q5) Given a small section of an infinite sequence of values, determine whether it can be the sum of some number of streams of 0s and 1s, where each stream’s 0-to-1 transition happens every 2^K steps for some K between 1 and D (each stream can have a different transition frequency and there is no restriction on starting offset; there are also potentially streams which never change and are always 1). If it can, determine the minimum number of changing streams.**

A5) Getting late, I’ll have to look at this another time.