GCJ12 R2

So, this post is a bit delayed – I actually had to do this round from a hotel room on a 13-inch MacBook, with no mouse.  Far from ideal conditions, and maybe they contributed to my just missing out on making it through to round 3.  9th placed Australian, in 543rd place.  Tikitiki and Aussie got great starts this round; I think they must have seen some of these kinds of questions recently or something…

I started by reading all of the problems first, thinking it might be useful to have my mind mulling them over in the background while I worked on the first problem.  So my first problem submission time was not great.  The second problem had me stumped for a while, but I eventually got my head on straight and realised the solution was pretty easy after all.  That only left me 30 minutes to try to get one more small input.  I figured I needed to go for Q3 to be safe, as it seemed easier and was worth more points.  It probably was a good decision, but an amazingly stupid mistake while solving an equation for x left me with code that was crashing with 2 minutes left on the clock.  I realised the problem at 1 minute left, but wasn’t fast enough to fix it.  (There was also one more issue which I didn’t realise until the next day, but it is possible my solution would have passed the small input tests anyway.  Getting it right would have been a top 250 placing, which would have been great.)

In any case I have a nice new t-shirt, which was the only prize I was going to get… in this competition anyway…

Q1) There are ropes hanging down from points.  Each rope has a given length and position.  All hanging points are at the same level, which is the level you start at and the level of your destination, a specified location past the last rope.  You start off holding the first rope.  You can swing out and grab any rope in your path.  Having grabbed a rope, you can pull yourself back up level with the hanging points, to any place between the two hanging points (limited by the length of the rope you grabbed).  Given this, can you reach your destination?

A1) The key to this puzzle is realising that the best way to get to a rope is from the rope closest to the beginning which allows you to reach it.  So you create an array holding, for each rope, the maximum length of it you are able to use.  The first rope is easy, since you start off holding it.  It sweeps out a section ahead of it, and you can fill in the maximum lengths for the ropes in that section.  Then repeat for the next rope and the next – until you either find a rope which you can’t reach, or are able to reach the destination.  By starting each sweep from immediately after the last location filled in (all earlier entries are guaranteed to already be right, due to the above observation) and exiting as soon as you exceed the maximum reach, you ensure that the running time is O(N).  That matters if you have a slow programming language, as the large input has up to 10000 ropes and the simplest approach is O(N^2).
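A minimal Python sketch of that sweep, assuming sorted rope positions `d`, lengths `l` and destination `D` (the interface is my own, not from the problem statement):

```python
def can_reach(d, l, D):
    """d: rope positions in increasing order, l: rope lengths, D: destination.
    best[i] is the usable length of rope i when grabbed from the earliest
    rope that can reach it."""
    n = len(d)
    best = [-1] * n
    best[0] = l[0]          # we start holding the first rope in full
    nxt = 1                 # first rope not yet assigned a best length
    for i in range(n):
        if best[i] < 0:
            return False    # rope i was never reachable
        if d[i] + best[i] >= D:
            return True     # we can swing out past the destination
        while nxt < n and d[nxt] <= d[i] + best[i]:
            # grabbing rope nxt while swinging from rope i leaves us
            # holding d[nxt] - d[i] of it, capped by its actual length
            best[nxt] = min(l[nxt], d[nxt] - d[i])
            nxt += 1
    return False
```

The `nxt` pointer only ever moves forward, which is what makes the whole fill O(N).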

Q2) Given a list of circles of specified radii, determine how to place their centres in a provided rectangle such that they do not overlap.  The rectangle area is guaranteed to be at least 5 times the sum of the areas of the circles.

A2) With the large input being up to 1000 circles, and the circles having fairly large variability in radius, it seemed likely that some kind of brute force filling was going to be too slow.  In hindsight, I am not sure this is actually true.  O(N^2) with 1000 is trivial enough that you could probably just place them in a row across the top long edge, then fill in new rows, shifting each circle towards the top long edge as far as possible based on an exhaustive test against the already-placed circles.  I don’t see any reason why this would have trouble with space, given that there is about 4 times the space you need.  But I’ll include what I did, for completeness.

First sort the circles from largest to smallest (something which would probably be important in the approach above anyway).  Then start rows just as described before, but rather than trying to push circles upwards, try to fill each row’s height by stacking circles above one another within the row, if they are small enough.  This keeps the rows self-contained and stops you from having to consider any circles other than the last one placed in the current and previous ‘column’.  This gives O(N) for the filling stage, so performance is dominated by the sort.  Packing density in this approach is at worst ~pi/8, which is comfortably greater than the required 1/5.  To be safe I special-cased long thin rectangles by just lining all the circles up in a single row, but I think my implementation actually handled them just fine.
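For what it’s worth, here is a much simplified sketch of the row idea in Python: rows sized by their largest (first) circle, with every centre in a row sharing one y coordinate.  This guarantees non-overlap (adjacent row centres are at least R1 + R2 apart) but wastes the vertical stacking described above, so it is a weaker packer than either approach.  The interface and return convention are my own:

```python
def pack_rows(radii, W, H):
    """Sort descending, fill left-to-right rows whose height is twice the
    largest (first) circle in the row.  Returns (x, y) centres in input
    order, or None if it ran out of vertical room."""
    order = sorted(range(len(radii)), key=lambda i: -radii[i])
    placed = [None] * len(radii)
    y_off, i = 0.0, 0
    while i < len(order):
        R = radii[order[i]]              # largest remaining sets row height
        if y_off + 2 * R > H:
            return None                  # out of vertical space
        x, prev_r = 0.0, None
        while i < len(order):
            r = radii[order[i]]
            # centres on one line: spacing prev_r + r prevents overlap
            cx = r if prev_r is None else x + prev_r + r
            if cx + r > W:
                break                    # start a new row
            placed[order[i]] = (cx, y_off + R)
            x, prev_r = cx, r
            i += 1
        y_off += 2 * R
    return placed
```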

I got 2 penalties on this problem: one because of a mistake in my coding, but the other because the outputs had to be in the same order as the input, and I had forgotten that I had sorted the inputs…

Q3) There are a series of hills, each exactly 1 km apart.  From the peak of each hill, looking out towards the following hills (from ‘left’ to ‘right’), one of them subtends the greatest angle relative to straight down.  This hill ‘appears’ to be the highest, even though it may not actually be the highest.  Given, for each hill, which following hill appears the highest, determine a set of integer heights between 0 and 1 billion which satisfies these criteria.

A3) The large input had up to 2000 hills, which I determined was going to be problematic.  My approach was to work from right to left, determine the height range for each hill which causes it to satisfy the criteria, and choose any point in that range.  The trick is that the range is not guaranteed to contain integers, or to be positive.  The latter is easily fixed: just shift every mountain upwards.  The former is of concern.  I figured that in the worst case the range might involve fractions whose denominator was of the order of n!, where n is the number of hills.  For n=10 that was plausible, so I could just multiply the fractions up by their denominators to make them all integers.  With a maximum of 1 billion, that would be fine.  (And in fact the test cases I received only needed up to about 100k.)  For 2000 hills this is very unlikely to work.  Finding the smallest denominator within a range might have made it feasible, but I don’t know how to do that quickly.

My solution failed because solving for the range involves finding the minimum height such that the current point, its apparent-highest and a point behind that are co-linear, and the maximum height such that the current point, its apparent-highest and a point in between are co-linear.  I wrote out the equation for two lines having the same slope, which with a common point means co-linear.  I then solved for the height, but somehow forgot that the height appeared on both sides.  At runtime this showed up as a divide-by-0 error, because my height fraction was uninitialised on the right hand side and internally was equal to 0/0.  Kind of confusing to try to debug with less than 3 minutes left on the clock.

The other bug I had was that the minimum and maximum are technically inclusive and exclusive bounds respectively, because the conditions state that in the co-linear case you see the front-most hill.  But if you actually use that inclusive bound (and you don’t have to), you can later incorrectly decide that there is no valid option, because the middle hill of the co-linear triple can no longer appear highest to any future point, whereas if you hadn’t made them co-linear you could point at it directly just fine.  I think a few more minutes would have let me solve this one…

The large input…  I’ve had a few ideas, the best of which is as follows.  Every position which sees a given peak as the highest can be given equal height.  Starting from the right, choose the minimum positive slope which lets all of them see that peak and nothing beyond, and gives them integer values.  By construction it should be fairly(?) obvious that this solves the problem, as all the problematic cases are ones where it is impossible anyway…  The only question is whether the range of heights constructed exceeds 1 billion.  I think it can be shown that the worst case has a height difference of O(N^2).  Running time is theoretically O(N) if you first group the positions by which peak they see as highest.  You also need to first validate that the input doesn’t make impossible claims, which happens when the ranges claiming two different peaks as highest are interleaved, rather than adjacent or fully contained.

Q4) Given a rectangular grid of squares, some of which are ‘caves’, some walls and the rest empty, determine two things.  Firstly, which (and how many) empty locations can reach each cave using down/left/right movement (no up), and secondly, does there exist a sequence of moves which brings every element that can reach the cave into the cave.

A4) I am certainly glad that I chose to work on Q3, because even the small input of a 10×10 grid (which is 8×8 in practice, because the outside edges are always walls) has solutions which are ~32 steps in length, there are 3 possibilities at each step, and all of them can be possible.  So a pure brute force is something like O(n^2*3^(n^2/2)), which is insane.

I thought about this for quite a bit after the competition, and didn’t come up with anything I was confident of.  So I consulted the contest analysis.  The key points are that there is no need to back-track; you just need to ensure you don’t move in an invalid way.  (Which gives a randomised solution: randomly choose between valid moves until you win – but it could take a while…)  If you can move down, and there is something to move down, you might as well move down.  Moving left/right never takes you outside the valid states, so you can move everything all-left or all-right to consolidate each row into a single value.  Then it just becomes a matter of moving those rows to some point where you can move down.

So, to recap.  First move everything full left; if you can move down usefully, move down.  Otherwise step right and try again.  Keep trying until you hit a right wall on some row, then sweep left, remembering where you were, then go back to where you were and continue right, sweeping left again as needed, until you reach full right and do a final sweep left.  This covers all the possible scenarios.  If you manage to go down, go back to the beginning, repeating until you win or fail to make progress.  What is the running time of this?  I think it is O(N^5), which should be fine for the large input of N=30.


I got a successful challenge this round, don’t know how long it has been since that last happened… 347th before systests, I managed to move up to 292nd afterwards.  Quite a few failures in the second question, wish they had been in my room so I could have challenged them too… 😛

My TopCoder rating is now up to 1836, almost a personal best… (although still quite a ways short of where I want it…)

This is the second time I’ve scored as well as was needed for a t-shirt in previous years (292nd is a virtual 342nd after you allow for the 50 already through – although I got a higher score today than 3 of the 11 who bothered to compete in the parallel round for advancers).  If I make it 3 from 3 and don’t win a t-shirt I’ll be rather disappointed – and I didn’t even sneak a completely invalid solution past the system tests this time 😛

Q1) Given the numbers 1 to N placed in some order, where you only know where some of them are, what is the minimum number of spot picks needed to guarantee you reach a target value?  (The target value is large.)

A1) I think this problem is basically greedy.  First up, there are only 2 things worth doing: pick the spot with the highest known value, or rotate through all the unknown spots.  It never helps to pick a single unknown spot over and over, because it could hold the minimum of the unknowns.  So I can see 3 strategies.  1) Simply use the highest known value forever.  2) Simply rotate through the unknowns forever, presuming you get the smallest of the unknowns in your last (partial) rotation.  3) Rotate through the unknowns for as many full cycles as possible, then use the highest known value until finished – because the last cycle, being partial, might have a lower worst-case average than the highest known value.

So try each of the 3 strategies and choose the best, handling corner cases like no unknowns or all unknowns.
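Under my reading of the problem – we know the multiset of unknown values, just not which spot holds which – the three strategies can be sketched as follows (all names and the interface are my own assumptions):

```python
import math

def min_picks(known, unknown, target):
    """Minimum spot picks guaranteeing the running total reaches target.
    Assumes at least one of known/unknown is non-empty and values are
    positive.  Worst case for a partial rotation: the smallest unknown
    values come up first."""
    candidates = []
    h = max(known, default=0)
    m, su = len(unknown), sum(unknown)
    if h > 0:
        # 1) hammer the highest known value forever
        candidates.append(math.ceil(target / h))
    if m > 0:
        full, rem = divmod(target, su)   # each full cycle yields su
        picks = full * m
        if rem == 0:
            candidates.append(picks)
        else:
            # 2) finish with a partial cycle, smallest values first
            acc, k = 0, 0
            for v in sorted(unknown):
                acc += v
                k += 1
                if acc >= rem:
                    break
            candidates.append(picks + k)
            # 3) full cycles, then switch to the highest known value
            if h > 0:
                candidates.append(picks + math.ceil(rem / h))
    return min(candidates)
```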

Q2) Given items of known weight, and the following game, determine the outcome if both players play optimally.  Each turn a player moves a specific number of items from their pile to the opponent’s pile – these numbers are provided.  If their pile doesn’t contain that many items, all items are moved.  The first turn, however, is player 1 choosing which of the starting set of items (also provided) is placed into player 2’s pile.  Assume that the aim of the game is to maximise the weight of your opponent’s pile minus the weight of yours, and that in otherwise equal situations both players aim to maximise the total weight.  Items have a large weight range.

A2) I managed to solve this problem, but I didn’t get my implementation written in time.  This was also the problem I managed to get a successful challenge on.

First up is to realise that the game is entirely greedy and easily determined after player 1 chooses the initial items.  At any point in the game you will always give your heaviest items to the opponent; there can be no benefit in giving lighter ones.  So you can take a list of indexes into a virtual sorted array and simulate the game to see which indexes end up in each player’s pile.  Now player 1 has to maximise their score – and for equal scores, maximise the total weight – by assigning the actual items to these indexes in sorted order.

So start by sorting the provided items by weight – it is now a fairly standard DP on maximising a valuation of selecting an ordered subset.  There is a small complication that some indexes contribute negative value and others positive, but it doesn’t actually change the DP structure; you just build a lookup table from index to whether it is a positive or negative multiplier.

A small tip I got from one of the solutions in my room was to make the DP table member type a custom class holding the two pile weights, with a custom comparator that compares the difference, or the sum if equal.  This means you don’t have to reconstruct the piles from scratch after finishing the DP, like I was going to have to.

Q3) Given the numbers of the form a*x+b for x=1 to n, concatenated in their minimal-length binary representations, how many times does the resulting sequence change from 1 to 0 or vice-versa?  b and n are both huge; a, not quite so much.

A3)  This was only rated at 900 points, but I have no clue how that makes any sense…

The last digit is going to have a trivial sequence,  so tracking how the concatenation affects the result isn’t too crazy, but tracking the internal changes is still going to be problematic.  Definitely going to be some pattern, but it isn’t going to be trivial.

Looks like the correct approach is to process the multiples in sections of powers of 2, but from there it gets complicated and I definitely need some sleep.


So I didn’t have to do this round either, but it had some nice questions.

Q1) Given an acyclic directed graph determine whether there are any pairs of nodes with multiple paths between them.

A1) The obvious approach is to depth first search starting from each node; if any single DFS ever reaches the same node twice, there are multiple paths.  This is O(N^2) since you run the DFS over and over.  Given the constraints that will be fast enough, but it certainly isn’t optimal.  One improvement is to only run the DFS from nodes with no incoming edges.  I initially thought the optimal approach was to compare the edge count to the node count, on the basis that a multiple-path-free graph has at most (number of nodes – 1) edges – but my graph theory was rusty, and that criterion doesn’t hold for DAGs.  For example, the edges a→b, c→b, c→d, a→d form 4 edges on 4 nodes while no pair of nodes has two paths between them.
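The O(N^2) search is simple enough to sketch (adjacency-list input is my assumption).  Because the graph is acyclic, encountering a node via two different edges within one search means two distinct paths from that search’s start:

```python
def has_multiple_paths(n, adj):
    """Search from every node; within one search, reaching a node a
    second time means two distinct paths exist (valid since the graph
    is acyclic).  O(V * (V + E)) overall."""
    for start in range(n):
        seen = set()
        stack = [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in seen:
                    return True   # second route from start to v
                seen.add(v)
                stack.append(v)
    return False
```

The third test below is the 4-edge, 4-node example showing the edge-count test fails.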

Q2) Given a car which has a maximum positive acceleration (spontaneous reductions in velocity are fine) and a lead car which cannot be passed, if the lead car travels at constant velocity between known points at known times, what is the earliest the first car can get to the end of the road.  Answer the same scenario for multiple different maximum accelerations.

A2) This question is a ‘simple’ simulation, checking for quadratic equations crossing linear equations of motion to know when the lead car is limiting the first car.  The large input is 2000 known points of time and 250 different maximum accelerations, and there are up to 100 test cases, but even so I don’t see there being a performance problem using brute force.

Update: Actually it isn’t as simple as I make out above – the most obvious approach is in fact incorrect, which is possibly why this question was the least commonly answered, and not just that the question text seemed designed to maximise confusion.  If you try to move as quickly as possible at all times, in many scenarios you will end up slower than optimal.  The thing to optimise is not your forward position at each point in time, but your velocity at the moments when the lead car increases speed.  If you catch up to the lead car you are stuck doing its speed, whereas if you had waited at the start, you could be travelling faster at that point.  In fact the optimal strategy is to delay your start for as long as needed, maximising your use of acceleration so that you are always as fast as possible but never hit the lead car.

Checking the solutions, I see brute force simulation.  The simulation can be simplified to tracking the maximum difference between the lead car’s arrival time at each point and the time the other car would have gotten there if the lead car were not a problem, then adding this difference to the car’s minimum travel time.  That isn’t significantly faster than the full simulation – the only question is whether the full simulation could accumulate rounding error over 2000 segments.

One other sneaky point is that the places and times for the lead car may include points beyond the ‘end of the road’, they need to be truncated appropriately to get it right.
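A sketch of that simplified calculation, assuming the chase car starts at rest with unbounded top speed, so its unobstructed arrival time at distance x is sqrt(2x/a).  On each lead-car segment the lead’s arrival time is linear in position while the unobstructed time is concave, so their difference is maximised at segment endpoints – checking only the given points (truncated at the end of the road, as noted above) suffices.  The interface is my assumption:

```python
import math

def earliest_finish(a, lead, D):
    """a: max acceleration; lead: [(time, position), ...] with constant
    velocity between points; D: end of the road.  Optimal strategy is to
    delay the start just enough to never catch the lead car."""
    free = lambda x: math.sqrt(2.0 * x / a)   # unobstructed arrival time
    pts = []
    for i, (t, x) in enumerate(lead):
        if x < D:
            pts.append((t, x))
        else:
            # truncate the final segment at the end of the road
            if i > 0 and x > lead[i - 1][1]:
                t0, x0 = lead[i - 1]
                t = t0 + (t - t0) * (D - x0) / (x - x0)
            pts.append((t, D))
            break
    # required start delay: worst lateness of the lead car vs free running
    delay = max((t - free(x) for t, x in pts if x > 0), default=0.0)
    return free(D) + max(0.0, delay)
```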

Q3) There are two sequences of elements which are very long, but change element type at most 100 times, so there are very long runs of identical elements.  You start by taking the first element from both sequences, and proceed by either pairing two matching items and taking 1 new element from each sequence, or by throwing one item away and taking the next item from that sequence.  What is the maximum number of pairings you can perform?

A3) At first glance this appears to be a classic dynamic programming question, but it is not quite so trivial.  The obvious DP on an index into each sequence won’t perform usefully, since the sequences are up to 10^18 elements long, and a DP purely on the segments isn’t going to work either, since one segment from the first sequence might match against 4 segments of the second sequence in the optimal solution.

So what can we do?  I think what we want is something in between.  Do a cached recursion based on index, but use acceleration to avoid the 10^36 running time that brings.  So no array cache – this will need a dictionary cache.  To simplify the logic, the recursion parameters will be current segment and offset into the segment, since raw offsets would require searching the list to determine the segment details, which would likely hurt performance too much.

So what do I mean by ‘acceleration’?  We need strategies that deal with a lot of elements in bulk, and do so in a consistent manner which doesn’t lose the optimal solution.  First up, if the current segments match, we will always pair and never throw away, so we can accelerate forwards on both sequences to the end of whichever of the two segments has the least remaining.  If they don’t match there are 3 options: throw away every remaining element of the left segment, or of the right segment, or of both.  The last scenario can be ignored, as it will be found by the other two unless it results in mandatory matching of elements, in which case it wasn’t the optimal solution anyway.  The reasoning here is that throwing away part of a segment now is the same as keeping those elements and throwing away the leftovers after doing some matching – a different order of operations with the same outcome.
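A Python sketch of this accelerated recursion, assuming the sequences come run-length encoded as `(symbol, count)` pairs (my representation, not the contest’s input format):

```python
def max_pairs(A, B):
    """A, B: run-length encoded sequences, e.g. [('a', 2), ('b', 1)].
    State: (segment index, elements remaining in it) per sequence."""
    memo = {}

    def advance(S, i, r, used):
        # consume `used` elements from segment i (r remaining before)
        r -= used
        if r == 0:
            i += 1
            r = S[i][1] if i < len(S) else 0
        return i, r

    def go(i, ri, j, rj):
        if i >= len(A) or j >= len(B):
            return 0
        key = (i, ri, j, rj)
        if key not in memo:
            if A[i][0] == B[j][0]:
                k = min(ri, rj)          # pair as many as both runs allow
                memo[key] = k + go(*advance(A, i, ri, k),
                                   *advance(B, j, rj, k))
            else:
                memo[key] = max(         # discard the rest of either run
                    go(*advance(A, i, ri, ri), j, rj),
                    go(i, ri, *advance(B, j, rj, rj)))
        return memo[key]

    if not A or not B:
        return 0
    return go(0, A[0][1], 0, B[0][1])
```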

The running time of this algorithm isn’t immediately obvious, but I think it is no worse than O(N^4), where N is the number of type changes in the sequences, which should be good enough.  You can even drop the assumption that you always pair when available, although I think it is fine.  Update: Actually it might be O(N^6), which would be a problem for two reasons: it would be too slow, and it would use more RAM than my machine has available.  I did see a passing solution which used a very similar approach, so I suspect the theoretical worst case I am thinking of is practically impossible and the reachable state space is actually very sparse.  Specifically, including the greedy match assumption probably makes it very difficult to trigger.

Another approach I found looking at solutions is to use the explicit DP on the segments, but to handle matching segments in a special manner.  Rather than storing an index into the segment to handle partial segments, realise that in order to use one segment to match against multiple segments on the other side, you have to throw away the non-matching segments in between.  So whenever you have matching segments, you can try O(N^2) scenarios that skip both sides ahead, gaining whatever matches of that type lie in the skipped area.  By trialling all these scenarios you still get full coverage.  This solution is clearly O(N^4), likely a bit faster than my first solution, and it definitely uses less memory for the cache.


So, I didn’t have to do this round either, but looking at the scores it was very competitive – probably harder to advance in than R1A.  Looking at the problems, I think they were all theoretically in my grasp, except the Q3 large input: I might have missed the key insight, and probably wouldn’t have been game to try a solution based on random numbers.

Q1) Each item receives 2 scores – 1 fixed, and 1 a variable fraction of the total of the fixed scores.  What is the minimum fraction for each item which guarantees that, regardless of how the other items receive their fractional scores, that item will not have the distinctly lowest total?

A1)  I see a couple of options here.  But first some analysis of the question.

If we give a specific fraction to an item, what is the worst case competition for it?  Giving points to everything lower than it until they all equal the new value, while still having a non-zero fraction left over.  (That non-zero fraction can then be divided amongst all the equals to push them above.)  So, if we have the items sorted by their fixed values, we can go from smallest to largest, skipping the item under consideration, subtracting from the available percentage as needed, until we reach an item which is already above the target sum.  This check is O(N).

The above suggests a floating point binary search to find each threshold, giving about 20*O(N^2) in total to get sufficient accuracy for all items.  This would be fast enough given the maximum constraint of N=200.
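A sketch of that binary search in Python; the `need >= remaining` test encodes the ‘non-zero fraction left over’ condition above (the interface is my assumption):

```python
def min_fractions(scores):
    """For each item, binary search the minimal fraction f such that with
    value v = s_i + f * total, opponents cannot all be pushed strictly
    above v using the remaining (1 - f) of the total."""
    total = sum(scores)
    result = []
    for i, si in enumerate(scores):
        lo, hi = 0.0, 1.0
        for _ in range(60):               # bisection to ~2^-60 precision
            f = (lo + hi) / 2
            v = si + f * total
            # points needed to raise every lower item up to v
            need = sum(max(0.0, v - sj)
                       for j, sj in enumerate(scores) if j != i)
            if need >= (1 - f) * total:
                hi = f                     # safe at this fraction
            else:
                lo = f
        result.append(hi)
    return result
```

Monotonicity holds because raising f raises v (and hence `need`) while shrinking the adversary’s budget, so once safe, always safe.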

A more direct approach solves it numerically.  Between each step from one distinct fixed score to the next, the problem is a linear equation.  If the answer to the linear equation lies outside the step’s range, you move to the next step.  Again this is trivially O(N) per item, making O(N^2) overall.  In fact we can binary search over these segments, giving O(log N) per item for a total of O(N log N) for that component – but only if we can derive the segments for one scenario from the next in O(log N) time or better, as simply constructing the segments is O(N).  If you construct the segments once for all items, then the segments for an individual scenario are either the same with the counts above a given point reduced by 1, or have two adjacent segments joined.  The latter scenario can actually be ignored and treated like the first: the two linear equations will be the same, and testing both is no significant loss.  It is easy to embed the former adjustment into the linear equation, so we can indeed reuse the same segment list, giving us the O(N log N) running time.

Q2) Given a rectangular grid of cells, where each cell has a floor height, a ceiling height and a water level, movement from one cell to the next requires at least a 50cm gap through the shared edge – between the current ceiling and the destination floor or water level, between the current floor or water level and the destination ceiling, and between the destination ceiling and floor.  You want to get to the south-east corner of the grid as fast as possible once the water starts dropping: from some future point onwards, the water level drops 10cm every second.  It takes 1 second to move from cell to cell if there is 20cm of water above the ground in the current cell, otherwise it takes 10 seconds.  Starting from wherever is currently reachable from the north-west corner of the grid, what is the shortest path?  (No diagonal movement.)

A2) Ahh, an interesting problem – especially since the large input makes the grid up to 100×100 in size.  That suggests a shortest path algorithm, but the edge cost changes as a function of arrival time.  Let’s do some more thinking.

Firstly, it should be apparent that we never go back the way we came once we start.  More specifically, because travel times only ever get worse the later we arrive, there is never any benefit to being somewhere later than the earliest possible arrival.  This suggests shortest path will be fine, with some small modifications.  Finally, there are no negative travel times, so we don’t have to worry about that complication in the shortest path algorithm.

We can reach anywhere reachable from the start at time 0, so simply depth first search to determine what is reachable, and initialise a grid with 0 at those locations and max-value everywhere else.  Add all reachable locations to an updatable priority queue.  Now we perform an earliest-first search: at each location we determine how long it takes to reach each of its 4 neighbours relative to the earliest arrival time for that cell, which will be either wait + 1 second or wait + 10 seconds, where wait is how long until the water drops low enough to make the edge passable.  Wait could be 0 seconds if it is already passable, or infinite if it never will be.  If the new arrival time for the destination is lower than its current value, update the priority queue.
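This earliest-first search is just Dijkstra with time-dependent edge costs.  Without an updatable priority queue like TMD.Algo’s LookupHeap, a plain binary heap with lazy deletion does the same job; a generic sketch where the grid-specific wait rules hide behind a `traverse` callback (the interface is my own):

```python
import heapq

def earliest_arrival(start_cells, neighbours, traverse):
    """start_cells: cells reachable at time 0.  neighbours(u): adjacent
    cells.  traverse(u, v, t): arrival time at v when leaving u no
    earlier than t (wait included), or None if v is never passable.
    Returns the earliest arrival time for every reached cell."""
    dist = {c: 0 for c in start_cells}
    pq = [(0, c) for c in dist]
    heapq.heapify(pq)
    while pq:
        t, u = heapq.heappop(pq)
        if t > dist[u]:
            continue                      # stale queue entry, skip it
        for v in neighbours(u):
            nt = traverse(u, v, t)
            if nt is not None and nt < dist.get(v, float('inf')):
                dist[v] = nt
                heapq.heappush(pq, (nt, v))
    return dist
```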

The code to calculate the wait isn’t exactly complicated, but it does have a number of criteria and would be a likely place to make a mistake.  It is also where you have to handle not leaving the edges of the grid.

This gives us the all-points earliest arrival times, including for the south-east cell, which now contains the answer.  So what is the running time?  The initial search for starting points is O(N*M), or O(V) (V being the total number of cells, the ‘vertexes’ of the graph).  From there we run an earliest-first search using a priority queue.  Let’s assume we use the LookupHeap from TMD.Algo, which has O(log N) add/remove of any element.  The number of priority queue updates/adds/removes is at most O(E), where E is the number of edges, bounded above by 4V – effectively O(V) priority queue manipulations, for a total of O(V log V).  Despite V being 10k, this is quite reasonable.  The other component of the running time, determining the wait, is easily O(1) per edge, so it is not significant.

Q3) Given a set of distinct positive integers, determine if there are 2 distinct subsets which have the same sum.

A3) A pretty simple problem statement, but the large input has 500 integers, each up to 10^12 in size.  The small input is more reasonable: 20 integers of up to 10^5 each.  Only up to 10 test cases in both scenarios, so there is plenty of CPU time available per test case.

Small input approach.  The most(?) obvious brute force is going to be too slow: dividing into each possible pair of groups, then intersecting every possible sum of those groups, is something like 2^40.  However an almost as obvious approach is to break the items into 3 groups; if the sum of all elements in group 1 equals the sum of all elements in group 2, we’ve found a solution (group 3 is ignored).  This has a running time of O(3^N), which is quite high, but given only 10 test cases and 4 minutes to submit, it is plausible if optimised.  The important implementation detail is to not calculate the 2 sums from scratch for every scenario, but to mutate them as the scenarios change, adding and subtracting.  This ensures the implementation isn’t O(N*3^N), which would almost certainly be too slow.

Large input approach… hrmm.  Adding caching to the small input implementation gives us O(N * number of distinct sum pairs), which doesn’t immediately buy us anything, but we know the range of the sums, which gives an easy upper bound of O(N*(kN)^2) where k is the maximum value.  For the large input this is a considerable improvement over O(3^N), but still ‘longer than the life of the universe’ territory…  It does tell us, though, that as inputs get large, the probability of there being a common sum approaches 1 – maybe it even is 1, by the pigeonhole principle.  Maybe this will be useful.

So I looked up an answer to get a view into this question, and found a very important detail which I had overlooked: if you have 2 distinct sets with the same sum, you can remove the elements they have in common to get 2 new sets which are disjoint, non-empty and still have the same sum!  So we only have to find 2 distinct sets with the same sum.  This insight allows the small input to be done in O(2^N), using O(2^N) space as well as time.  It also lets us prove that the probability of a common sum is 1: 2^500 is greater than 500*10^12 (the number of possible sums), so there are at least 2 different subsets with the same sum, and hence 2 disjoint non-empty subsets with the same sum after removing any common elements.

Here is where the near-certainty comes in: if we make random guesses and store the sums in memory until we find a collision, we get a very high likelihood of passing the tests just by making enough guesses.  The birthday paradox means that despite the potential 10^14 different sums, the probability of a collision is quite high with only 10 million repetitions, and extremely high at 100 million.  (And even higher considering the difficulty of constructing a set of numbers which spreads its sums evenly across the entire range.)
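A sketch of the randomised search using bitmask subsets (the fixed seed makes runs repeatable; names and interface are my own):

```python
import random

def equal_sum_subsets(values, tries=100000, seed=1):
    """Sample random subsets and remember their sums; a repeated sum gives
    two distinct subsets, and stripping the shared elements leaves two
    disjoint non-empty subsets with equal sums (values must be positive)."""
    n = len(values)
    rng = random.Random(seed)
    seen = {}
    for _ in range(tries):
        mask = rng.getrandbits(n)
        s = sum(values[i] for i in range(n) if mask >> i & 1)
        other = seen.setdefault(s, mask)
        if other != mask:
            common = other & mask        # drop shared elements from both
            a = [values[i] for i in range(n) if (other & ~common) >> i & 1]
            b = [values[i] for i in range(n) if (mask & ~common) >> i & 1]
            return a, b
    return None
```

Neither remainder can be empty: if one were, one subset would contain the other, and positive values would force the two sums to differ.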

Another approach I saw in the solutions, with less randomness in it: sort the input, break it into sections of length 20, determine every subset sum within each section, and take the 2 closest sums in each.  This leaves 25 pairs of distinct subsets whose sum differences can be used as individual values from which to calculate every subset sum; if we find a combination summing to 0, we can reconstruct the subsets.  The idea appears to be that by taking the smallest difference in each section we end up working with much smaller numbers, so the probability of a 0 sum in the second round of summations increases significantly.  Again this really seems to be a probabilistic approach, but apparently good enough that you don’t need to randomise the data.