So… I am out.

840th, so definitely a bad round. (Did I ever mention I hate competitions scheduled for 2am…)

Questions

1) This was a pretty easy question, but… I took almost an hour to do it because I was being brain dead.

To start with I misread the question and thought you could swap any 2 rows, so I spent a while trying to solve that. Which I think may be a harder problem…

Then I tried some really stupid greedy heuristics (at least I realised greedy was correct this time). My first 3 implementations looped infinitely because I was swapping items which were equal!

Anyway, this question finally resolves down to: the first entry must be satisfied, so find the closest item which can be moved in to satisfy it, and move it there. Repeat for each subsequent entry. So incredibly obvious, but I sucked.
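Sketched in code, that greedy looks something like the following. (Hedged: the actual "satisfied" condition depends on the problem statement, so it's passed in as a predicate here; moving an item in is done via adjacent swaps, which is what makes "closest" the right greedy choice.)

```python
def min_adjacent_swaps(rows, satisfies):
    """Greedy: for each position i (left to right), find the nearest later
    row that satisfies position i and bubble it in with adjacent swaps.
    Assumes a satisfying row always exists for every position."""
    rows = list(rows)
    swaps = 0
    for i in range(len(rows)):
        # Find the closest j >= i whose row can satisfy position i.
        j = i
        while not satisfies(rows[j], i):
            j += 1
        # Bubble it into place; each step is one adjacent swap.
        while j > i:
            rows[j], rows[j - 1] = rows[j - 1], rows[j]
            j -= 1
            swaps += 1
    return swaps
```

For instance, with rows carrying a single value and position i requiring value <= i, `min_adjacent_swaps([2, 0, 1], lambda v, i: v <= i)` needs 2 adjacent swaps.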

I skipped ahead to question 4 at this point.

4) Circle covering problem, but the small data set is constrained to at most 3 circles. Trivial brute force!! Knocked that over to get myself into the top 400 briefly, since I had taken so long to do Q1. However, it was clear that I needed to do another problem to get to round 3. (Or do the hard data set for Q4… but that was beyond me…)
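To make "trivial brute force" concrete: with a handful of plants and at most 3 circles, you can simply try every assignment of plants to circles and take the best. This is a generic sketch of that idea, not the actual contest solution – the cost model (minimise the largest enclosing-circle radius) is an assumption on my part, and the smallest-enclosing-circle helper uses the classic fact that the optimum is determined by 2 or 3 boundary points:

```python
from itertools import product
from math import dist

def enclosing_radius(pts):
    """Smallest circle covering pts, by checking every pair-diameter
    circle and every triple's circumcircle (O(n^3), fine for tiny n)."""
    if len(pts) <= 1:
        return 0.0
    best = float('inf')
    n = len(pts)
    def covers(cx, cy, r):
        return all(dist((cx, cy), p) <= r + 1e-9 for p in pts)
    # Circles with a pair of points as diameter.
    for i in range(n):
        for j in range(i + 1, n):
            cx = (pts[i][0] + pts[j][0]) / 2
            cy = (pts[i][1] + pts[j][1]) / 2
            r = dist(pts[i], pts[j]) / 2
            if covers(cx, cy, r):
                best = min(best, r)
    # Circumcircles of triples (skip degenerate/collinear ones).
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                (ax, ay), (bx, by), (px, py) = pts[i], pts[j], pts[k]
                d = 2 * (ax * (by - py) + bx * (py - ay) + px * (ay - by))
                if abs(d) < 1e-12:
                    continue
                ux = ((ax*ax + ay*ay) * (by - py) + (bx*bx + by*by) * (py - ay)
                      + (px*px + py*py) * (ay - by)) / d
                uy = ((ax*ax + ay*ay) * (px - bx) + (bx*bx + by*by) * (ax - px)
                      + (px*px + py*py) * (bx - ax)) / d
                r = dist((ux, uy), pts[i])
                if covers(ux, uy, r):
                    best = min(best, r)
    return best

def min_max_radius(plants, k):
    """Brute force: try every assignment of plants to k circles (k**n
    cases) and minimise the largest group radius."""
    best = float('inf')
    for assign in product(range(k), repeat=len(plants)):
        groups = [[p for p, g in zip(plants, assign) if g == i] for i in range(k)]
        r = max((enclosing_radius(g) for g in groups if g), default=0.0)
        best = min(best, r)
    return best
```

With k <= 3 and a small plant count this is only a few thousand cases, which is why the small data set falls over so easily.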

And here is where I lost the competition for sure.

3) I noticed a lot more people had solved the easy for this question than for Q2 at this point, so I decided to go for it. I wrote what I thought was going to be a solution for the small data set, but it turned out to be wrong – fairly obviously – so with 30 minutes remaining I converted it into a cached recursion to be sure it was correct. However, in my brain-dead state I removed the greedy heuristic entirely, rather than just considering all equally-greedy scenarios. The result was a program that ran in 10 minutes instead of a few seconds, and so I failed. (Looking at the leaderboard, getting the small out for Q3 was not sufficient anyway; you had to solve it before the 90-minute mark – so I wasn't even close really.)

The real disappointment was that I spent the first 10 minutes looking at this problem thinking, I'm sure I've seen this before, there is some cool way to do it. What I didn't notice was that this was a disguised variant of a problem from last year's round 3! Specifically, one which I thought was impossible, and whose solution I had spent quite a bit of time learning because it was cool. The number of sets is related to the maximum bipartite matching of the non-constrained edges. I even thought maximum matching might be relevant, but I failed to recognise how similar it really was to last year's problem. Admittedly, on further inspection it is a pretty good disguise, since the bipartite graph choice is much more obvious in last year's problem, so I don't feel too bad about missing it.
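For reference, the standard tool behind that trick is maximum bipartite matching itself. A minimal sketch of Kuhn's augmenting-path algorithm (the adjacency list and vertex counts here are illustrative, not from the actual problem):

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm. adj[u] lists the right-side
    vertices adjacent to left vertex u; returns the maximum matching size."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-matched elsewhere.
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    matching = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            matching += 1
    return matching
```

For contest-sized graphs this O(V·E) version is usually plenty; Hopcroft–Karp is the faster alternative if it isn't.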

Above I mentioned that doing Q3 was where I lost the competition, because doing Q2 would have gotten me through to the next round.

2) I looked at this problem when I first failed on Q3, but concluded that I probably didn’t have enough time to risk it, and I had just realised that a cached recursion for Q3 would probably work.

Ultimately I should have gone to this question immediately after Q4, because it really doesn't look that hard; it is just one of those problems where coding errors consume time. I liked it, and given its forward-only nature, the fact that the environment was being modified as you progressed shouldn't be the obstacle it first seemed.

The problem was a nice easy shortest path; the only difficulty was constructing the graph. Graph nodes were not simply locations in the maze, but also depended on what had been dug already. However, at each node the only digging which matters is any done on the current row and the row immediately beneath you. Furthermore, the pattern of digging is irrelevant: it never helps to dig sparsely, so the dug area can be reduced to its start and end points. This is sufficient to solve the small data set; to solve the large, the trick is to realise that the digging of the row beneath you can be dropped from the node definition entirely, if you always transition to a new row between nodes. (That is, reduce the number of nodes by combining multiple digs into a single edge.)
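The shortest-path part itself is routine; what the state reduction above buys you is a smaller state space to feed into Dijkstra. A minimal sketch over arbitrary hashable states – the `neighbours` and `is_goal` callbacks are placeholders for the actual maze rules, which I haven't reproduced here:

```python
import heapq

def dijkstra(start, neighbours, is_goal):
    """Shortest path over arbitrary hashable states. neighbours(state)
    yields (cost, next_state) pairs, so a state can bundle a maze position
    together with extra bookkeeping (e.g. the dug interval on the current
    row). Returns the minimal cost to any goal state, or None."""
    best = {start: 0}
    heap = [(0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if d > best.get(s, float('inf')):
            continue  # stale heap entry
        if is_goal(s):
            return d
        for cost, t in neighbours(s):
            nd = d + cost
            if nd < best.get(t, float('inf')):
                best[t] = nd
                heapq.heappush(heap, (nd, t))
    return None
```

Folding "dig here, then drop to the next row" into a single weighted edge is exactly the kind of thing the `neighbours` callback absorbs without the core loop changing.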

The Q4 large data set is tricky. With 40 plants, there are 2^40 ways to divide them into 2 groups, each of which would need a minimum radius calculation. My first guess would be to binary search the sprinkler radius with a 'can it be done' check. Looking at a solution, this appears to be the correct approach, but the mechanism of the feasibility check defies my sleep-deprived comprehension. It appears that the candidate sprinkler locations can be constrained to being either directly above a plant, or at the extremities of the sort-of-midpoint line along which a sprinkler covers a given pair of plants. I can see this as being plausible, but I certainly don't understand why yet.
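The binary-search shell at least is straightforward; all the difficulty hides inside the feasibility test, which is just a placeholder callback in this sketch:

```python
def smallest_radius(can_cover, lo=0.0, hi=1e9, iters=60):
    """Binary search on the answer: can_cover(r) is the monotone
    'can every plant be watered with sprinklers of radius r' test
    (the hard part, not implemented here). 60 halvings shrinks the
    interval by a factor of 2**60, far below any printed precision."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if can_cover(mid):
            hi = mid  # feasible: the answer is mid or smaller
        else:
            lo = mid  # infeasible: the answer is larger than mid
    return hi
```

The candidate-location observation above is what would make `can_cover` polynomial: instead of searching continuous sprinkler positions, you only test the finitely many candidates it generates.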