TypeForwardedTo

When I first heard about this attribute I was excited; the refactoring potential seemed great!

Then I read a bit more and discovered that while you can forward types, they have to keep their full type name, including namespace; only the assembly can change.  This was disappointing but acceptable.

Today I discovered that you can’t move types to assemblies you don’t reference.  This is because the TypeForwardedTo attribute takes a Type, not a string containing a type name, and there is no way to name a type which resolves at compile time without referencing the containing assembly.  This surprised me, because the metadata for a custom attribute which has a parameter of type Type stores the fully qualified type name, not a reference-qualified type name, so you don’t need a reference in order to create the required metadata.  In fact, based on the similarity of their descriptions, I suspect the runtime basically calls the exact equivalent of Type.GetType(string) during assembly load, passing the deserialised metadata directly.  The reason I know that it doesn’t use a reference-qualified type name is that I have many times in the past disassembled an assembly, changed the versions of its references, and reassembled it.  This usually works quite well, but it doesn’t fix custom attributes, which can occasionally cause problems.

For now I think a workaround of referencing a pre-compiled version of the assembly using an alias (to avoid accidental use of that assembly by developers outside of the declaration of the TypeForwardedTo attributes) is going to have to suffice.  But I just wanted to post that I think C# syntax is lacking in the declaration of type-based attributes.  While I suspect it would be an unusual case, I think that Type.GetType(string) should be considered a compile-time constant for attribute parameters, or some other nicer syntax which does the same thing should exist.
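
A minimal sketch of what I mean by the workaround; the assembly name, alias and type name here are placeholders, not anything real:

// Compile the old assembly with something like:
//   csc /target:library /reference:NewHome=NewAssembly.dll Forwards.cs
extern alias NewHome;

using System.Runtime.CompilerServices;

// Forward Some.Namespace.Widget out of this (old) assembly into NewAssembly.
// The namespace-qualified name must stay the same; only the assembly changes.
[assembly: TypeForwardedTo(typeof(NewHome::Some.Namespace.Widget))]

The alias keeps NewAssembly’s types from being visible to ordinary code in the project, so nobody uses them by accident.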

Silverlight 4

So I’m a couple of weeks behind the times, but the final Silverlight 4 tools were released, so it was time to give it a spin.

The result is http://www.themissingdocs.net/LoopDeLoop.html

It’s a cut-down version of my WinForms LoopDeLoop, but it also has a couple of features the old one didn’t – like click-drag to apply changes to multiple locations, and proper fix-position support (ctrl-F/ctrl-R/ctrl-U).

Silverlight 4 is noticeably better than Silverlight 2, but the errors in the designer when your custom controls fail in design mode are a bit cryptic.  It also took me a while to work out that my ellipses were not showing because I hadn’t measured them during the measure phase.

I haven’t made any serious changes to my actual game engine, other than having to replace SortedDictionary with something else.  (I reimplemented my approximatepointset using a dictionary and an equality comparer which used fmod to round values – not quite as correct, but it seems to still work well enough.)
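
For what it’s worth, a rough sketch of the kind of comparer I mean; the class name, the point type and the bucketing are made up here, not my actual game code:

using System;
using System.Collections.Generic;

struct Point2 { public double X, Y; }

// Buckets coordinates onto a grid of side 'epsilon' so that nearby points compare
// equal.  Points either side of a bucket boundary still compare different, which
// is the "not quite as correct" part, but in practice it is good enough.
class ApproximatePointComparer : IEqualityComparer<Point2>
{
    private readonly double epsilon;
    public ApproximatePointComparer(double epsilon) { this.epsilon = epsilon; }

    private long Bucket(double v) { return (long)Math.Floor(v / epsilon); }

    public bool Equals(Point2 a, Point2 b)
    {
        return Bucket(a.X) == Bucket(b.X) && Bucket(a.Y) == Bucket(b.Y);
    }

    public int GetHashCode(Point2 p)
    {
        return Bucket(p.X).GetHashCode() * 31 ^ Bucket(p.Y).GetHashCode();
    }
}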

It was interesting to see that attempting to manually capture a stack trace (which happens just fine when you throw an exception) throws a security exception.  Debug.WriteLine also caused me some grief; I had to remove those calls.
It might be my imagination, but the Silverlight runtime also seems noticeably slower than the normal .Net runtime.  And the fact that ctrl-F5 on a page containing a Silverlight object doesn’t cause cache invalidation of the Silverlight object itself is quite annoying.

TCO10 QR3

So I blinked and I missed the fact that qualifying round 3 had already happened.  I thought it was this coming weekend.  Having got through in QR1, forgetting about it is no drama, but it does mean my analysis of the problems is a bit late.

Q1) Given the top row and left column of a grid of numbers, where each number is supposed to satisfy the property of being equal to the sum of the 3 numbers to the bottom/right/bottom-right of it, determine the value of the bottom right number.  If the result cannot be uniquely determined return 0. Maximum grid size 10×10.

A1) First, get rid of the trivial cases.  If the row length is 1, return the last column entry.  If the column length is 1, return the last row entry.  Otherwise we actually have to fill in the grid.  This is pretty trivial: since each cell is the sum of the 3 cells to its bottom/right/bottom-right, if you know the cell, the one below it and the one to its right, you can subtract those two to get the bottom-right cell.  Given you have the top and left sides, you can fill 1 cell immediately.  That cell lets you fill 2 more, and those, 3.  So you could diagonally fill the grid.  Or you could just fill row by row, or column by column.  In any case the ‘cannot be uniquely determined’ scenario doesn’t exist, so it can be ignored.
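
A quick sketch of the row-by-row fill, assuming top is the given top row and left the given left column (they share the corner entry):

// Fill the grid using grid[r,c] = grid[r+1,c] + grid[r,c+1] + grid[r+1,c+1],
// rearranged as grid[r+1,c+1] = grid[r,c] - grid[r+1,c] - grid[r,c+1].
static long BottomRight(long[] top, long[] left)
{
    int rows = left.Length, cols = top.Length;
    if (rows == 1) return top[cols - 1];   // trivial: single row
    if (cols == 1) return left[rows - 1];  // trivial: single column
    long[,] grid = new long[rows, cols];
    for (int c = 0; c < cols; c++) grid[0, c] = top[c];
    for (int r = 0; r < rows; r++) grid[r, 0] = left[r];
    for (int r = 0; r + 1 < rows; r++)
        for (int c = 0; c + 1 < cols; c++)
            grid[r + 1, c + 1] = grid[r, c] - grid[r + 1, c] - grid[r, c + 1];
    return grid[rows - 1, cols - 1];
}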

Q2) Given 6 numbers representing the number of semitones to offset each guitar string (or -1 if that guitar string is not played), determine whether a major or minor chord is being played and, if so, which one.

A2) When I first started reading this question I had dreaded flashbacks to an old TopCoder problem which was very similar.  However this one is relatively easy.  Modulo arithmetic comes into play.  Simply add the offsets (where not -1) to the numerical value of each guitar string’s open note (as given in the problem, for those who don’t play guitar), modulo 12.  Then reduce your (up to 6) notes to the unique values.  If you have more or fewer than 3 values you can return not-a-chord immediately.  Otherwise, sort the 3 values.  For each of the 3 values, check whether the next one (wrapping round if you start with the greatest value) is equal to the original + 4 or + 3, modulo 12.  Then check if the remaining value is equal to the original + 7 modulo 12.  If both conditions are matched, return Major ‘letter’ by mapping the original value to the letters if the first condition was +4, or Minor ‘letter’ if the first condition was +3.  Otherwise return not-a-chord.
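
Something like the following; the open-string values assume standard tuning (low E, A, D, G, B, high E, with C = 0), and the output strings are placeholders for whatever format the problem actually wants:

using System.Collections.Generic;

static readonly int[] OpenStrings = { 4, 9, 2, 7, 11, 4 };
static readonly string[] Names =
    { "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B" };

// Collapse the played strings to unique pitch classes, then try each note as the
// root of a major (root, +4, +7) or minor (root, +3, +7) triad.
static string Identify(int[] frets)
{
    var notes = new HashSet<int>();
    for (int i = 0; i < 6; i++)
        if (frets[i] >= 0) notes.Add((OpenStrings[i] + frets[i]) % 12);
    if (notes.Count != 3) return "Not a chord";
    foreach (int root in notes)
    {
        bool fifth = notes.Contains((root + 7) % 12);
        if (fifth && notes.Contains((root + 4) % 12)) return Names[root] + " Major";
        if (fifth && notes.Contains((root + 3) % 12)) return Names[root] + " Minor";
    }
    return "Not a chord";
}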

Q3) Given a pane of glass with width and height (both at most 1000), a starting position, and up to 2500 LRUD unit movements which cut the glass, starting from the starting position, how many pieces has the pane of glass been cut into?

A3) This is a simulation problem combined with a disjoint set enumeration.  The thing I find most annoying with problems like this is the data structure to store the results of the actions.  Being so used to a grid, storing details about edges is just annoying.  I usually end up storing the 4 edges of each cell and dealing with the fact that most edges belong to 2 cells by ensuring I always update both.  A grid of numbers can be used, with numbers between 0-15 indicating which combination of surrounding edges have been cut; you can store these in bytes, making the grid require only 1 MB of RAM.  Once the simulation is complete, calculating the pieces comes into play.  I would do this with a disjoint set tracker (copying my code from TMD.Algo – since this is TopCoder… or maybe just writing it from scratch using arrays instead of dictionaries), creating a node for every square, then for each square unioning it with every neighbour which is on the other side of a non-cut edge.  Then for each square get the representative, and use a dictionary to accumulate the count of unique representatives.  A disjoint set tracker is practically O(1) for union and get operations, so even though there are a million cell locations the running time will not be a problem.  Memory usage will be a few megabytes, but within the 64 MB limit.  This is really just showing my bias towards the disjoint set tracker, which I think is awesome.  For a similar approach which uses a tad less memory, you can just do flood-fill counting using breadth or depth first searches which can’t pass the cut walls, which is also O(1) per grid cell.
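
Since I keep leaning on it, here is roughly what the from-scratch, array-based version of a disjoint set tracker looks like (a sketch, not the TMD.Algo code):

// Union-find with path halving and union by rank.  Create one node per cell,
// union across every uncut edge, then count distinct representatives to get
// the number of pieces.
class DisjointSet
{
    private readonly int[] parent;
    private readonly int[] rank;

    public DisjointSet(int size)
    {
        parent = new int[size];
        rank = new int[size];
        for (int i = 0; i < size; i++) parent[i] = i;
    }

    public int Find(int x)
    {
        while (parent[x] != x)
        {
            parent[x] = parent[parent[x]];  // path halving
            x = parent[x];
        }
        return x;
    }

    public void Union(int a, int b)
    {
        a = Find(a); b = Find(b);
        if (a == b) return;
        if (rank[a] < rank[b]) { int t = a; a = b; b = t; }
        parent[b] = a;
        if (rank[a] == rank[b]) rank[a]++;
    }
}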

Alternatively one could construct a graph out of the path followed and the edges of the puzzle.  Due to the limit of 2500 moves and 4000 edge positions, the memory usage will be much lower.  You can then walk the graph starting at each edge using a turn-right-at-intersection approach, marking each location visited (and handling dead ends by turning around and continuing to walk).  Once you find a loop, where you enter an edge from a direction other than the one you left it by, you increment a counter; whenever a loop is found you go back and start at the next edge not already visited.  Once done, the counter is the number of pieces.  Each edge is visited at most three times, so this is O(N) in edges (moves + perimeter), which is far more efficient.  I would however consider it more complicated to implement, so I doubt I would have bothered.

All in all this looks like it was a pretty easy set.  Although being a qualifying round that is probably expected.

Dynamic Programming – the random guide

Since I got a comment asking about it, I thought I would write down what I know about dynamic programming.

What is it?

When I think of dynamic programming I think of filling in an array, using already-filled-in elements as part of the inputs to each new element.  However, Wikipedia tells me that the term is actually much more generic.  Dynamic programming is an optimisation technique for solving recursive problems where each recursion leads to a ‘smaller’ problem, and where, while following the recursion, specific sub-problems occur multiple times.

The optimisation technique is one of avoiding calculating the results of repeated sub-problems more than once.  This covers memoization, which is the act of first checking whether a given problem has already been solved and is in a cache – if so, return the cached result; otherwise calculate the result and store it in the cache.  I personally think of memoization separately, with dynamic programming only being what Wikipedia calls bottom-up dynamic programming.

Bottom-up dynamic programming is about realising that if your cache is some form of array and there is a (preferably easy) order in which you can traverse entries of that array such that all sub-problems required to fill in a given entry are already filled in, you can do away with the recursion entirely.  Just fill in the cache ‘bottom-up’, from the smallest sub-problems to the actual problem you want to solve.  The code to do this is almost always simpler and cleaner, and it avoids potential stack overflows for large inputs.  It also usually performs a bit better, because the way you access the cache table is more likely to be sequential, giving better hit ratios for the CPU cache, not to mention avoiding all the function call overhead.

Why use it?

Well for Wikipedia’s general term, you use dynamic programming because otherwise the code will perform too slowly.  Specifically you want to use it when the recursion repeats sub problems not once or twice, but maybe exponentially in the size of the original problem.  Even if the number of repeats isn’t so huge, if it is greater than constant, dynamic programming gives you a significant win in terms of how large an input you can handle within a given time limit.  For bottom-up dynamic programming you use it because it reduces the amount of code you have to write (good for coding competitions) and usually performs better (sometimes good for coding competitions).

Why not use it?

For general dynamic programming, you don’t use it because either a) it isn’t applicable or b) there is a better approach.  Sometimes a problem for which you can work out a recursive solution is still too slow even with dynamic programming.  For a coding competition, given you know that a solution exists, this is sometimes a sign that some non-recursive approach can be used to solve the problem (although for me it is often the case that I simply haven’t got the best recursive formulation, and looking at the problem differently will let dynamic programming solve it).

For bottom-up dynamic programming, you don’t want to use it if for a given input problem, the sub-problems actually required to solve it are only a small subset of the total size of the array needed to be used as a cache (memoization using a dictionary for a cache instead of an array is the better option in that case).  An exception to this is when you are given multiple inputs and the same fully populated array can be used for all of them.  Otherwise bottom-up dynamic programming is what you want when the recursion for a problem is dense in all possible sub-problems.

How to use it?

And here comes the tough question…  Dynamic programming as a technique is so simple that most of the time how to use it isn’t a problem.  The problem is ‘how do I turn problem A into a recursive problem with repeated sub problems where I can then take advantage of dynamic programming to solve efficiently’.

So all I can do is give a few hints and examples.

Hints

The first hint is to consider the input size (this is applicable to coding competitions only, obviously).  If the product of the sizes of the ranges for each of the inputs is greater than about 50 million, then your problem either can’t be solved by dynamic programming, or you need to work out some way to simplify it first.

The second hint is that a problem which contains a list of objects, where the maximum list length is ~16 or less, might be a recursion on whether each element is included or not.  In that case one of your array indexes will be a number constructed with each bit corresponding to one of the elements.  Usually in these cases the recursion is over removing one element, and you can fill in the array in ascending numeric order, as every number which has an additional bit set is greater than the current number.  So this is not a case where you need to use memoization instead of bottom-up dynamic programming.

Third hint, if your recursion involves 4 or more parameters, you might be doing it wrong.  This is not a definite but coding competition problems rarely require more than 2 indexes for their dynamic programming cache arrays, very rarely more than 3.

Fourth hint, sometimes the problem can be broken into a number of smaller problems, each of which can be solved quickly using dynamic programming (often all at once), and then each problem recombined to give the final result, where a direct attack on the problem will not result in a useful recursive definition.

Examples

Here are a few classic problems.

1) How many ways can a string (specified) be made by taking characters in order from a document (also specified)?

2) How many ways can you choose k items from a set of n?

3) Given n items of varied value and (positive integer) size, and a (moderate) limit on the total size you can take with you, what is the maximum value you can take with you?

4) Given a grid of cells which are either filled or empty, what is the largest square of empty cells which can be found?

A1) So what recursion can we define to solve the problem?  How about defining f(n,k) as ‘how many ways can the first n characters of the string be made by taking characters in order from the first k characters in the document?’.  Then f(n,k) = f(n,k-1) + (document[k-1]==string[n-1] ? f(n-1,k-1) : 0).  We’ll need to define termination conditions as well, so f(0,k)=1 for k>=0 and f(n,0)=0 for n>=1.  If the string is ‘aaaaaaaaaaaaaaaaaaa’ and the document is 500 repetitions of the character a, this recursion will quite obviously explode, as every call makes 2 recursive calls and the minimum depth before hitting a termination condition is the length of the shorter string.  Obviously repeated sub-problems occur frequently.  We could use memoization or we could do bottom-up dynamic programming.  This problem is a nice special case for bottom-up dynamic programming, because the recursion only refers to elements with one less value of k, so to fill in any cell in column k of the array f[n,k] we only need values from column k-1.  So if n is the row and k is the column, we can do each column one at a time, and after we do each column the next one only needs that column, nothing before it.  Therefore rather than using a rectangular array, we can use 2 column arrays, current and next, filling in next based off current, then switching them, repeating until we reach the column we need.  This is especially useful if the document is very long, because it saves a lot of memory.
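
A sketch of the two-column version (ignoring whatever modulus or big integer an actual problem statement would want for the counts):

// Number of ways to form 'target' by taking characters in order from 'document',
// keeping only the previous column of the DP table.
static long CountWays(string target, string document)
{
    int n = target.Length;
    long[] current = new long[n + 1];
    current[0] = 1;                               // f(0,0) = 1: the empty string
    foreach (char c in document)
    {
        long[] next = (long[])current.Clone();    // f(n,k) starts as f(n,k-1)
        for (int i = 1; i <= n; i++)
            if (target[i - 1] == c)
                next[i] += current[i - 1];        // extend a shorter match with this character
        current = next;
    }
    return current[n];
}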

A2) f(n,k) = f(n-1,k) + f(n-1,k-1), with f(n,0)=1 and f(n,k)=0 for k>n.  I included this question because it is the ‘choose’ function, and the values of the choose function are the binomial coefficients, which are famously described by Pascal’s triangle.  Thus when you write a bottom-up dynamic programming solution you end up implementing a Pascal’s triangle generator.  It also happens to be practically the same problem as the one before, in the case where the string and document are repetitions of the same character.

A3) Define f(n,s) to be the maximum value of a subset of the first n items with total size less than or equal to s.  Then f(n,s) = max(f(n-1,s), f(n-1,s-sizes[n-1]) + values[n-1]), with termination conditions f(0,s)=0 and f(n,s)=-infinity if s<0.  This scenario is a bit different, because the function potentially recurses to negative indexes, which would make our cache array problematic.  We can solve that by simply dropping the right-hand side of the max when s-sizes[n-1] is less than 0.  I included this example because it uses max.  Quite often a coding competition problem which requires you to optimise some scenario will end up as a dynamic programming implementation where each entry requires the choice of the maximum or minimum of multiple options.
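
The bottom-up version of that recurrence, with the negative-index case handled by simply skipping the ‘take it’ option, looks something like:

using System;

// 0/1 knapsack: f[n, s] = maximum value using the first n items within size s.
static long MaxValue(int[] sizes, long[] values, int limit)
{
    int n = sizes.Length;
    long[,] f = new long[n + 1, limit + 1];
    for (int i = 1; i <= n; i++)
        for (int s = 0; s <= limit; s++)
        {
            f[i, s] = f[i - 1, s];                 // leave item i out
            if (s >= sizes[i - 1])                 // take it only if it fits
                f[i, s] = Math.Max(f[i, s], f[i - 1, s - sizes[i - 1]] + values[i - 1]);
        }
    return f[n, limit];
}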

A4) Define f(x,y) to be the maximum sized square of empty cells with its bottom-right corner at position x,y.  Then f(x,y) = 1 + min(f(x,y-1), f(x-1,y), f(x-1,y-1)) if grid[x,y] is empty, otherwise 0.  Termination condition: f(x,y)=0 where x or y < 0.  This works because the first 2 of the min’s give side lengths (less 1) and the 3rd gives the diagonal distance to the opposite corner (less 1); it shouldn’t take too much convincing to see that a square 1 greater than the smallest of these 3 is guaranteed to be all empty, and it obviously can’t be larger than that.  Finally, no specific value of f(x,y) is actually the solution to the problem.  The solution to the problem is the maximum over all of them.  This is a case of hint number 4.
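
And a sketch of that one, taking the maximum as we fill in:

using System;

// f[x, y] = side of the largest all-empty square whose bottom-right corner is at (x, y).
static int LargestEmptySquare(bool[,] empty)
{
    int rows = empty.GetLength(0), cols = empty.GetLength(1);
    int[,] f = new int[rows, cols];
    int best = 0;
    for (int x = 0; x < rows; x++)
        for (int y = 0; y < cols; y++)
        {
            if (!empty[x, y]) continue;            // f stays 0 for filled cells
            f[x, y] = (x == 0 || y == 0) ? 1
                : 1 + Math.Min(f[x, y - 1], Math.Min(f[x - 1, y], f[x - 1, y - 1]));
            best = Math.Max(best, f[x, y]);
        }
    return best;
}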

And that’s about all I can write up for tonight.  I might add some more hints and examples another time.

GCJ10 R1C

This round needed at least one problem out entirely, and if it wasn’t the third problem you needed the whole second problem or the small input for the third.  So it looks like it may not have been quite as easy as R1B.

Q1) Given n lines which all start and end with one of 2 x coordinates, and the list of y coordinates for each endpoint which are all distinct, and a guarantee that there will be no coincidence of intersections (no places where 3 or more lines cross), how many intersections are there?  Constraint of n < 1000 for the big input.

A1) This is a simple O(N^2) problem: each line crosses another if the y coordinate order of one pair of endpoints is the opposite of the other pair.  I think I recall there being a way to do this faster… indeed the contest analysis mentions that this can be done in O(N log N); it is the number of inversions in a permutation.  That is, first sort the lines by their first end point, then determine how many pairs switch ordering when you sort by the second end point.  A merge sort can be adapted to perform this calculation, which is equal to the number of neighbour swaps required to sort them – something I have seen in previous problems, which I have also solved using O(N^2) because the problem input size never needed anything else.  I should probably look at adding this merge sort trick to TMD.Algo.
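
Roughly what that adaptation looks like (counting, during the merge, how many elements of the left half each right-half element jumps over):

using System;

static long CountInversions(int[] a)
{
    return Sort(a, new int[a.Length], 0, a.Length);
}

// Merge sorts a[start..end) and returns the number of inversions within it.
static long Sort(int[] a, int[] buffer, int start, int end)
{
    if (end - start <= 1) return 0;
    int mid = (start + end) / 2;
    long count = Sort(a, buffer, start, mid) + Sort(a, buffer, mid, end);
    int i = start, j = mid, k = start;
    while (i < mid || j < end)
    {
        if (j >= end || (i < mid && a[i] <= a[j])) buffer[k++] = a[i++];
        else { count += mid - i; buffer[k++] = a[j++]; }   // a[j] jumps the rest of the left half
    }
    Array.Copy(buffer, start, a, start, end - start);
    return count;
}

For the lines problem you would sort the lines by one endpoint and count inversions in the sequence of the other endpoints.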

Q2) Given a known range which contains the maximum number of simultaneous connections a computer can handle (the upper end is a known failure, the lower end a known success), and a requirement to reduce that range such that the top end is less than C times larger than the lower end (where C is an integer between 2 and 10, inclusive), what is the minimum number of load tests required to guarantee that in the worst case?

A2) Bit of an odd question.  One would first think a binary search would be ideal, as the size of the range would reduce consistently.  However, our target is not the size of the range but the ratio of top to bottom.  Therefore we want a skewed binary search where the selection point is the one which results in A/S = S/B, so our selection point is sqrt(AB).  Now since the load tests can only be performed on integers, we’ll have to choose one of the 2 integers near the value of the square root.  In order to be ideal we must choose the one which has the best worst-case ratio; might as well use fractions just to be safe in comparing them.  Then we simulate this process until the ratio is less than or equal to C.  The sample answer is not quite the same; however, given that with this selection point the ratio after one step is sqrt(A/B), after n steps it is (A/B)^(1/(2^n)).  If we want that to be <= C then we need A/B <= C^(2^n), which is exactly the formula in the sample answer, so I think my approach is correct and should produce the required results.
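
Ignoring the integer-rounding subtlety, that closed form turns into a tiny loop; a sketch (upper plays the role of A and lower of B in the ratio above):

using System.Numerics;

// Minimum number of load tests so that, in the worst case, upper/lower <= C.
static int TestsNeeded(long lower, long upper, int C)
{
    int tests = 0;
    BigInteger power = C;              // C^(2^0)
    while (upper > lower * power)      // while the ratio still exceeds C^(2^tests)
    {
        power *= power;                // C^(2^n) -> C^(2^(n+1))
        tests++;
    }
    return tests;
}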

Q3) Given a rectangle patterned with white and black squares, you need to make chessboards, starting with the largest chessboards (which can be larger than 8×8) all the way down to the smallest chessboards (1×1).  The largest chessboard available is always taken first, if there are multiple of equal size, top most and then left most are subsequent tie breakers.  Small problem set the input size is 32×32, large problem set the input size is 512×512.

A3) The small problem size isn’t too bad; we can simply brute force it, with a bit of caching to speed things up by a constant factor or two.  The large problem size however is an entirely different story: an O(N^5) solution most certainly isn’t going to cut it, and even an O(N^3) solution will likely fail to run in time.  This probably has something to do with the low success rate that this problem had.  So can the problem be done in O(N^2)?

One trick is to construct the chessboards efficiently; every chessboard is itself made of smaller chessboards.  We require overlap to ensure consistency, so first identify all 2×2 chessboards.  Those 2×2s can be used to create 3×3s, 3×3s can be used to create 4×4s and 5×5s, 4×4s can create 6×6s and 7×7s, and 5×5s can create 8×8s and 9×9s.  If we try to race to the largest size and then work our way down filling the gaps in, this has potential.  However each time we cut a chessboard out, we need to invalidate efficiently as well: all chessboards which overlap with the area need to be invalidated.  I am still not sure this is fast enough – it looks awfully like even with the best care and data structures it’ll be O(N^3) in the worst case.  Time to check out the sample solution I think…

So yes, it can be done better than O(N^3) – O(N^2 log N) in fact – and the solution is very smart.  First of all, use dynamic programming to find the largest chessboard which has its top-left corner positioned at each location.  I’ve seen this before but I’m so rusty I had clean forgotten it.  First check if the location is the top-left of a 2×2 chessboard; if not, the answer is 1.  Otherwise it is 1 + the minimum of the largest chessboard sizes for each of the other 3 locations in that 2×2 chessboard.  If we store each cell in a heap ordered by its largest size, then y and x coordinates, we have the order to remove things in; the trick becomes the invalidation due to removal.  When you remove an area, every cell in that area now has a largest square of 0, but other cells may need their largest square updated as well.  As it happens, since we just removed the largest square, we know that no locations more than double that square away overlap.  So the number of locations to invalidate, recalculate and re-add to the heap is at most 4 times the number of locations removed, and since each cell can only be removed once, the number of invalidations is worst case 4*N^2.  Since each invalidation is a heap removal and addition, which are O(log N) each, we get O(N^2 log N).
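
The dynamic programming step on its own, before any of the heap bookkeeping, looks something like this (board holds the two colours as booleans; just a sketch of the recurrence described above):

using System;

// largest[x, y] = side of the biggest chessboard whose top-left corner is at (x, y).
static int[,] LargestChessboards(bool[,] board)
{
    int rows = board.GetLength(0), cols = board.GetLength(1);
    int[,] largest = new int[rows, cols];
    for (int x = rows - 1; x >= 0; x--)
        for (int y = cols - 1; y >= 0; y--)
        {
            largest[x, y] = 1;                       // a single cell is a 1x1 chessboard
            if (x + 1 < rows && y + 1 < cols
                && board[x, y] != board[x + 1, y]
                && board[x, y] != board[x, y + 1]
                && board[x, y] == board[x + 1, y + 1])
            {
                largest[x, y] = 1 + Math.Min(largest[x + 1, y],
                    Math.Min(largest[x, y + 1], largest[x + 1, y + 1]));
            }
        }
    return largest;
}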

All in all I suspect I would have placed in the top 300 in this round, top 400 in the previous.  Even allowing for the fact I had a below average run in round 1A, I probably would have only made top 400 in that round (maybe 250 on a good day).  So it doesn’t look good for getting top 500 and a t-shirt this year unless I do some serious improving.

GCJ10 R1B

Not sure whether attendance for 1B was higher or the problems easier, but the cut-off for getting through was a much higher score…  Unlike my round where solving any single problem with a decent time was sufficient, this round you needed at least 2.

Q1) Given a command for making new directories, which will only work if the parent directory exists, a list of current directories, and a list of directories required, determine how many command calls are required.  Constraints are up to 100 existing paths, 100 desired paths and each path at most 100 characters (and hence at most 50 deep)

A1) This problem makes R1A Q1 almost look tricky – just maintain a dictionary of existing paths and, for each new path, add/trim/add/trim until an add fails, counting each successful add.  The constraints are so small that the cost of generating a lot of garbage strings isn’t even a problem, so no need for any kind of optimisation.  You do need to be careful you don’t try to add the empty string, but otherwise it is trivial.
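
A sketch, with the path handling exactly as naive as described (paths are assumed to look like "/a/b/c" with a leading slash):

using System.Collections.Generic;

// Count the mkdir calls: walk up each desired path, remembering missing ancestors,
// until we hit something that already exists.
static int CountMkdirs(IEnumerable<string> existing, IEnumerable<string> desired)
{
    var known = new HashSet<string> { "" };        // the root always exists
    foreach (string path in existing)
    {
        string current = path;
        while (known.Add(current))                 // existing paths and their ancestors
            current = current.Substring(0, current.LastIndexOf('/'));
    }
    int commands = 0;
    foreach (string path in desired)
    {
        string current = path;
        while (known.Add(current))                 // Add fails once the path already exists
        {
            commands++;
            current = current.Substring(0, current.LastIndexOf('/'));
        }
    }
    return commands;
}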

Q2) Given a set of ‘points’ at different locations, and their velocities, and the position of some goal, and a rule that one point passing another incurs a ‘cost’ of 1, what is the minimum cost to have at least N points at the goal by time T?

A2) Okay, so this one is an actual problem… We can start by considering for each point, whether it can reach the goal based on its own velocity. If not we can kind of ignore those points.  If the number of points which can reach it based on own velocity is less than N, we can fail immediately.

At this point I would think that the rightmost N points which can reach the destination in time will be the ones we want.  Then the cost is, summed over each of those N points, the number of non-reachables to the right of it.  If we were to choose a point further left instead, it would have to pass more non-reachables to get to the goal in time, increasing the cost.  This greedy approach seems sound, I’ll just check someone’s solution… yep, it is sound.  I can now see why you had to solve 2 problems to get through…
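
A sketch of that greedy count; the parameter names (positions, velocities, goal, time) are mine, and I am assuming positions are strictly left of the goal with no ties, as I believe the problem guarantees:

using System.Linq;

// Walk from the point nearest the goal backwards; take the first 'needed' points
// that can reach the goal in time, paying 1 for every unreachable point each of
// them has to overtake.  Returns -1 if fewer than 'needed' can reach at all.
static long MinSwaps(long[] pos, long[] vel, long goal, long time, int needed)
{
    var order = Enumerable.Range(0, pos.Length).OrderByDescending(i => pos[i]);
    long cost = 0, blockers = 0;
    int taken = 0;
    foreach (int i in order)
    {
        if (taken == needed) break;
        if (pos[i] + vel[i] * time >= goal)
        {
            cost += blockers;       // must pass every slow point between it and the goal
            taken++;
        }
        else
        {
            blockers++;             // in the way of anything further left
        }
    }
    return taken == needed ? cost : -1;
}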

Q3) Given a set of numbers 2 to N, determine the number of subsets which contain N and its rank (ordered inclusive position 1-based), its rank’s rank, its rank’s rank’s rank, and so on until you get to the first element in the set.

A3) A math problem… and by far the hardest problem.  Constraints are N is 25 or smaller, or 500 or smaller for the large.  Quite a few got out the small constraint, but not so many for the large.

That being said, it isn’t really that hard a problem.  It’s a dynamic program over sets of length k which end with m and satisfy the input constraint given m.  Both k and m are limited to being less than or equal to N, and the recursion is going to have to consider each shorter length, so O(N^3) is in play.  With N of 500, that is 125 million steps, so each step itself had better be quick.  So we’ll need a closer look to be sure.

The recursion shall be defined as the sum over each shorter length with m equal to the current k.  However, each of those sum components needs to be multiplied by the number of possible ways to choose the (difference in lengths - 1) extra numbers which are greater than k and less than m.  A direct approach makes that O(difference in lengths), which is worst case O(N) and nowhere near fast enough.  However, all of the possible choose calculations can be precalculated in O(N^3) (or O(N^2) if you cache factorials and inverses mod p).  The important bit here is that the value of n choose k might be huge and the problem requires results modulo 100003.  Using TMD.Algo this will not be a problem, as I have an efficient n choose k mod p implementation already done and 100003 is prime (see, implementing stuff useful for last year’s GCJ is also useful for this year…).  The net result is a couple of hundred million multiplications, additions and modulos.  The same lookup table can be used to answer all 100 inputs, so 8 minutes is plenty of time.  In fact, that last realisation is one I would have been at risk of missing during the contest; otherwise I would say I’d have gotten all 3 problems out for sure if I had been in this round.  Without reusing the table the time constraint might be an issue, possibly depending on how fast your computer is… the lookup tables are close to fitting into cache on a good processor, but even then it might not be enough.
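
For reference, the simple O(N^2) precalculation (Pascal’s rule modulo the prime) is all you need at N = 500; a from-scratch sketch, not the TMD.Algo version:

// Table of n choose k modulo a prime; one table answers every query, which is
// what makes the 8 minute limit comfortable.
static long[,] ChooseTable(int maxN, long mod)
{
    long[,] c = new long[maxN + 1, maxN + 1];
    for (int n = 0; n <= maxN; n++)
    {
        c[n, 0] = 1;
        for (int k = 1; k <= n; k++)
            c[n, k] = (c[n - 1, k - 1] + c[n - 1, k]) % mod;
    }
    return c;
}

// e.g. var choose = ChooseTable(500, 100003);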

GCJ10 R1A

So, Google Code Jam round 1A was today, and while I got through in 876th place (and so won’t need to stay up for 1B or 1C), I feel the need to grumble a bit.  I started with the first problem; it was nice and simple (especially considering I wrote AppleHunt…).  I then went to the 3rd problem because I noticed that it was doable with the small input constraints and the constraints on the large input were the same as the small input.  By the time I had written my first non-functioning prototype for the solution, they had changed the constraints for the large input and my code wasn’t going to even come close to running in time (and it would have required a couple of terabytes of RAM…).  I spent a few minutes working on a different approach, since I had already done a bit of work on the problem, and the relaxed constraints actually suggested how the problem had to be solved.  However, it quickly became apparent that I should go back to problem 2.  Another look at problem 2 and I was convinced that it was a simple dynamic programming problem.  However despite 3 submissions (including me noticing a blindingly stupid bug with <2 min left on the clock) I wasn’t able to get something which solved the problem.  I had no clue why…  If I hadn’t been misled into trying problem 3 first, I almost certainly would have either gotten problem 2 out, or at least submitted a solution for the small input size which used brute force rather than dynamic programming.

So I just had a look at a solution to problem 2 and worked out why mine failed: a subtle error in my thinking while constructing the dynamic programming resulted in it failing to consider the combination of 2 scenarios I had considered.  I also looked at problem 3 and have to say that the cool solution is excessively mathematical, but there is an option which performs reasonably and is more about programming…

So on to my analysis…

Q1) Given a board up to 50×50 with red/blue pieces on it – cause them to slide to form stacks coming left from the right hand side of the board.  Then determine for each of blue or red whether there exists k or more in a row either horizontally, vertically or diagonally.

A1) This problem is easily implemented in O(N^3) (where N is the width of the board) and that is actually sufficient; however I saw an O(N^2) solution and took it for the hell of it.  The first step is to slide all the pieces – for each row you simply perform an order-maintaining sort over empty/non-empty – or, given that empty cells have no state, you maintain the last moved-to position and move each piece to one left of that as you find it, either using actual moves, or simple assignments with a clear of all positions to the left of the top of the created stack afterwards.  O(N^2) for this step, or up to O(N^3) if you ‘stable bubble sort’ the pieces into position for each row.

Once the board is set, you have to determine who has ‘won’.  A simple walk of every square, walking out in each of the 8 directions (or even only 4), will determine this.  That gives O(N^3).  I used DisjointSetTracker from TMD.Algo to create disjoint set collections for each of the 4 types of win.  For each square I unioned it with its neighbour in that win type’s direction if they were the same colour – O(N^2).  I then determined the size of each disjoint set, using dictionaries to accumulate how frequently each representative member appears over every possible input square – O(N^2) – and if a disjoint set’s size is greater than or equal to k, we look up who owns the representative member and they are a winner.
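
For completeness, here is a sketch of the simpler version I said was sufficient: slide each row, then walk out from every cell in 4 directions (cell values '.' for empty and a colour character otherwise; the names are mine):

// Slide all pieces to the right within each row, preserving their order.
static void Slide(char[,] board)
{
    int n = board.GetLength(0);
    for (int r = 0; r < n; r++)
    {
        int write = n - 1;                       // next free position from the right
        for (int c = n - 1; c >= 0; c--)
            if (board[r, c] != '.')
            {
                char piece = board[r, c];
                board[r, c] = '.';
                board[r, write--] = piece;
            }
    }
}

// True if 'colour' has k or more in a row horizontally, vertically or diagonally.
static bool Wins(char[,] board, char colour, int k)
{
    int n = board.GetLength(0);
    int[] dr = { 0, 1, 1, 1 };                   // right, down, down-right, down-left
    int[] dc = { 1, 0, 1, -1 };
    for (int r = 0; r < n; r++)
        for (int c = 0; c < n; c++)
            for (int d = 0; d < 4; d++)
            {
                int run = 0, rr = r, cc = c;
                while (run < k && rr >= 0 && rr < n && cc >= 0 && cc < n && board[rr, cc] == colour)
                {
                    run++; rr += dr[d]; cc += dc[d];
                }
                if (run >= k) return true;
            }
    return false;
}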

Q2) Given a sequence of numbers between 0 and 255 inclusive, and the ability to delete (for cost D), insert (for cost I), or change (for cost absolute difference between value before and value after) each number, determine the minimum cost to make every number have an absolute difference with its neighbours of less than or equal to M.

A2) The small problem constraints had a maximum of 3 inputs, something you could brute force.  However with up to 100 numbers, the large problem (which is the only one I actually attempted) required some other solution.  It was fairly obvious that you could define the problem recursively as the minimum cost to satisfy the first n numbers ending with value k.  And that is where I made my mistake.  I defined ending with value k to be after inserts were added, which is incorrect; it has to be ending with value k after modifying the value only.  With that correction we can proceed.  The cost for 1 element is 0; otherwise it is the minimum over satisfying scenarios with 1 less element.  There are 3 scenarios which need to be considered.  Deletion, which is the cost of the same final value with 1 less element + D.  Change, which is the cost of each possible previous ending value within M of the current target value + the absolute difference between the current actual value and the target value.  Insertion, which is a bit more subtle – so long as M is non-zero it is the cost of each possible previous ending value + I times the difference between previous and target divided by M (using ceiling rather than floor to convert to an integer) – I, + the change cost.

Using dynamic programming you have a table of size 100*256 and a net cost of 100*256*256 since each entry requires an inner loop over previous values.  Apparently there is a way to calculate the table entries in average O(1) time each, but it is beyond my skill level.  On the other hand it actually starts with making the same mistake I did, but fixing it using some serious smarts.

Q3) Given 2 numbers A and B, you can subtract a multiple of the smaller from the larger (even if that multiple is 1) and end up with a non-negative number.  If we turn this into a game, where the aim is to not be left in a scenario where you are forced to end up with one of the numbers being 0, determine how many winning scenarios there are for you given all combinations of 2 input ranges for A and B (both being at most 1 to 1000000).

A3) The small input constrained the 2 ranges to being at most 30 wide, meaning only 900 games needed solving for each problem.  Given that I was looking at GCD in the qualifying round for another problem, I immediately recognised the potential for GCD to be related to this problem.  I determined that there is no difference between Solve(A, B) and Solve(A/GCD(A,B), B/GCD(A,B)), and I was going to use this to accelerate my solving of the problem.  Assuming A>=B: A=B is a loss, and A%B=0 is a win since you can leave B, B.  I then used GCD as an accelerator to reduce the number of sub-problems to solve, plus memoization.  Obviously I made a mistake, as it didn’t produce the right results for the test inputs, but I think the approach would have been sufficient for the small input.  What I had failed to see was that there was a much better minimum condition for an obvious win than A%B=0: A >= 2B is a win.  This can easily be seen because either the smallest possible next non-zero value for A is a win, or it is a loss and the smallest next value for A plus B (which is < 2B) will force the opponent to create the smallest next non-zero A, B combination which you just found was a loss.  This is much more powerful and certainly puts the small solution easily within reach.

So what about the large problem set, where the ranges for A and B in inputs can both be up to 1 million wide?  Testing every possible scenario is… infeasible, so there obviously has to be some approach for determining whether a combination is a win or loss, for each possible value of A, in constant time for the entire range of B inputs at once.  This is where I think the problem becomes a failure.  If A >= 2B is a win, then 2B > A > B is the interesting range for A.  In this scenario we only have one possible move, which is to end up with B and A-B (which is now the smaller one).  If B >= 2(A-B) it is now a win, which means the original position was a loss.  Thus if A < 3B/2 the original position is a loss.  And we can keep recursing like this, chipping away at either side of the interesting range until it disappears.  That is okay and you can perform that recursion in log time – which, while not constant time, is plenty good enough.  Where the problem fails is that this recursion can all be thrown away by a single comparison to the golden ratio, which is what the outside edges of the interesting range converge on, from above and below.  Why does that make the problem a failure?  Because it is interesting enough that it has been solved before – so you have a number of people who will just pull the golden ratio out of nowhere and submit a solution, which is all math and requires no algorithms.

An alternative approach, inspired by seeing that the interesting area can be recursively chipped away on the left and right, is that there is ultimately a single number which is the transition from win to lose.  You can then binary search to find that transition point, which is again fast enough; I think that solution ends up being O(N log^2 N) – maybe better with memoization…
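
The small-input recursion itself is tiny once you have the A >= 2B observation; a sketch:

// Win/lose for a single position; assumes a, b >= 1.
static bool IsWin(long a, long b)
{
    if (a < b) { long t = a; a = b; b = t; }
    if (a == b) return false;        // forced to leave a zero: a loss
    if (a >= 2 * b) return true;     // the stronger win condition discussed above
    return !IsWin(b, a - b);         // only one sensible move remains
}

For the large input you would binary search, for each B, for the smallest A at which this flips from lose to win.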

R1B is tonight – I’ll do a write up for it tomorrow.

Odd bug in 64bit .Net runtime

Was investigating a bug today and found some very peculiar behaviour in the 64bit .Net runtime.

Let’s say we have a thread running the following loop.

while (true)
{
    try
    {
        Thread.Sleep(10000);
    }
    catch
    {
    }
    try
    {
    }
    catch
    {
    }
}

Okay so that code is not something you would write, I’ve reduced it down to the bare essentials to demonstrate the problem.

If you throw a thread abort at this thread (for whatever reason…), most likely it is going to land in the sleep statement.  The thread will be woken up and the catch block invoked; then, at the end of the catch, the thread abort is rethrown and the thread exits… right?  Not on the 64-bit runtime.  On the 64-bit runtime, with some strategic debugging, you can see that the thread abort does not get rethrown (something to do with the second try/catch block, since if you take it away it works), not until the while loop has gone round and you hit the sleep statement again.  The net result is a tight loop continually throwing ThreadAbortException.  If you put some code in the try section of the second try/catch, that will often cause the ThreadAbort to get rethrown and the loop terminates – unless the code is in a finally block or is somehow otherwise not suitable for thread abort raising, in which case it won’t rethrow the ThreadAbort and the loop continues.  The second try/catch block can be a try/finally instead and it still loops forever.

Final trick, if you step through in the debugger rather than letting the code run, it behaves correctly!

All in all a strange bug.  It affects both the .Net 2 and .Net 4 runtimes, although I’ve only checked the most recent releases of both on Windows 7 / Server 2008 R2 environments.

TCO10 QR2

So QR2 for TCO10 was yesterday, and I decided to take a look at the questions today.

Q1) Given a list of sell prices and buy prices for a specific type of object, and a tax on selling, determine maximum profit given you start with none of the item.

This is pretty easy.  Calculate the tax for each sale and reduce the sell price accordingly.  Then sort the adjusted sell prices descending and the buy prices ascending, and while the adjusted sell price is greater than the corresponding buy price, add the difference to the profit.
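
Roughly (names are mine, and the exact tax calculation is whatever the statement says):

using System;
using System.Linq;

// Pair the best remaining (after-tax) sale with the cheapest remaining buy while
// the trade is still profitable.
static long MaxProfit(long[] sellPrices, long[] buyPrices, Func<long, long> taxOnSale)
{
    var sells = sellPrices.Select(p => p - taxOnSale(p)).OrderByDescending(p => p).ToArray();
    var buys = buyPrices.OrderBy(p => p).ToArray();
    long profit = 0;
    for (int i = 0; i < sells.Length && i < buys.Length && sells[i] > buys[i]; i++)
        profit += sells[i] - buys[i];
    return profit;
}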

Q2) Given a square array of live, dead and unknown cells, determine the optimal values for the unknowns to ensure the maximum number of live cells after 1 round of the rules for Conway’s game of life.  There is an important restriction that no cell will have 2 or more unknowns in its set of neighbours.

Again this is pretty easy, all because of the restriction, but there are a few tricks.  First you need to put the input array into a larger array so that you have a border (for easiest implementation, a border of width 2), since cells may come to life outside of the input array.  Because of the restriction, every unknown can be considered separately, which means the algorithm runs O(n) in cells.  For each neighbour of an unknown, calculate whether it will be alive or dead for each of the live and dead options for the unknown.  Do the same for the unknown itself.  Choose whichever of the two cases gives the maximum sum of live cells and set the unknown to that case.  Once all unknowns are set, simulate the entire board for one step and return the live count.

Q3) Given a paragraph (up to 1000 characters) with all the spaces and punctuation removed, and a list of up to 50 words (each up to 50 characters), determine a ‘tiling’ of the paragraph using the words (each word may be used 0 or more times) which maximizes the square of the longest consecutive tiled sequence minus the number of uncovered characters.

This is a big jump in difficulty and I have to say that I probably would not have gotten this one out.  It is fairly obviously dynamic programming; the problem comes in determining the right way to achieve it.  My first guess was to dynamic program over the first N characters with the last M covered, but I realised I also needed to have the largest consecutive cover P as a parameter too.  This meant that the DP space was length cubed, way too large.  After looking at another competitor’s solution I now see the correct approach.  Use dynamic programming to calculate the minimum number of uncovered characters for a range starting at i and finishing at j.  This can be achieved by starting with the minimum for the range of one less length + 1, and then minimizing by checking which words match at the end of the range and using the minimum for the range minus the matched word.  Note that the act of checking the words is quite expensive in the worst case, so pre-caching whether each word matches at each location keeps the runtime under control.

Once we have the minimum number of uncovered characters for each possible range, we loop over each possible range where that minimum is 0, square the range length and subtract the minimum number of uncovered characters before and after the range.  Return the maximum of all of those possible values and -length.  This works because although it doesn’t discriminate as to whether the range selected is the maximum consecutive tiled section, when the maximum consecutive tiled section is selected it will receive a larger score, because the number of uncovered characters will be the same and the selected range length will be larger.

GCJ10 Qualifying Round

So the qualifying round is over and ~1500 people (including me) got all the questions out successfully.  8523 people got the required 33 points to advance to the first round.

Questions were very mathematical, only the 3rd question really had any algorithmic design to it.

Q1) This was an amazingly convoluted way of asking when an n bit integer is all 1’s.  It boiled down to k % (1 << n) == (1 << n) - 1, a single line.  I suspect the problem was inspired by reading Knuth.  He has a section where he describes the bit effect of adding 1 to a number (among other things…): the least significant 0 and all less significant bits toggle.  Which is the same as how a chain of ‘snappers’ was described to work.

Q2) This was a very mathematical question, with a barb to annoy anyone whose programming language doesn’t include a big integer.  The question was: given a set of up to 1000 numbers, what is the maximum common divisor you can get by increasing them all by the same non-negative integer, and more specifically, what is the minimum non-negative integer which gives that maximum common divisor?  The kicker being that the numbers were anything up to 10^50, so they won’t fit in a 64-bit integer.

The solution comes from identifying that an upper bound on the common divisor of n numbers is the GCD of the absolute differences between the numbers.  Then if the minimum number is a multiple of that GCD, the rest of the numbers will be as well, and this is always achievable for any set of numbers.  For .Net 4 the problem was trivial – BigInteger has a GCD method, so you just parse the numbers, calculate the absolute differences, calculate the GCD, calculate the remainder of any entry with respect to the GCD, and if it is non-zero subtract it from the GCD to get the minimum shift.  GCD can be implemented using Euclid’s algorithm if it isn’t available.  And if you don’t have a big integer, good luck…  My code had one change compared to the description above: I sorted the entries rather than calculating absolute differences, but I’m pretty certain that isn’t required.
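
The whole thing is only a few lines with .Net 4’s BigInteger; a sketch of the steps above:

using System.Numerics;

// Minimum non-negative shift so that every number becomes a multiple of the
// largest achievable common divisor.
static BigInteger MinimumShift(BigInteger[] numbers)
{
    BigInteger gcd = 0;
    for (int i = 1; i < numbers.Length; i++)
        gcd = BigInteger.GreatestCommonDivisor(gcd, BigInteger.Abs(numbers[i] - numbers[0]));
    if (gcd == 0) return 0;                    // degenerate: all numbers equal
    BigInteger remainder = numbers[0] % gcd;
    return remainder == 0 ? 0 : gcd - remainder;
}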

Q3) This problem had the longest implementation, but is mathematically simpler than problem 2.  The question was given a roller coaster with k seats and n groups of people of specific sizes in a specific order and the roller coaster does R runs per day and every paying customer is worth 1 dollar and everyone who went on the roller coaster goes back on the end of the queue in the same order they went on to the rollercoaster (no one ever leaves), how much money will the roller coaster operators get in a given day?

So it is a simple simulation problem, but R can be extremely large, making simulating every run too expensive.  Also k*R can be larger than a 32-bit integer, so you need to be careful you don’t overflow by using 32 bits at inappropriate times.  The algorithmic key here is to identify that the simulation of a run is always the same for any given starting offset into the queue, so you only have to consider at most 1000 distinct simulations (the maximum value for n).  The final piece is to realise that if you ever get back to the same starting person, you have started a loop and that sequence of simulations will be repeated.  The loop may not start from the first simulation, but a loop will exist eventually, and the loop cannot be longer than 1000 entries.  So to calculate the answer you break it into 3 parts.  1) Simulate until the loop is discovered.  2) Calculate the sum of the loop, determine how many more complete times it can be run before reaching R (using integer division), and calculate that product.  3) Calculate the modulo to determine the final partial loop and sum those steps.  Those 3 numbers are summed to give the answer.  Of course if R is small you may not need step 2 or 3.
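
A sketch of that three-part calculation; since the queue order never changes, each starting offset deterministically gives one run’s income and the next offset:

using System.Collections.Generic;

static long TotalEarnings(int[] groups, long seats, long R)
{
    int n = groups.Length;
    long[] income = new long[n];
    int[] next = new int[n];
    // Precompute one run from every possible starting offset.
    for (int start = 0; start < n; start++)
    {
        long people = 0;
        int pos = start, boardedGroups = 0;
        while (boardedGroups < n && people + groups[pos] <= seats)
        {
            people += groups[pos];
            pos = (pos + 1) % n;
            boardedGroups++;
        }
        income[start] = people;
        next[start] = pos;
    }

    long total = 0, run = 0;
    int current = 0;
    var seenRun = new Dictionary<int, long>();
    var seenTotal = new Dictionary<int, long>();
    // Part 1: simulate until a starting offset repeats (or we finish all R runs).
    while (run < R && !seenRun.ContainsKey(current))
    {
        seenRun[current] = run;
        seenTotal[current] = total;
        total += income[current];
        current = next[current];
        run++;
    }
    // Part 2: add whole repetitions of the loop in one go.
    if (run < R)
    {
        long loopLength = run - seenRun[current];
        long loopIncome = total - seenTotal[current];
        long loops = (R - run) / loopLength;
        total += loops * loopIncome;
        run += loops * loopLength;
    }
    // Part 3: simulate the final partial loop.
    for (; run < R; run++)
    {
        total += income[current];
        current = next[current];
    }
    return total;
}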