GCJ16 R1C

I’ve been a bit busy, so I didn’t get around to writing this up earlier.

A quite tricky third problem alongside two reasonably easy ones meant the cut-off required quite a fast time on the first two, or at least also solving the small input of the third.

I was amused to read the name of the person who came 1005th. ‘i.will.get.a.t-shirt’ missed out on getting to round 2 by 15 seconds…

Q1) Determine how to reduce a set of counts to zero so that no single member ever holds a majority of what remains; up to 2 can be removed at each step.

A1) This problem has two parts: the idea that avoiding a majority comes from reducing the largest count, and the realization that if there are only 2 non-zero counts they must be equal, and hence you have to remove one from each or a majority is formed.  From there it’s straightforward.
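As a rough illustration of that greedy, here is a minimal sketch, assuming the input is simply an array of per-party counts and that parties are labelled ‘A’, ‘B’, … (the real problem’s input/output format is not reproduced here):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Evacuation
{
    // Returns one removal step per entry, e.g. "AC" or "B", assuming the starting
    // configuration has no majority (hypothetical input/output shape).
    public static List<string> Plan(int[] counts)
    {
        var steps = new List<string>();
        while (counts.Sum() > 0)
        {
            // Non-empty parties ordered largest first.
            var order = Enumerable.Range(0, counts.Length)
                                  .Where(i => counts[i] > 0)
                                  .OrderByDescending(i => counts[i])
                                  .ToList();
            if (order.Count == 2 && counts[order[0]] == counts[order[1]])
            {
                // Exactly two (necessarily equal) parties left: remove one from each.
                counts[order[0]]--; counts[order[1]]--;
                steps.Add("" + (char)('A' + order[0]) + (char)('A' + order[1]));
            }
            else
            {
                // Otherwise just remove one member of the largest party.
                counts[order[0]]--;
                steps.Add("" + (char)('A' + order[0]));
            }
        }
        return steps;
    }
}
```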

Q2) Given a set of nodes, create a directed acyclic graph which has exactly M paths between 2 specific nodes.

A2) For me this problem was pretty easy, but I could see it would be easy to get stuck thinking about it when you should be investigating to discover the trick.

My initial thought was recalling that it’s possible to number the nodes in a directed acyclic graph such that every node points only to nodes with a larger value.  The scenario with the maximum number of paths is therefore the one which maximizes the number of edges while keeping all edges pointing forwards.  So node 1 connects to 2 through N, node 2 connects to 3 through N, etc.

The number of paths in such a graph can be calculated easily enough – assign node 1 the value 1, then for each node in ascending order, sum the values of all the nodes that connect to it, which is every node seen so far.  This gives the sequence 1, 1, 2, 4, 8, 16, …

At this point the trick is to realize that the final node is connected to all the previous nodes in this fully connected graph, which is why its value is 2^(N-2); if we were to break one of those links, its value would be reduced by the corresponding value of the node it was linked to.  Obviously if M = 2^(N-2) we need all of them, but otherwise we’ve got all the smaller powers of two and we just need to connect them based on the bit pattern of M.  Or, just start from the largest and if it is no larger than the remaining M, connect it and reduce M, otherwise don’t connect it.

The final result is a triangular matrix, excluding the connections to the last node, which are either all present, or a 1-shifted copy of the bit pattern of M.
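A sketch of that construction, under the assumption that the nodes are numbered 1..N, node 1 and node N are the two specific nodes, and the output is a character adjacency matrix (the real problem’s I/O format is not reproduced here):

```csharp
using System;

static class ExactPaths
{
    // Build an adjacency matrix for a DAG on nodes 1..n with exactly m paths from node 1 to
    // node n.  Returns null when m is outside the achievable range (at most 2^(n-2)).
    public static char[,] Build(int n, long m)
    {
        long max = 1L << (n - 2);
        if (m < 1 || m > max) return null;

        var adj = new char[n + 1, n + 1];
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                adj[i, j] = '0';

        // Fully connect the first n-1 nodes "forwards": i -> j for all i < j < n.
        for (int i = 1; i < n - 1; i++)
            for (int j = i + 1; j <= n - 1; j++)
                adj[i, j] = '1';

        // Number of paths from node 1 to node i in that graph: 1, 1, 2, 4, ..., 2^(n-3).
        var paths = new long[n];
        paths[1] = 1;
        for (int i = 2; i <= n - 1; i++) paths[i] = 1L << (i - 2);

        // Greedily wire nodes into node n, largest contribution first, until m is used up.
        long remaining = m;
        for (int i = n - 1; i >= 1; i--)
        {
            if (paths[i] <= remaining)
            {
                adj[i, n] = '1';
                remaining -= paths[i];
            }
        }
        return remaining == 0 ? adj : null;
    }
}
```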

Q3) Determine how to repeatedly choose one item from each of 3 separate piles of distinct items such that you never choose the same set of three more than once, and you never choose the same subset of two more than K times.

A3) It’s very tempting to just greedily choose the next possible combination that doesn’t violate the conditions – but it can easily get stuck.  This explains the large number of failing attempts on the small input, or so I think anyway.

To try and understand this problem I implemented the brute-force for the small input.  Despite what it suggests in the contest analysis, it is possible to (with some care) write a brute-force solver which runs in time.  You do have to be sure not to allocate any memory in the loop…

I didn’t really find my understanding of the problem significantly improved by being able to compare the basic greedy to the correct brute-force, but it did reinforce my thought that greedy was the right technique; I just needed to tweak it to avoid getting stuck.  So, I tried selecting from the third pile in reverse, alternating direction every second time, and finally (I thought somewhat at random) by using a rotating offset driven by the selection of the first two piles.

This final option worked, much to my surprise – although now I’ve read the contest analysis it makes much more sense – the rotation clearly ensures it won’t get stuck.

 

TCO15 R1B

Having passed through already I didn’t compete in this round, but I was interested to see how the turnout was without a Google Code Jam happening at the same time.  Only 1023 people registered, which, after allowing for the 700 advancers last time, is only ~400 new people (assuming everyone who competed last time and didn’t advance competed again) that became available without the code jam running.  Looks like round 2 will probably be quite short of its 2500 target if this keeps up.

The positive score criterion appears to have been the cut-off again, with only 591 advancers.

Q1) Find the size of the largest subrange of an array where at least half of the numbers have a common divisor other than 1.

A1) With the array size limited to 50, and the values limited to 1000, this is a trivial brute force.  There are 1,275 subranges, of average length about 17, and (obviously) fewer than 1000 prime numbers to test for divisibility.  There are apparently 168 primes less than 1000, so it’s not a huge win, but it does help a little if you happen to be working with a particularly slow programming language, I guess.
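A sketch of that brute force, taking ‘at least half’ to mean twice the divisible count is at least the subrange length (my reading, not necessarily the exact statement):

```csharp
using System;
using System.Collections.Generic;

static class SubrangeDivisor
{
    // For every subrange and every prime up to 1000, check whether at least half of the
    // subrange's values are divisible by that prime, and track the longest subrange that works.
    public static int LargestGoodSubrange(int[] values)
    {
        // Simple sieve for the primes below 1000.
        var primes = new List<int>();
        var composite = new bool[1001];
        for (int p = 2; p <= 1000; p++)
        {
            if (composite[p]) continue;
            primes.Add(p);
            for (int q = p + p; q <= 1000; q += p) composite[q] = true;
        }

        int best = 0;
        for (int i = 0; i < values.Length; i++)
        {
            for (int j = i; j < values.Length; j++)
            {
                int length = j - i + 1;
                foreach (int p in primes)
                {
                    int divisible = 0;
                    for (int k = i; k <= j; k++)
                        if (values[k] % p == 0) divisible++;
                    if (2 * divisible >= length) { best = Math.Max(best, length); break; }
                }
            }
        }
        return best;
    }
}
```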

Q2) Determine the expected count of objects found, where each object has a probability of being found on its own and, if found, a list of other objects that will then definitely be found (which expands transitively).

A2) Up to 50 objects, so a brute force over every combination of which objects are found on their own is out of the question.

Each object’s list can be expanded out transitively easily enough, but how to combine the probabilities is not obvious.  If an object is in isolation and nothing connects to it, the probability is simple.  If the object is part of a cycle which has no external connections, the probability is 1 – product(probability of each member of the cycle not being selected).  It’s the external connections which are trickier.

I think all cycles can be collapsed, each replaced with a single node with a ‘base’ probability as described above.  Once the cycles are removed, the graph is a directed acyclic forest.  The roots are obvious: they have their base probability and that’s it.  Children then have a probability of being chosen of 1 – product(base probability of not being chosen, and total probability of not being chosen for each parent) – remembering that each node can have multiple parents and we have to have calculated the total probability for each parent before we can start.

Apparently I’ve made the problem more complicated than required though – in effect the probability of an item being found is 1 – product(base probability of not being chosen) for every node that is transitively connected to the item in question.  I can see how the math works out to be the same, but it wasn’t an obvious starting point.
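A sketch of that simpler formulation – expand each object’s ‘also found’ list transitively, then apply the product formula – assuming the input arrives as an array of probabilities plus a jagged array of ‘found for free’ indices (a hypothetical shape, not the real problem’s format):

```csharp
using System;
using System.Collections.Generic;

static class ExpectedFinds
{
    // P(item i found) = 1 - product over every item j that (transitively) leads to i of (1 - p[j]),
    // where "leads to" includes j itself.  leads[j] lists the items found for free when j is found.
    public static double ExpectedCount(double[] p, int[][] leads)
    {
        int n = p.Length;
        var reaches = new bool[n, n];            // reaches[j, i]: finding j also finds i.
        for (int j = 0; j < n; j++)
        {
            // Depth-first walk of the "also found" lists starting from j.
            var stack = new Stack<int>();
            stack.Push(j);
            while (stack.Count > 0)
            {
                int cur = stack.Pop();
                if (reaches[j, cur]) continue;
                reaches[j, cur] = true;
                foreach (int next in leads[cur]) stack.Push(next);
            }
        }

        double expected = 0;
        for (int i = 0; i < n; i++)
        {
            double notFound = 1.0;
            for (int j = 0; j < n; j++)
                if (reaches[j, i]) notFound *= 1.0 - p[j];
            expected += 1.0 - notFound;
        }
        return expected;
    }
}
```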

Q3) Given a tree, determine the number of distinct subsets of 7 vertices connected such that the second has the first as a parent/grandparent (an ancestor), the 3rd, 4th and 5th have the second, and the 6th and 7th have the 5th.  Subsets are distinct only if they have different members; order doesn’t matter.

A3) The number of vertices in the tree is up to 2000, so brute force is trivially out of the question.  It would seem a multiple-pass dynamic program is in order.

First pass is number of transitive children for each vertex.  Second pass is number of ways to select 2 children of a given vertex.  Third pass is sum of products of the second pass for each child and the number of ways to choose 2 other children which are not the selected child or its descendants.  Fourth pass is sum of the 3rd pass for each child.  Final answer is the total of all values in the 4th pass.

The 3rd and 4th pass enumerate all transitive children for each vertex, this will only perform acceptably if you store the children as an adjacency list rather than checking for every potential vertex as a child in an adjacency matrix.  Additionally the number of ways to select 2 children needs to be done as N*(N-1)/2 – not by enumerating.  Calculating N in the second pass is trivial from first pass, N in the 3rd pass is first pass value minus the first pass value for the currently selected child minus 1 more for the currently selected child itself.

Binary Functions

There are 16 binary functions over 2 variables.  This is a simple consequence of there being 4 input possibilities and only 2 output possibilities for each.  In computer programming we frequently use ‘and’ and ‘or’, less often ‘xor’, but that still leaves 13 more.

Let’s take a closer look at all 16.  For brevity I will label each by their outputs for the inputs F,F T,F F,T T,T – so ‘and’ is FFFT.

  1. TTTT – ‘always true’.  Not very interesting, but it exists.
  2. FFFF – ‘always false’.  Again not very interesting.
  3. FFFT – ‘and’.  Well known and used in programming frequently.
  4. TTTF – ‘not and’.  Simple combination of the negation function with the and.
  5. FTTT – ‘or’.  Well known and used in programming frequently.
  6. TFFF – ‘not or’.  Simple combination of negation and ‘or’.
  7. FTTF – ‘xor’.  Not quite as well known as ‘or’, but simply ‘or’ where it can’t be both.
  8. TFFT – ‘not xor’.  Again a simple combination of negation and ‘xor’.  But it is also the A == B scenario and thus xor can also be described as A != B.
  9. FFTT – ‘B is true’.  First parameter is ignored, and second is just passed through.
  10. TTFF – ‘B is not true’.  Again just the negation of the one before.
  11. FTFT – ‘A is true’.  This time the second parameter is ignored.
  12. TFTF – ‘A is not true’.  Yet another simple negation.
  13. TFTT – ‘A implies B’.  If A is true, B must be true, but otherwise anything goes.
  14. FTFF – ‘A and (not B)’.  This is also the negation of ‘A implies B’ but programming sees combinations of ‘and’ and ‘not’ far more frequently.
  15. TTFT – ‘B implies A’.  If B is true, A must be true.
  16. FFTF – ‘(not A) and B’.  As for 14, but reversed.

So, why did I list all these?  So I could categorize them.

  • Really, really boring: TTTT, FFFF – No interest in the inputs.
  • Boring: FFTT, TTFF, FTFT, TFTF – Only one input is used.
  • ‘And’ variants: FFFT, TFFF, FFTF, FTFF – Only one scenario is true.
  • ‘Or’ variants: FTTT, TTTF, TFTT, TTFT – Three scenarios are true.
  • Strongly linked: TFFT, FTTF – Two scenarios are true, and both inputs are used.

I am now going to state that I think ‘and’ variants are actually boring as well.  Since there is only one possible true state, they all result in you being able to say something about A or something about B, in isolation.  The problem is separable.  Which is just the nature of ‘and’, but it makes it less interesting to me.

This leaves 6 ‘interesting’ functions.  A is the same as B.  A is the opposite of B (xor).  A implies B.  B implies A.  Not both false (or).  Not both true (nand).

Of these, ‘or’ is obviously the most commonly used in programming.  Interestingly, despite being a fundamental element of predicate logic, I have not yet worked with a programming language where ‘A implies B’ has its own operator; you have to write ‘B or not A’, which is a bit clumsy.
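For what it’s worth, the workaround is a one-liner; a tiny helper along these lines is about as close as most languages get to an implication operator:

```csharp
static class Logic
{
    // 'A implies B' written the way most languages force you to: !A || B (truth table TFTT).
    public static bool Implies(bool a, bool b) => !a || b;

    // And its mirror, 'B implies A' (truth table TTFT).
    public static bool ImpliedBy(bool a, bool b) => a || !b;
}
```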

Where does this thought train lead me… well maybe I will have more to say on this topic another time.

TCO13 R1B

As previously mentioned I didn’t have to get up at 4am to do this round, but from a quick look at the final scoreboard it appears the questions were significantly easier.  Almost half of the advancers solved the 1000pt question correctly.  The 600th place cut-off required solving the 250pt problem in decent time plus a successful challenge as well.

In this competitive environment only 1 Australian advanced, 2 more falling just below the cut-off having solved the 250pt quickly.

Q1) Given an even number of integers, they need to be made into pairs.  The value of a pair is the sum of its two members.  Determine the minimum difference between the lowest and highest values in such a set of pairs.  There are between 2 and 50 starting integers, and the numbers are all in the range 1 to 1000.

A1) Without justification – the simple approach is to sort the inputs and pair them from outside to inside.  Then just return the difference between the largest and smallest pairs created this way.  This is also correct, but proving it appears not so trivial. (Certainly in the competition this would have put me at risk, not being able to prove it usually delays my submission.)
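A minimal sketch of that pairing, assuming the input is just an int array:

```csharp
using System;
using System.Linq;

static class PairDifference
{
    // Sort, pair the smallest remaining with the largest remaining, and return the spread
    // between the biggest and smallest pair sums.
    public static int MinSpread(int[] values)
    {
        var sorted = values.OrderBy(v => v).ToArray();
        int minSum = int.MaxValue, maxSum = int.MinValue;
        for (int i = 0, j = sorted.Length - 1; i < j; i++, j--)
        {
            int sum = sorted[i] + sorted[j];
            minSum = Math.Min(minSum, sum);
            maxSum = Math.Max(maxSum, sum);
        }
        return maxSum - minSum;
    }
}
```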

Q2) Given a rectangular array which is partially populated, and the ability to clear up to R consecutive rows at a time or C consecutive columns at a time, determine the minimum number of ‘clears’ required to clear the array completely.  Rectangular array is maximum 15×15.

A2) So the first key point to this problem is that the maximum size is 15.  This means a brute force over every subset of rows, or columns (but not both), is plausible.  The next key point is that the order in which you perform your clears does not matter; if 2 clears overlap, the common cells will be cleared regardless of order.  So you can do all your row clears before your column clears, without loss of generality.

This leads to a fairly simple approach – try every subset of rows (including empty) and then use a greedy clearing to determine the minimum number of column moves to clear what remains, and also to determine the minimum number of moves needed to implement the original subset of rows selected. (Don’t worry if your greedy approach clears more than the selected subset; taking the minimum number of moves is more important, and the larger subset you actually erase is a better solution anyway.)

Simple runtime is O(2^(Min(w,h)) * w*h) – each subset requires 2 greedy clearing passes, which cost proportional to the size of the full array in the worst case.  This runtime chooses the smallest dimension, but in practice the code is simpler if you just standardise on rows or columns, it will run fast enough regardless.  An ideal solution might be to implement it faster by using a sparse array representation much like the one used in the Dancing Links algorithm.  But that certainly wasn’t required here.
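A sketch of the subset-of-rows enumeration with the two greedy covering passes, assuming a boolean grid where true marks a populated cell (a hypothetical input shape):

```csharp
using System;

static class GridClears
{
    // grid[r][c] is true when the cell is populated; a single move clears up to maxRows
    // consecutive rows or up to maxCols consecutive columns.
    public static int MinClears(bool[][] grid, int maxRows, int maxCols)
    {
        int h = grid.Length, w = grid[0].Length;
        int best = int.MaxValue;

        for (int subset = 0; subset < (1 << h); subset++)
        {
            // Greedily cover every row in the subset with moves of maxRows consecutive rows.
            // rowCleared also records rows wiped as a side effect, which can only help.
            var rowCleared = new bool[h];
            int moves = 0;
            for (int r = 0; r < h; r++)
            {
                if ((subset & (1 << r)) != 0 && !rowCleared[r])
                {
                    moves++;
                    for (int k = r; k < Math.Min(h, r + maxRows); k++) rowCleared[k] = true;
                }
            }

            // Columns that still contain populated cells after the row clears.
            var colNeeded = new bool[w];
            for (int r = 0; r < h; r++)
                if (!rowCleared[r])
                    for (int c = 0; c < w; c++)
                        if (grid[r][c]) colNeeded[c] = true;

            // Greedily cover those columns with moves of maxCols consecutive columns.
            int covered = -1;
            for (int c = 0; c < w; c++)
            {
                if (colNeeded[c] && c > covered)
                {
                    moves++;
                    covered = c + maxCols - 1;
                }
            }

            best = Math.Min(best, moves);
        }
        return best;
    }
}
```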

Q3) Given a dictionary of distinct words and the following rules, determine the minimum number of remaining words you can end up with.  There are 2 rules.  The first rule is that any time you get the same word duplicated, the duplicate and the word are removed.  The second rule is that you can replace any word with the same word with an even-length prefix reversed.

A3) The key point for this problem is that the even-length prefix reversal lets you create almost any anagram out of an even-length word.  For odd-length words you get almost any anagram where the last character hasn’t moved.  Secondly, it doesn’t matter in what order you remove pairs that match along the way, because there are no dead ends in the reordering, and if a can be made to match with b and c, then b and c can be made to match.

Almost any anagram isn’t every anagram.  Specifically it is any anagram of the original pairs, where each letter in a pair itself can be in any order.  The pairs never get split up in the process, so you can’t reorder 1234 into 1324.  We can create a canonical form for matchable groups in this subset of anagrams by sorting each pair of letters, then sorting the resultant pairs.  For example the word ‘word’ would first be rearranged to ‘owdr’ and then to ‘drow’ to give the canonical order. (FIFO is its own canonical ordering under these rules…)

So for each word, if it is even length, replace it with the word with its characters in canonical order.  If it is odd length, do the same but leave the last character in place.  Then do a unique count of the resulting words.  If the count for a unique word is even, all pairs can be removed; if the count is odd, you have to keep 1 copy of that word.  So simply count those odd ones up and you have your answer.
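A sketch of that canonicalisation and the final parity count (just the grouping, not the full I/O):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class PrefixReversal
{
    // Canonical form described above: sort each adjacent pair of letters, then sort the pairs;
    // an odd-length word keeps its final letter in place.  e.g. "word" -> "drow".
    static string Canonical(string word)
    {
        int pairedLength = word.Length - (word.Length % 2);
        var pairs = new List<string>();
        for (int i = 0; i < pairedLength; i += 2)
        {
            char a = word[i], b = word[i + 1];
            pairs.Add(a <= b ? "" + a + b : "" + b + a);
        }
        pairs.Sort(StringComparer.Ordinal);
        string result = string.Concat(pairs);
        if (word.Length % 2 == 1) result += word[word.Length - 1];
        return result;
    }

    // Minimum words remaining: any canonical group of even size vanishes entirely,
    // an odd-sized group leaves one word behind.
    public static int MinRemaining(string[] words)
    {
        return words.GroupBy(Canonical).Count(g => g.Count() % 2 == 1);
    }
}
```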

This was pretty easy for a 1000pt problem (certainly it seems easier than the 500pt problem) – but identifying the way to canonicalise the available subset of anagrams isn’t totally trivial, so it might stump a few people.

Finding a gap in an ordered sequence of distinct integers presented in random order

Unrelated to my recent interviews at Google, I saw the following interview style example question.

If you are given a random sequence of at most 4 billion 32-bit integers, find a 32-bit integer that is not used. (There must be one – why?) Answer this in 2 parts – if you have unlimited memory, and if you have limited memory but plenty of disk storage options.

This problem presents itself to me as a case of the pigeon hole principle and the associated ‘counting sort’.  32bit integers have 2^32 options, which is a bit less than 4.3 billion – hence by the pigeon hole principle, 4 billion pigeons cannot fill 4.3 billion pigeon holes.  Unlimited memory means you can allocate the 16GiB needed for a proper counting sort – but since we don’t actually care about repetition counts in answering the question of what number is missing – you can instead allocate a 512MiB bitset and simply mark every observed number as a 1 bit, then search said bitset for a 0 bit.  And if your disk seek times are not a huge disaster, the bit set can be stored on disk instead.  This solution is O(N) in a sense, if you ignore the cost of allocating the bitset.  Obviously the limitation on size of input originally technically makes it O(1) with a massive constant – but that is a pretty non-useful viewpoint.
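A sketch of the bitset variant, assuming the numbers arrive as a stream of unsigned 32-bit values; the 512MiB allocation is the up-front ‘constant’ cost discussed below:

```csharp
using System;
using System.Collections.Generic;

static class MissingInteger
{
    // One bit per possible 32-bit value; mark everything seen, then scan for a zero bit.
    public static uint FindUnused(IEnumerable<uint> numbers)
    {
        var bits = new ulong[1 << 26];              // 2^32 bits = 2^26 ulongs = 512 MiB.
        foreach (uint v in numbers)
            bits[v >> 6] |= 1UL << (int)(v & 63);

        for (long word = 0; word < bits.Length; word++)
        {
            if (bits[word] == ulong.MaxValue) continue;   // every value in this block was seen
            for (int bit = 0; bit < 64; bit++)
                if ((bits[word] & (1UL << bit)) == 0)
                    return (uint)(word * 64 + bit);
        }
        throw new InvalidOperationException("No gap - impossible with fewer than 2^32 inputs.");
    }
}
```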

Usually I would have probably stopped there and not considered the question further (except to mention that an O(N log N) solution can be done using sequential disk access, and in this case log N is at most 32, so sequential read/write of integers vs random access at the 1-bit size is almost certainly going to make the O(N log N) solution perform better in practice) – but for some reason the fact that we have the fixed ‘constant’ time of allocating the bitset was sticking with me.  This morning I came up with an O(N) time algorithm to find a gap in an ordered sequence of integers without repetition presented in random order – with no 32-bit restriction. (And as a bonus, it acts like the O(N log N) algorithm in many respects, so a restricted memory implementation would get the performance advantages of sequential access to disk.)  Obviously the original problem doesn’t have the restriction of no repetition, and removing duplicates in O(N) uses hashing – which gives random access again and non-O(N) behaviour if hashing is poor – but I think the resultant problem is still interesting.

The algorithm isn’t very complicated and probably of limited use, but I don’t often come up with something I haven’t seen before.  It uses as its basis the O(N) running time minimum, maximum and median algorithms.  The O(N) median algorithm has a fairly high constant factor – but it is only needed if you want guaranteed worst-case time bounds – see the bottom for how to adjust the algorithm to give randomised expected O(N) time with a much lower constant.

The algorithm is recursive with inputs of ‘integer array’ and min/max of range to find the gap in.  In the original problem min/max would have been 0 and 2^32-1 (for unsigned integers), but if you are explicitly interested in gaps you can start with setting min/max to the results of running minimum and maximum on the input array.

From there the trivial algorithm is as follows – if min of range is greater than max of range, return ‘no gaps’.  If the input array is empty, return min of range.  Otherwise find the min/max/median of the input array – if min > min of range, return min of range – if max < max of range, return max of range.  Otherwise filter the input array to values between min and median (exclusive) or median and max (exclusive), whichever range has the lowest density – if equal, choose either.  Then recurse using this filtered array and its corresponding exclusive range.

Performance analysis is trivial – each recursion reduces the number of elements considered by half (since the median is the halfway point), and each step takes time proportional to the size of the current array – leading to the N + N/2 + N/4 + … worst case which gives O(N).

Correctness depends on there being no duplicates – obviously whenever it returns something other than ‘no gaps’ it has found a gap, but it is the lack of duplicates which lets us say that it will only return ‘no gaps’ if there are no gaps.  Because there are no duplicates, any given range in the sorted sequence either has no gaps (which gives a density of 1) or it has gaps (which gives a density less than 1).  When we divide the sequence into the groups smaller/larger than the median and calculate the density of each, all we really care about is whether the density is 1 or not.  Since the no-gaps case always has a higher density than the gaps case, we will always recurse into a side which contains gaps – unless both densities are 1. (Indeed a trivial extension of the original algorithm above is that if both densities are 1, return ‘no gaps’ rather than recursing.)  If we do recurse into a side which has no gaps, we will keep recursing until the input contains 2 or fewer consecutive numbers – the next recursion will then consist of an empty array and a max less than min – which is the case where we return ‘no gaps’.

Choosing the lower density side is simply an average-case performance improvement – if the input set is biased, the lower density section is more likely to terminate in fewer recursions.  Technically that step can just be ‘choose a side whose density is not 1’.

As an addendum – this algorithm is a bit slow if guaranteed worst case O(N) is not required (because of the cost of the guaranteed O(N) median finding algorithm) – you can drop the median requirement entirely and just choose a random element as the pivot to filter the array.  This still gives an expected running time of O(N) based on the randomisation – but it can degenerate – and in the worst degenerate scenario you will need to deal with an extra case of a 0-length filtered array with a 0-width range, which should be treated as if it has a density of 1.
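Here is a sketch of the randomised-pivot variant from the addendum, assuming distinct values all lying within the lo..hi range passed in; a null return means ‘no gaps’.  For the distinct-values version of the original range, the initial call would be something like FindGap(values, 0, (1L << 32) - 1).

```csharp
using System;
using System.Linq;

static class GapFinder
{
    static readonly Random Rng = new Random();

    // values: distinct integers, all within [lo, hi].  Returns some value in [lo, hi] that
    // is not present, or null if there is no gap in that range.
    public static long? FindGap(long[] values, long lo, long hi)
    {
        if (lo > hi) return null;
        if (values.Length == 0) return lo;

        long min = values.Min(), max = values.Max();
        if (min > lo) return lo;           // lo itself is missing
        if (max < hi) return hi;           // hi itself is missing

        // Split around a random pivot into the values strictly between min and pivot,
        // and strictly between pivot and max; recurse into a side that is not "full".
        long pivot = values[Rng.Next(values.Length)];

        long[] left = values.Where(v => v > min && v < pivot).ToArray();
        long leftWidth = pivot - min - 1;
        long[] right = values.Where(v => v > pivot && v < max).ToArray();
        long rightWidth = max - pivot - 1;

        if (left.Length < leftWidth) return FindGap(left, min + 1, pivot - 1);
        if (right.Length < rightWidth) return FindGap(right, pivot + 1, max - 1);
        return null;                        // both sides are dense: no gap between min and max
    }
}
```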

Google+

So I got a Google+ invitation today.  Admittedly I am not a big social network user (not even close) but I was definitely interested to see the differences between Google+ and Facebook.  People have talked about ‘circles’ in the media a lot, and they certainly are an excellent feature but they were not top of my thoughts.  Far more than circles I think the key difference between Google+ and Facebook is the fact that I don’t need your permission to add you to one of my circles.  Admittedly unless you share a lot of items with ‘public’ this doesn’t do me much good but conceptually it just feels so much nicer… and there is something to be said for conceptual nicety.

There are definitely a few niggles – for example you can secure your links section on your profile, but there doesn’t appear to be a way to customize it on a per-link basis.  Currently I only have one link, so that is hardly an issue, but I could see it being significant to some people.  Another case which I think is probably much more interesting is the ‘people who are in my circles’ list on your profile.  You can select which circles to include, and you can select which circles can see it at all, but I could see a strong case for options like ‘anyone who is in a circle can see who else is in that circle, but not other circles’ or even much deeper customization.

It would probably be nice to define circles using other circles.  So if there are a couple of circles that I frequently want to share with, I could create a ‘union’ circle; when either dependent circle updates, the union circle membership would update automatically, reducing my mistakes.  Similarly, an ‘except’ circle like ‘all my friends except …’, again reducing mistakes.  Then again, I currently only have 1 person in my circles…

I would be interested to see whether Google+ ends up with a strong integration with the existing multiple accounts feature of Google – I definitely see that as being a potential win for people with ‘Gamer Identities’ among other scenarios.  I would test it, but somehow I think that sending an invite to yourself is not the intention!

GCJ11 R1B

Since I got through in round 1A, there was no need for me to get up at 2am and try this round.  Apparently 53 other Aussies did though.  This time round a fast time on question 1 was not sufficient, you needed to get out a small input from either the second or third questions as well.  Third question was probably a good choice there because it was a simple brute force on that input size – but it appears most people just went for question 2.

Q1) Given a matrix of results for a set of teams, where each team has played at least 2 matches, and there are no draws, just win/loss/hasn’t played yet – determine the RPI for every team.  RPI is defined as 0.25*WP + 0.5*OWP + 0.25*OOWP.  WP is the percentage of games a team has won.  OWP is the average of the WP for each opponent of the team, where that WP is calculated excluding games against the current team.  OOWP is the average of each opponent’s OWP, defined as above.

A1) Pretty straightforward question – as the 97% success rate from 4500+ people suggests.  In the large input there are up to 100 teams, but all but the most brute-force of approaches will pass this.  To be sure you pass the large input, the approach to take is to not start by trying to calculate the RPI of each team directly, but instead to calculate sub-problems and then build the results from those.  A memoized recursion approach would work as well, but the problem is a bit simple for that level of code complexity.

The basic sub-problem is ‘what is the WP for team a, optionally excluding team b as an opponent’.  This can be represented in a square array, using the diagonal for the plain WP of each team, since excluding yourself achieves nothing.  This can be calculated in O(N^2) time using a multiple-pass approach: first fill the array with the WP for each team, then adjust each entry by removing the a vs b game from the average. (Use fractions rather than real numbers by storing the numerators and denominators in 2 arrays.)  Given the constraints though, a brute force O(N^3) algorithm is fine…

Next up is the OWP list, filled in O(N^2) trivially using the previous array.  Finally we can do the RPI, using the diagonal of the first array, the values in the OWP list, and calculating the OOWP as we go by averaging OWP entries as appropriate.  Again this is filled in O(N^2), so the overall algorithm is achievable in O(N^2).
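A sketch of the simple O(N^3) version, assuming the results arrive as a character matrix with ‘W’/‘L’/‘.’ entries (similar in spirit to, but not necessarily identical to, the real input format):

```csharp
using System;

static class Rpi
{
    // results[a][b]: 'W' if team a beat team b, 'L' if it lost, '.' if they have not played.
    public static double[] Compute(string[] results)
    {
        int n = results.Length;

        // wp[a, excluded]: win percentage of team a with any game against 'excluded' ignored;
        // the diagonal wp[a, a] is just team a's plain WP.
        var wp = new double[n, n];
        for (int a = 0; a < n; a++)
        {
            for (int excluded = 0; excluded < n; excluded++)
            {
                int wins = 0, games = 0;
                for (int b = 0; b < n; b++)
                {
                    if (b == excluded || results[a][b] == '.') continue;
                    games++;
                    if (results[a][b] == 'W') wins++;
                }
                wp[a, excluded] = games == 0 ? 0 : (double)wins / games;
            }
        }

        // owp[a]: average, over a's opponents o, of o's WP excluding games against a.
        var owp = new double[n];
        for (int a = 0; a < n; a++)
        {
            double sum = 0; int opponents = 0;
            for (int o = 0; o < n; o++)
            {
                if (results[a][o] == '.') continue;
                sum += wp[o, a];
                opponents++;
            }
            owp[a] = opponents == 0 ? 0 : sum / opponents;
        }

        // rpi[a] = 0.25*WP + 0.5*OWP + 0.25*OOWP, with OOWP the average of opponents' OWP.
        var rpi = new double[n];
        for (int a = 0; a < n; a++)
        {
            double sum = 0; int opponents = 0;
            for (int o = 0; o < n; o++)
            {
                if (results[a][o] == '.') continue;
                sum += owp[o];
                opponents++;
            }
            double oowp = opponents == 0 ? 0 : sum / opponents;
            rpi[a] = 0.25 * wp[a, a] + 0.5 * owp[a] + 0.25 * oowp;
        }
        return rpi;
    }
}
```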

Q2) Given positions of groups of shop carts, and a minimum distance required between each shop cart, and a moving speed of 1m/s, determine the minimum time for them to satisfy these constraints.

A2) The small input wasn’t too bad: at most 100 carts, at most 20 groups, at most a 5 meter separation.  The large input however was up to 1 million carts, in up to 200 groups, and the required separation might be 1000km.  Initial positions were all within a 200km range for both.  It is worth noting that with 1 million carts and a 1000km separation, 64-bit integers will be required.

For a single group the answer is easy enough: (cart count - 1)/2 * min separation.  The problem comes when the groups interact, forcing the centre of the group to move.  However it seems fairly obvious to me that that is all that happens: the centre of a group moves, and that adds to the minimum required time for that group.  The final distribution of carts from a single group, relative to that group, never changes.  So we can reduce the problem to groups, just with a varying separation requirement, which is governed by the sum of the two neighbours’ cart counts.

One more point to prove and I think this problem is solved: that the centres of two groups will never pass one another while going from start position to final position.  This is pretty simple, because if the centres pass then at least 2 carts have passed, and no 2 carts should ever pass – it would take the same time for them to meet, turn around and go to each other’s former destination instead, so passing gains nothing.

Right, so we have a list of groups with a minimum separation between each neighbour, so we can determine exactly where the centre of each group will be relative to a single number, the far edge.  So now we can do a binary satisfiability search on time to work out the minimum time.  Starting with a lower bound which is the time to spread out the largest group without moving its centre, and an upper bound derived from the scenario of a single group of 1 million carts with a 1000km separation requirement, the binary search then proceeds.  Satisfiability is determined by walking each group left to right – the centre of the left-most group is moved as far left as the time will allow – from then on the time constraint and the border of the group to the left both contribute to where to place the centre of the next group.  If the problem is not satisfiable, the position of the group to the left will eventually be so close (or even past the starting position to the right) that there is no way to move the centre of the current group right far enough in time.

Binary search can be performed on floating point numbers, or it can be seen that the answer is always a multiple of 0.5, and so an integer binary search on 2*t will also work.

I think I probably would have solved this in time, whether I would have remembered to use 64bit integers (as I probably would have used the integer binary search approach) is a different question.

Q3) Given an N vertex convex polygon, and M internal non-intersecting walls (which create M+1 internal polygon partitions), determine the maximum number of different colours which can be used to paint the vertexes, such that every partition has every colour present on 1 or more of its vertexes.  Also output any such colouring to prove it possible.

A3) Obviously the answer is at most N.  We can fairly easily go further and say that the answer is at most k, where k is the minimum vertex count over the internal partitions.  The minimum answer is also obviously 1, or we can go further and say that it is 1 + the minimum number of ‘non-shared’ vertexes over the internal partitions.  For the small input the maximum for N was 8, which made a brute force of every possible colouring perfectly feasible.

Problem can be turned into a graph, but with up to 2000 vertexes, it isn’t obvious to me how this could be implemented with a fast enough running time.  Time to look at the answers… (Because I am lazy and don’t want to keep thinking about this problem at this point…)

So, the key here is realizing that each partition joins to another by a single wall, and there is no set of 3 or more partitions which join in a circle; partitions form a tree joined by specific vertex pairs.  If you choose a random partition and colour its vertexes, so long as the two vertexes on each partition wall aren’t the same colour, you aren’t limiting the other connecting partitions in any way, except that if you colour this partition with too many colours, the next partition might not be able to use all of those colours as well.  From this you can see that the partition with the least number of vertexes is the critical bottleneck.  If you label every one of its vertexes a different colour, any/all connecting partitions will have no problem colouring themselves, so long as you don’t repeat the same colour on two vertexes in a row.  Since the minimum number of colours is 3, you can just paint vertexes clockwise around a polygon by going through the colours in order, and if you end up with 2 of the same colour in a row when you complete your walk, you just switch the last 2 vertexes you coloured.

The other tricky part is generating the partition graph fast enough from the input as given.  This actually isn’t too hard, but it certainly is possible to screw it up.

 

GCJ11 R1A

So like last year, I scraped through R1A with only 1 of the available questions solved.  But this year I was a bit faster, so I got 655th rather than 800+.

I was actually very close to a respectable score – an off by 1 which took me almost 2 hours to find… *sigh*

Q1) Given an inclusive upper bound on the number of games played today (and at least 1 game was played), with an ‘exact integer’ percentage win rate for today of Pd and an ‘exact integer’ percentage win rate for all time Pg, is the scenario even possible?

A1) The large inputs put the inclusive upper bound at 10^15 – which seemed daunting at first, but fairly quickly I discovered a short cut which I thought made it irrelevant…

First we can rule out a couple of obvious scenarios – if Pg is 100% then Pd must be 100% or something is wrong.  Similarly, a Pg of 0% implies a Pd of 0% or again we fail.  For me the next realization was that if the inclusive upper bound is >= 100 then the option of playing 100 games today is available – an important option in the presence of percentages, since then every value of Pd is valid.  All that remains is to consider whether Pg is valid…

Pg being valid is actually very simple, so long as it isn’t 0 or 100, we can add so many games prior to today, that today becomes irrelevant, other than to serve as a tiny modulation factor which needs to be compensated for to make Pg an exact integer.  This applies regardless of whether the number of games today was 100, or much smaller.

The final scenario is where the inclusive upper bound doesn’t include 100.  That is fewer than 100 options to step through, so why not!  For each value of n, we check whether n*Pd/100 is an integer – equivalently, whether n*Pd mod 100 = 0.  If we find a match the situation is plausible, if not then the situation is invalid.

So as I alluded to earlier the upper bound max of 10^15 isn’t a problem, 3 if statements take care of all scenarios >= 100.  However when I went to run my code on the large input file, it crashed!  I hadn’t noticed exactly how big the upper bound max was, and was using int.Parse.
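A sketch of those checks, with the bound held in a 64-bit value as the large input demands:

```csharp
static class WinPercentages
{
    // Returns true when the claimed percentages are consistent.  n is the inclusive upper
    // bound on today's games (note the 64-bit type - the large input goes up to 10^15).
    public static bool Possible(long n, int pd, int pg)
    {
        if (pg == 0 && pd != 0) return false;     // never won overall, yet won something today
        if (pg == 100 && pd != 100) return false; // won everything overall, yet lost today
        if (n >= 100) return true;                // 100 games today makes any integer Pd exact

        // Otherwise some games-played count up to n must make Pd an exact integer percentage.
        for (long games = 1; games <= n; games++)
            if (games * pd % 100 == 0) return true;
        return false;
    }
}
```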

Q2) In a variant of hangman where there is no way to lose, just a score that gets worse the more useless guesses you make, you are choosing words from a dictionary and your friend is playing to a specific strategy.  You want to choose the word which will give your friend the worst score – if there are equal options the first word in the dictionary is the one you take.  If the strategy can be described by a permutation of the alphabet, where each letter is guessed in turn unless there are no dictionary words which match the previous guesses which contain the letter, in which case the letter is skipped, given a list of these permutations and the dictionary, what word should you select?

A2) A very wordy question (pun not intended!), and for the longest time the question text itself was self-contradictory.  The large input had a dictionary size of 10k words and up to 100 permutations to try for each dictionary.  A maximum of 10 test cases meant that it wasn’t entirely insane though.  A brute force attack requires a simulation over the whole dictionary for each entry in the dictionary, and performing this for every permutation – it was clearly not going to work in time; O(D^2*P) is billions upon billions of operations for the large input.

I spent a while considering whether there was some dynamic programming approach but eventually came to the conclusion that I was just trying too hard, and that an O(D*P) solution would be okay even with a rather high constant multiplier.  So I started implementing.  The approach is not to try each possible word choice and simulate, but instead to simulate all possible word choices at once.

At this point I should mention something which possibly isn’t clear from how I wrote the question above, which is that since the first thing you do in hangman is to write down the full list of blanks, your friend immediately knows the length of the word, so whatever word you select your friend will only consider words of that length.

Put all the words of a specific length in a list, then go through each one and determine what its response would be to the current letter being tried in the strategy.  The responses (which can be represented by a bit field, as the maximum word length is small) break the words into groups.  Each of these groups is self-consistent – if any of those words had been the originally selected word, the group would be the same set of words that your friend would consider still plausible and the basis of the strategy.  Once each group is created you can move to the next letter in the strategy, applying it to each group, splitting it into potentially even more groups which remain self-consistent.  All that remains is to keep track of the score for each group, and then select the best score once you get to the end.  Repeat for each different word length, and select the best of the best.

Keeping track of the score for each group is fairly easy: the first group has score 0.  If a group responds to a letter by splitting into 2 or more groups, and one of those groups contains words that do not have that letter, that group has its score increased by 1 compared to the input group.  The key thing to notice here is that if the response of a group to a letter is to not split, and the words do not contain the letter, the score doesn’t go up – because this corresponds to the scenario where the strategy is to skip a letter, because none of the plausible words contain it.

I actually had this all written with almost an hour to go; I even took the time to generate the maximal input file to make sure it wasn’t too slow.  Except it didn’t produce the correct answers.  I was confused, and I hunted for potential issues – I only found 1, and it was still wrong.  With 30 minutes to go I decided I would write the brute force implementation, just to get the points for the small input or, if I had time, maybe use it to work out why the proper implementation didn’t work.  But I made copious mistakes in the brute force code – even though it was theoretically simpler, I got it wrong in multiple ways.  After the competition finished, I still had to fix 3 more bugs before it passed the practice small input.  At this point I compared it to my proper implementation, and found a specific test case to focus on.  Turned out that my proper implementation was actually much better; it hadn’t suffered from any of the bugs of the brute force code.  Just a single mistake: when iterating over the dictionary grouping words by length, I hadn’t used foreach, but had instead just iterated over the possible lengths, 1 to 10.  Except my loop stopped at 9, oops!

Q3) You are playing a special single-player card game.  You have at least 1 card in hand at the start, and the rest in the deck – with a total of up to 80 cards.  You start with 1 turn remaining until the end of the game, and each card has 3 values: T, S and C.  T is the number of additional turns you gain by playing it, S is the score you get by playing it, and C is the number of cards you draw from the deck if you play it.  You actually know the exact order and contents of both your hand and the deck before you start – determine the maximum score you can achieve.

A3)  This was the ‘hard’ problem and I definitely agree it was far harder than Q2.  Only 3 people got out the large input, and barely over 100 got the small.

The key difference between small and large was of course the limits imposed.  In the small input, C is at most 1, whereas in the large input C could also be 2.  I think with the C at most 1 limit, the problem can be more obviously turned into a dynamic programming problem…

I certainly had to look up the answer here, but the key comes from determining properties of the ideal answer and using those to limit your search space, which would otherwise be huge.  Firstly it is worth realizing that you should always play a T>0 card if you have one available.  Since they give more turns, there is no downside to playing them.  Next is the C=0 cards.  For a given maximal solution, any C=0 cards (which are not T>0 cards) can be reordered to the end.  Furthermore amongst C=0,T=0 cards you will always want to play them in largest score first order.  All that remains is to determine the order of playing the C=1 and C=2 (T=0) cards in between, if they are played at all.

The final and most important trick here is to determine that if you play 2 C=1 or 2 C=2 (T=0) cards, they can be reordered such that you play the one which arrives in your hand first, first.  Finally this gives enough limitations on the search for an ideal solution.  So we have a graph with status: ‘index in deck of last drawn card’, number of turns remaining (capped at 80, since over 80 is pointless), the first C=1 card we might yet play, the first C=2 card we might yet play, and the first T>0 card we might yet play.  The transitions on this graph always increase one or more of these 5 values, so it is a directed acyclic graph.  Since there are no cycles you can fairly easily search this graph for the ‘longest path’ from the source, where each edge has a score and each vertex has an end bonus related to maximal use of the remaining turns to play available C=0 cards.

To enumerate the possible transitions we have ‘play a T>0 card’, ‘play a C=1 card’, ‘play a C=2 card’, ‘decide to never play the current C=1 card’, and ‘decide to never play the current C=2 card’.  A performance improvement option here is to remove the C=1/C=2 card options whenever a ‘play a T>0 card’ option exists, since we know playing T>0 first is never a bad thing (or alternatively, get rid of all T>0 cards from the status info and state transitions – presume to have played all that could be in the hand as part of every transition, and in determining the initial status).  Use a DFS to generate the graph, as it is going to be sparse and the maximum recursion depth is 80.  Also add backwards-pointing links, because they will be useful for determining the maximum length.

Recursion with memoization will solve the problem at this point.  Starting from the implicit end node which every node connects to via the play of playable C=0 cards, recurse across each back-pointing link added before, using the edge weight (score increase due to that action) to build upon the result of the recursion.  Memoization ensures the runtime is fine.