.Net 4.5.1 Preview

A new .Net preview is out, this time a minor point release, but I decided to do a reference assembly comparison anyway.

Absolutely nothing removed (as expected), but also only a handful of additions to the API.

  • Precision/Scale properties for DbParameter.
  • SqlConnectionStringBuilder gets connection retry count and interval properties.
  • EventSource (Diagnostics.Tracing) gets ConstructionException and CurrentThreadActivityId read-only properties, plus a couple of methods to set the current thread activity ID.  I think the current thread activity ID concept may be new to Windows 8.1, assuming something else I was reading is indeed related to this.
  • EventWrittenEventArgs (Diagnostics.Tracing) gets ActivityId and RelatedActivityId read-only properties.
  • ActiveDirectory has new enum values for 2012 R2 Server domains/forests.
  • MemoryMappedViewAccessor and MemoryMappedViewStream both now have PointerOffset read-only properties.
  • GCLargeObjectHeapCompactionMode enum for use with the new LargeObjectHeapCompactionMode property on GCSettings.  2 values, Default and CompactOnce.  I guess this property auto-resets back to default on the next full GC if set to compact once. (Also in Core.)
  • A bunch of new generic methods on Marshal – looks like a mixture of convenience and avoiding boxing of structs. (Also in Core.)
  • SignedXml – SafeCanonicalizationMethods read-only property.
  • TransactionScope now supports the new TransactionScopeAsyncFlowOption enum in several new constructors.  I presume this controls how transaction scopes interact with async code, which would let you improve performance when you know a certain async section can safely run outside the transaction scope.
  • Web.Hosting.CustomLoaderAttribute
  • HostingEnvironment.StopListening event.
  • IStopListeningRegisteredObject and ISuspendibleRegisteredObject interfaces in Web.Hosting – but I haven’t found what uses them yet.
  • Core only: AsRandomAccessStream extension method for streams.
  • Core only: System.Threading.Timer arrives.

I have listed everything I found, but if you want to look at my raw processed output files for some reason (which are much smaller than last time, due to a far smaller number of changes) there are the v4.5.1 to v4.5 comparison and the v4.5.1 core to v4.5 core comparison.

Binary Functions

There are 16 binary functions over 2 variables.  This is a simple consequence of there being 4 input possibilities and only 2 output possibilities for each.  In computer programming we frequently use ‘and’ and ‘or’, less often ‘xor’, but that still leaves 13 more.

Let’s take a closer look at all 16.  For brevity I will label each by its outputs for the inputs F,F T,F F,T T,T – so ‘and’ is FFFT.

  1. TTTT – ‘always true’.  Not very interesting, but it exists.
  2. FFFF – ‘always false’.  Again not very interesting.
  3. FFFT – ‘and’.  Well known and used in programming frequently.
  4. TTTF – ‘not and’.  A simple combination of the negation function with ‘and’.
  5. FTTT – ‘or’.  Well known and used in programming frequently.
  6. TFFF – ‘not or’.  Simple combination of negation and ‘or’.
  7. FTTF – ‘xor’.  Not quite as well known as ‘or’, but simply ‘or’ where it can’t be both.
  8. TFFT – ‘not xor’.  Again a simple combination of negation and ‘xor’.  But it is also the A == B scenario and thus xor can also be described as A != B.
  9. FFTT – ‘B is true’.  First parameter is ignored, and second is just passed through.
  10. TTFF – ‘B is not true’.  Again just the negation of the one before.
  11. FTFT – ‘A is true’.  This time the second parameter is ignored.
  12. TFTF – ‘A is not true’.  Yet another simple negation.
  13. TFTT – ‘A implies B’.  If A is true, B must be true, but otherwise anything goes.
  14. FTFF – ‘A and (not B)’.  This is also the negation of ‘A implies B’ but programming sees combinations of ‘and’ and ‘not’ far more frequently.
  15. TTFT – ‘B implies A’.  If B is true, A must be true.
  16. FFTF – ‘(not A) and B’.  As for 14, but reversed.

So, why did I list all these?  So I could categorize them.

  • Really, really boring: TTTT, FFFF – No interest in the inputs.
  • Boring: FFTT, TTFF, FTFT, TFTF – Only one input is used.
  • ‘And’ variants: FFFT, TFFF, FFTF, FTFF – Only one scenario is true.
  • ‘Or’ variants: FTTT, TTTF, TFTT, TTFT – Three scenarios are true.
  • Strongly linked: TFFT, FTTF – Two scenarios are true, and both inputs are used.
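The enumeration and categorization above are small enough to verify mechanically. A minimal sketch (the `label` helper, the bit-index encoding and the grouping-by-true-count are my own framing):

```python
# Each of the 16 binary boolean functions corresponds to a 4-bit truth table.
def label(f):
    # Output pattern for inputs (F,F), (T,F), (F,T), (T,T), as in the list above.
    return "".join("T" if f(a, b) else "F"
                   for a, b in [(False, False), (True, False),
                                (False, True), (True, True)])

# Generate all 16 functions from the integers 0..15: bit i of n is the output
# for input combination i.
functions = [lambda a, b, n=n: bool((n >> (a * 1 + b * 2)) & 1) for n in range(16)]
labels = [label(f) for f in functions]

assert label(lambda a, b: a and b) == "FFFT"   # 'and'
assert label(lambda a, b: a or b) == "FTTT"    # 'or'
assert label(lambda a, b: a != b) == "FTTF"    # 'xor'

# Group by how many of the four outputs are true: 1 true -> 'and' variants,
# 3 true -> 'or' variants, and so on.
by_true_count = {}
for s in labels:
    by_true_count.setdefault(s.count("T"), []).append(s)
```

Checking `by_true_count` reproduces the category sizes above: one function each for all-true and all-false, four ‘and’ variants, four ‘or’ variants, and six functions with exactly two true outputs (the four boring pass-throughs plus the two strongly linked ones).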

I am now going to state that I think ‘and’ variants are actually boring as well.  Since there is only one possible true state, they all result in you being able to say something about A and something about B, each in isolation.  The problem is separable.  Which is just the nature of ‘and’, but it makes them less interesting to me.

This leaves 6 ‘interesting’ functions.  A is the same as B.  A is the opposite of B (xor).  A implies B.  B implies A.  Not both false (or).  Not both true (nand).

Of these, ‘or’ is obviously the most commonly used in programming.  Interestingly, despite implication being a fundamental element of predicate logic, I have not yet worked with a programming language where A implies B has its own operator; you have to write B or not A, which is a bit clumsy.
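In the absence of a dedicated operator, implication is at least trivial to define yourself; a tiny sketch:

```python
def implies(a: bool, b: bool) -> bool:
    # 'A implies B' is equivalent to 'B or not A' -- pattern TFTT above.
    return (not a) or b

# Truth table check: only the (True, False) case violates an implication.
assert [implies(a, b) for a, b in [(False, False), (True, False),
                                   (False, True), (True, True)]] \
       == [True, False, True, True]
```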

Where does this thought train lead me… well maybe I will have more to say on this topic another time.


Last on-line round of the TCO/GCJ ‘season’.  138 people eligible to compete, 125 turned up.  Unlike last time, the first 2 problems were completed by a significant proportion of the field, so advancing required a fast time on both.  Only one person solved all 3; the same person also had the only solution to the 3rd problem.

Q1) Given a sequence of numbers where each element is between 1 and n, determine how many sequences of equal length, with each element also between 1 and n, are a ‘match’ with the original sequence if you remove one element from each sequence.  Since the answer is large, return it modulo a large prime.

A1) At first glance this doesn’t seem too complicated, consider the original sequence, remove each element in turn, and count how many sequences you can create by adding one element.  The problem is of course that this double counts.  Even just considering a single case, if you insert the same value before/after a given number, the sequences are the same.  So either we need to determine how many double counts occur and subtract them off, or use a different approach entirely.

Looking at a solution (because I feel lazy today), scenarios can be divided into 2 cases: where the number we insert is a member of the original sequence, and where it is not.  If it is in the original sequence, we have the complicated possibility of double counting; if it is not, things are much simpler.  So for each element removed (up to 50 cases), for each insertion position (up to 50 places), for each distinct original value (up to 50 values), we have something which needs checking for duplicates.  This is a fairly small sample size, so we can easily use a set.  For each element removed and each insertion position, we can also insert the n-distinct_count_options other values.  There is still the ability to double count here, but these cases can be grouped by where the inserted value goes (regardless of what that value is), so we can use a placeholder (0 for instance) and the same set, and each time we see a sequence not already in the set we add n-distinct_count_options to the final result.
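A sketch of that dedup-by-set idea, with 0 as the placeholder for ‘any value not in the original sequence’ (the function name and structure are mine, not taken from the contest solution):

```python
def count_matches(seq, n, mod=10**9 + 7):
    distinct = set(seq)
    seen = set()          # 0 never collides with real values, which are 1..n
    total = 0
    for i in range(len(seq)):                 # element removed from the original
        rest = seq[:i] + seq[i + 1:]
        for j in range(len(seq)):             # insertion position in the candidate
            for v in distinct:                # inserted value present in the original
                cand = tuple(rest[:j] + [v] + rest[j:])
                if cand not in seen:
                    seen.add(cand)
                    total += 1
            # All n - len(distinct) absent values behave identically, so one
            # placeholder candidate stands in for all of them.
            cand = tuple(rest[:j] + [0] + rest[j:])
            if cand not in seen:
                seen.add(cand)
                total += n - len(distinct)
    return total % mod
```

For example, with the original sequence [1, 2] and n = 2, every length-2 sequence matches (remove the right element from each and they agree), so the count is 4.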

Q2) Given pairs of values, determine the minimum number of individual value swaps (swapping one value from one pair with one value from another pair) required such that every pair has a maximum difference of k.  If it is impossible, return -1.

A2) There is only a maximum of 16 pairs here, which suggests bit-masks may play a role.

But again, I am feeling lazy today, so I went straight to a solution.  The first step is to determine whether the problem is possible at all.  It seems that if you take all the values from the set of pairs, put them together and sort them, they must fall out pairwise into values which are close enough.  The logic behind this seems rather straightforward – if you consider the smallest value, you need a value close enough to it, and if there are multiple which work, choosing anything other than the smallest of those can only make it harder to form the next pair.

To find the minimum number of swaps we can look at this problem from a different angle.  Any solvable situation can be done in at most n-1 swaps: working through the sorted-order pairing, we swap the second value to sit next to the first, the 4th to sit next to the 3rd, and so on, until the final number is already in the right place.  The question is how much we can short-cut that.  If a solvable scenario can be divided into 2 solvable subsets of sizes k and n-k, worst case we can do the first in k-1 swaps and the second in n-k-1 swaps, giving n-2 in total.  What remains is to determine the maximum number of swaps which can be saved, by following this down recursively: the maximum saving for a scenario is the best sum of savings over each partition into 2 solvable subsets, with a minimum of 1.  Since we already know how to determine whether a situation is solvable, we can perform that check for every subset, then consider every subset of every subset (which is not a small number, but is achievable) to determine whether that subset and its complement are both solvable, and use that as the framework to build up a recursive cache of the maximum saving for each subset.  The final answer is then n minus the maximum saving for the full set, assuming it is solvable; if it isn’t, the answer is of course -1.
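A sketch of the whole scheme for small inputs, assuming my reading of the solution above is right (`solvable` is the sorted pairwise check, `saved` the recursive partition maximum; all naming is mine):

```python
from functools import lru_cache
from itertools import chain

def min_swaps(pairs, k):
    n = len(pairs)

    def solvable(mask):
        # A subset of pairs is solvable iff its sorted values pair off
        # consecutively within distance k.
        vals = sorted(chain.from_iterable(pairs[i] for i in range(n)
                                          if mask >> i & 1))
        return all(vals[i + 1] - vals[i] <= k for i in range(0, len(vals), 2))

    full = (1 << n) - 1
    if not solvable(full):
        return -1

    @lru_cache(maxsize=None)
    def saved(mask):
        # Most swaps saved: best split into two solvable halves, else 1
        # for doing the whole (solvable) subset in size-1 swaps.
        best = 1
        sub = (mask - 1) & mask          # enumerate proper non-empty submasks
        while sub:
            rest = mask & ~sub
            if rest and solvable(sub) and solvable(rest):
                best = max(best, saved(sub) + saved(rest))
            sub = (sub - 1) & mask
        return best

    return n - saved(full)
```

With pairs (1,5) and (2,6) and k = 1, the full set is solvable (sorted values 1,2,5,6 pair off as 1,2 and 5,6) but neither singleton is, so one swap is required: exchanging the 5 and the 2 gives (1,2) and (5,6).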

Q3) Determine the minimum number of jumps to get from 0,0 to x,y if each jump can’t be more than a distance sqrt(d) in a straight line, and jumps may only be to/from integer coordinates.

A3) The unusual bit here is that rather than asking this question once, each test case can contain up to 50 of these questions, which you have to answer all at once.  For d=1 the answer is |x| + |y|.  For d=2 the answer is max(|x|,|y|).  From there, the question that first needs to be asked, I think, is whether a greedy solution is correct.  If so we should be able to find a pattern in the jumps which repeats, and use that to ‘accelerate’ to the finish.
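The two easy cases are just Manhattan and Chebyshev distance respectively; a quick sketch (helper names are mine):

```python
def jumps_d1(x, y):
    # d = 1: only the four axis neighbours are reachable per jump,
    # so the answer is the Manhattan distance.
    return abs(x) + abs(y)

def jumps_d2(x, y):
    # d = 2: diagonal steps (length sqrt(2)) are allowed too,
    # so the answer is the Chebyshev distance.
    return max(abs(x), abs(y))
```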

Looking at the solution again, it appears some fancy maths comes into play.  Given d, you start off by determining the pair of maximal-distance jumps which lie on either side of the line connecting 0,0 to |x|,|y|; do this by constructing each possible maximum-distance jump and watching the cross product change sign as you go from one side to the other. (You may want to handle the x = 0 or y = 0 cases separately just to be sure you don’t get them wrong.)  Once you have this pair of vectors it becomes a question of how many of one and how many of the other to add together to reach the destination.  This is where the maths gets a bit fancy.  Taking directly from the solution, the answer is crossproduct((x,y), V2-V1) / crossproduct(V1, V2-V1), rounded up.  I’m not quite sure how to derive this exactly, but at first glance I wouldn’t say it was wrong.

GCJ13 R3

So, out of 500 qualifiers, only 311 positive scores.  And no perfect scores.

Cut-off to advance to the world finals was having solved the first and last problems.  But given that only 3 people fully solved the last problem, a safer grouping was having solved the first, the 3rd and the small input of the last.

Interestingly, solving Q1 and the small inputs of the rest placed between 29th and 54th, which was a reasonably achievable score if you gave up early and didn’t get bogged down on any of the tougher large inputs.

Q1) In a rigged game of roulette where the ball always lands on one of the values with minimum payout, given how everyone else has already played, determine the maximum profit you can make with your money.

A1) So for this question it seems likely that we can enumerate the scenarios of maximum winnings.  The small input certainly is trivial: with a maximum budget of 1000, you can use the strategy that placing money on an outcome which is not currently one of the lowest-payout outcomes is a wasted bet.  Furthermore, when there are multiple equal options, you choose the one which has the least of your money on it.  Then just apply that strategy by placing each bet of size 1, one at a time, and find which point is best.  For the large input, however, we need to enumerate the ideal scenarios more efficiently, as you could have up to 10^12 to spend.

So, rather than going 1 by 1, we need to skip ahead in chunks.  Specifically, any scenario which is profitable will be more so if we can raise all the minimum-payout outcomes by 1 without increasing the number of minimum-payout outcomes.  So we can seed with a few scenarios and push them as high as we can, without reaching equality with another player’s bet or running out of budget.  Each time we reach another player’s bet, we start again assuming we have brought everything up equal to that.  I think the scenarios to seed with are the 37 at or just above equality with a bet (or at or above 0 if there are fewer than 37 player bets).  Running time is thus 37*37 division/minimum/maximum calculations – easily running in time.

I think this would work, but I see solutions instead using binary search to find the best scenario rather than division.  It seems that there is a more aggressive strategy which always spends as much of the budget as possible for each of 37 scenarios, rather than 37*37 scenarios spending up to the next price threshold.

Q2) Given a set of vertices construct a non-self intersecting polygon which uses all the vertices with at least half the area of the convex hull.

A2) Just like Round 2, another problem where it doesn’t ask for the maximum area, just at least half, and any satisfying result can be returned.  Being a computational geometry question, it isn’t my favourite.  I think the approach would be to start by calculating the convex hull.  Then it becomes a question of where to insert the contained points into the order described by the convex hull.  The small input, however, is limited to 10 vertices, so it can be brute forced: just consider every polygon, building them up to rule out self-intersection, then calculate the area of each that remains.  The large input has 1000 vertices, and I want to say a greedy approach will work, where for each vertex not in the current border you calculate its minimum cut-out location to add it to the list, and perform that cut-out.  The problem is that doing this in better than O(N^3) time seems like it would require some fancy spatial data structures.

Looking at the answers, there is a far simpler approach which avoids the whole question of maximum and dives straight at the ‘at least half’ criterion instead.  It is not at all obvious though…  If you calculate the convex hull and connect all the internal points in a left-to-right sequence to the outside of the hull (choosing an orientation which ensures the internal points can be strictly ordered left-to-right), you create 2 areas, both of which are valid non-self-intersecting polygons, and which together add up to the area of the convex hull.  Hence one of them must be at least half.  So if you take half the convex hull plus all the other points ordered left-to-right, that can only be larger than the corresponding area above, so one of those 2 polygons must be at least half the area.  Since all coordinates are integers, you can select a very slight fractional angle, which will ensure the points all have a distinct left-to-right order.  Calculate the convex hull; the left/right-most coordinates of the hull under this order then serve as the split points to divide it into 2 parts.  Try both versions and take the larger area.  Since the polygons are known to be non-self-intersecting, the running cost is O(N log N) for sorting on a slant, then O(N) for the convex hull on sorted data and O(N) to calculate the area, so 1000 vertices is easy.
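The half-area check itself rests on nothing fancier than the shoelace formula; a minimal sketch (working with doubled areas keeps everything in exact integers for integer coordinates):

```python
def doubled_area(poly):
    # Shoelace formula over a list of (x, y) vertices in order; returns twice
    # the unsigned polygon area, so integer inputs give an exact integer.
    s = 0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s)
```

With doubled areas, ‘at least half the hull’ becomes the integer comparison `doubled_area(candidate) * 2 >= doubled_area(hull) * ... ` – no floating point needed anywhere.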

Q3) Given a set of one direction edges on a graph and an ordered subset of them which represents a path from vertex 1 to vertex 2, determine the first edge of that subset which is definitely not on the shortest path, if any, given a range of possible costs for each edge.  When considering each edge for whether it can be on the shortest path, the shortest path must include all previous edges.

A3) This is a very strange question; on my first reading I didn’t understand it very well and thought it was about ensuring each edge specified was on a shortest path, rather than on a shortest path prefixed by the given set of edges.  I think the small input can be brute forced by trying each combination given by choosing one of the extreme values for each edge, and calculating the shortest path for each of these 2^20 scenarios.  Whichever one goes furthest down the specified path will define the answer.  The large input certainly needs a different approach.  I am thinking that we start with every edge at minimum value, then run an all-sources shortest path; then for each edge on the route we take the specified previous edges, and from the end of the edge to vertex 2 we use the previously calculated shortest path to the destination; then every edge not on that list has its cost set to maximum, and we determine if the shortest path goes through our selected edge.  Running time should be okay, O(N^2 log N) or so, but I am not certain it is actually correct.
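The building block for all of these variants is an ordinary Dijkstra; a minimal sketch using (vertex, cost) adjacency lists (representation is mine):

```python
import heapq

def dijkstra(adj, src):
    # adj: list of lists of (neighbour, edge_cost); returns the list of
    # shortest distances from src to every vertex.
    dist = [float("inf")] * len(adj)
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                    # stale heap entry, already improved
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```

The min-value and max-value passes described above would just run this on the same graph with different edge weights plugged into `adj`.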

Looking at the solution I am still not entirely sure, but I am pretty sure there are corner cases my approach breaks on, where the shortest path back from the end of the prefix allows for a short-cut that skips the selected prefix edges, while there exists another ‘short path’ back which doesn’t cause that problem.  The solution does a simultaneous 2-starting-point shortest path by priority-first search.  One starting point is vertex 1 with initial cost 0; the other is the end of the current prefix being considered, with a cost implied by the minimum for each edge in that prefix (minus a half to give it priority, since equally fast is still on the shortest path).  Each of the prefix edges is forced to have minimal weight.  If we are at a vertex which has a fractional best cost, we use minimal weight for outgoing edges, because our prefix shortest path got there first, which is always good.  Otherwise we use maximal weight for outgoing edges, since the raw search got there first, which is bad.  If the destination vertex ends up fractional, the shortest path has succeeded with the prefix.

Q4) Given a starting list of boolean values, determine the expected total of the counts of consecutive true values, counting from a randomly selected location until the first false to the right (wrapping back to the beginning if needed), where each found false location then becomes true and the process repeats until every value is true.

A4) Again this is a problem where the small input is quite easily achieved.  With only 20 entries in the list, the problem can easily be recursively defined with a cache over the possible states in the list.  Each state has 20 random approaches which have to be averaged to give the final value of that state.
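A sketch of that small-input recursion, with the state as a tuple of booleans (exponential in the list length, so only viable for the small input; structure is my own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expected_sum(state):
    # state: tuple of booleans; returns the expected total count accumulated
    # before every value becomes True.
    n = len(state)
    if all(state):
        return 0.0
    total = 0.0
    for start in range(n):
        run = 0
        while state[(start + run) % n]:
            run += 1                      # consecutive Trues from start
        flip = (start + run) % n          # first False to the right (wrapping)
        new_state = state[:flip] + (True,) + state[flip + 1:]
        total += run + expected_sum(new_state)
    return total / n
```

For example `(True, False)`: starting at index 0 scores 1 and ends the process, starting at index 1 scores 0, so the expectation is 0.5.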

Large input, I had no clue, so I went straight to the solutions.  The solution is an O(N^3) approach: calculate the probability of filling a consecutive subsequence (including wrapping) of values with a specific member being last, and the expectation value of the same.  Each of these calculations can be expressed in terms of the items before the specific last entry to fill and the ones after it, using some fancy combinatorics for the probability calculation.  I won’t go into it here; you can always read the official contest analysis for more details.


So, having been knocked out in Round 2, I didn’t have to wake up at 2am to do this round, which is good because I’m not sure I would have gotten any points anyway…  As usual there were 3 questions, but no one solved question 3 and only 5 people successfully solved question 2, meaning you could advance to the finals just by solving Q1 quickly plus a successful challenge or 2.

Q1) Given a set of 32bit numbers, determine whether each of a second set of 32bit numbers is in the result of K rounds of expanding the set using the bitwise and operation.  Specifically, every pair in the set is anded and the result is added to the new set; the input and output sets can contain duplicates.

A1) So, my initial thought was that the and operation is rather limiting, so the total count of numbers would never get very large, and you could just simulate the problem, collecting duplicate outputs into counts.  But that would be mistaken: using an appropriate set of initial numbers you can generate a very large set of outputs even once you apply distinct, and as a result simulation will take too long.

So, K=1 is easy, you just consider every pair.  K=2 there are 2 cases, the 2 input numbers are either from distinct original pairs (N choose 4) or they are from overlapping original pairs (N choose 3).  K=3 there are more scenarios.  8 distinct original numbers anded together is the maximum, you can obviously do scenarios for 7, 6, 5, 4.  What is interesting is that you can still do 3.  In the K=2 case, each scenario of N choose 3 can be created 3 different ways.  ((1,2) (2,3)) ((1,3) (2,3)) ((1,2) (1,3)).  Since all duplicates are kept this means that these would be anded together pairwise to create another 3 new copies of each case of (N choose 3).  From there on it follows that every possible combination involving at least 3 and at most 2^K distinct original numbers will be in the output.  And since the original input is at most 16 numbers, we can consider every subset easily enough.  We just need to remember to special case K=1, as it is the only case where you can choose pairs.
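A sketch of the resulting membership test, hedged on my reading of the argument above (pairs only for K = 1; every subset of size 3 up to 2^K for K ≥ 2):

```python
from itertools import combinations
from functools import reduce

def reachable(nums, k):
    # Values producible after k rounds, per the subset argument above.
    if k == 1:
        sizes = [2]                                    # plain pairs only
    else:
        sizes = range(3, min(2 ** k, len(nums)) + 1)   # 3 .. 2^k distinct inputs
    out = set()
    for size in sizes:
        for combo in combinations(nums, size):
            out.add(reduce(lambda a, b: a & b, combo))
    return out
```

So a query number is a yes exactly when it appears in `reachable(original, K)`; with at most 16 original numbers the subset enumeration is tiny.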

Q2) Given an NxN array with 3 specified positions on the edge being important, determine the minimum number of filled-in positions which need to be added (some may already be filled in) such that there is no path of connection between any pair of the 3 original specified points (a path consists of horizontal and vertical steps only, no diagonals).  If there is no way to do this in fewer than 100 positions, return 100.

A2) The ‘100’ limit is a misdirection: other than when any 2 of the specified positions are adjacent (and hence cannot be blocked), the maximum answer is 6 – 3 for each of 2 of the original positions, required to surround them against an edge; the 3rd position doesn’t matter of course once 2 of them are blocked in.  However, considering the cases of placing 1-5 positions on a 20×20 grid, and for each case performing a connection check, is 400^6 operations in the worst case, which is far too many to complete in the few-second time limit.  It is quite feasible to check each single location, or even each pair, but beyond that is pushing your luck.

Looking at the 5 successful solutions, they all seem to use the same kind of approach.  The idea is to think about the minimum cost to create a wall between 2 points.  Then the final result is either 2 walls which each block in 1 edge location, or 3 walls which meet in the middle.  The minimum cost to create a wall between 2 points is a graph problem where a block already filled in creates 0-cost edges for the paths it blocks, while a block not filled in creates 1-cost edges instead.  The net result is you can run all-pairs shortest path.  Once you have the all-pairs shortest paths, you can then consider each centre location, and for each of those consider the shortest paths to create three walls from there to any location on the edge between each pair of the 3 selected edge locations.  Finally you can choose each pair of edge positions and choose the cheapest 2 walls which go from one side of a selected location to the other. (The 2 cheapest selected must obviously be for different selected locations.)  Choose the best result out of all of these scenarios and, presto, you have the answer.  In the worst case of a large empty map with 3 spread-out selected edge locations the 2 walls will always win out, but on a map which is already filled in much more significantly, the 3 walls meeting is more likely.
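Because every edge costs 0 or 1, the wall distances don’t even need a full Dijkstra; a 0-1 BFS with a deque does it in linear time. A sketch of that building block (my own framing – the successful solutions may well just use Dijkstra):

```python
from collections import deque

def zero_one_bfs(adj, src):
    # adj: list of lists of (neighbour, cost) where every cost is 0 or 1.
    dist = [float("inf")] * len(adj)
    dist[src] = 0
    dq = deque([src])
    while dq:
        u = dq.popleft()
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                # 0-cost edges jump the queue; 1-cost edges wait at the back.
                dq.appendleft(v) if w == 0 else dq.append(v)
    return dist
```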

Q3) There is a set of m positive integers, n of which are between 1 and t inclusive, and the sum of all m is less than or equal to s.  Determine how many sets satisfy these requirements (returning the count modulo a large prime).  Constraints: n is at most 100 less than m, t is at most 10^5, m is at most 10^9, and s is at most 10^18 and at least n*t.

A3) So I don’t have any answers to fall back on here, what I find most interesting is the constraints, they are rather detailed – s being at least n*t for instance is suspicious.  Looking at this problem I see possible approaches with running times of at least O(s) which is obviously out of the question.  The combinatorics certainly seems beyond me at the moment.

GCJ13 R2

So, looks like this was a tough round – only 1 person solved the 4th question, and they broke the large input on question 1, so no maximal scores.  Advancement cut-off was solving the first 2 questions small/large with at least 6 minutes to spare.  T-shirt cut-off was Q2 small/large with more than 40 minutes to spare.  Safe T-shirt was Q1 small/large and Q2 small.

Q1) Given that the cost to travel between any 2 points on a line in one direction is given by a quadratic formula which has a negative quadratic component in the distance between the points (hence average cost by distance is monotonically decreasing as distance increases) and a list of trips and the number of people completing them, determine how much the total cost can be reduced by maintaining the same number of exits/entries at each stop, but changing where a person’s ticket starts/stops the journey.

A1) My question statement above is a bit more generic than the specific one in the comp – but I think they are equivalent.  The first thing to identify is that in any case of 2 trips that touch/overlap, the best result is to have one ticket travel the long way and one ticket travel the short way.  This leads to the greedy approach – ignore the people and just watch the tickets.  At each trip start location new tickets arrive; at each trip exit location we remove the newest tickets, leaving the old ones.  This leads to a natural implementation involving a stack – order all start/end locations from first stop to last stop, such that starts sort before ends when start and end locations are coincident, then walk them in order, pushing ticket start locations and counts onto the stack, and at each end location popping the stack as much as required to satisfy the exit count – putting any remnant back on the stack if we only partially consume a starting location.  Sum up all the costs from entering/exiting and compare to the expected cost.  This is O(N) in the number of distinct trips, so it will easily perform in time for the large input, which only has 1000 distinct trips.
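A sketch of that stack walk, emitting the optimized ticket intervals (the `(start, end, count)` trip format and function name are mine; the cost formula is left out since it is problem-specific):

```python
def optimize_tickets(trips):
    # trips: list of (start, end, count). Build events so that starts (kind 0)
    # sort before ends (kind 1) at the same position, then walk left to right.
    events = sorted([(s, 0, c) for s, e, c in trips] +
                    [(e, 1, c) for s, e, c in trips])
    stack, tickets = [], []
    for pos, kind, cnt in events:
        if kind == 0:
            stack.append([pos, cnt])          # new tickets enter
        else:
            while cnt:                        # retire the newest tickets first
                start, avail = stack[-1]
                take = min(cnt, avail)
                tickets.append((start, pos, take))
                cnt -= take
                if take == avail:
                    stack.pop()
                else:
                    stack[-1][1] -= take      # partially consumed start remains
    return sorted(tickets)
```

For two overlapping unit trips (0,2) and (1,3), the walk pairs them into one long ticket (0,3) and one short ticket (1,2), exactly the long-way/short-way exchange described above.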

The real challenge for the large input is the size of the result – the number of tickets in flight can get up to 10^12,  and they can all exit at once, or all enter at once. (Problem statement does not rule out the 1000 trips actually being identical – just that each one has a maximum of 10^9 people).  So you need to remember to use 64bit integers at this point already.  But then the cost of individual trips can be ~10^18 itself, so calculating the total cost can overflow 64bit multiplication.  So you either have to remember to apply modulus before and after the multiplication, or simply do all math using BigInteger and apply the modulus at the very end.  If you take the modulus approach, you have to remember that when taking the difference of 2 modulus values (to calculate the value saved) the answer is always positive under modulus, but the % operator in many languages will give you the negative remainder when used on a negative result.  As a bonus, the small input is so constrained that it can’t even hit the modulus at all, so you can’t get any confidence in your modulus code before tackling the large input, without doing some manual testing yourself.
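The difference-under-modulus trap is easy to demonstrate; Python’s `%` happens to already return a non-negative remainder, but the C-style normalisation is worth spelling out:

```python
MOD = 10 ** 9 + 7

a, b = 3, 5
# Python's % already returns a value in [0, MOD), even for a negative operand:
assert (a - b) % MOD == MOD - 2
# In C/C++/Java/C#, (a - b) % MOD would be -2; the portable fix is to
# re-add MOD and reduce again:
assert ((a - b) % MOD + MOD) % MOD == MOD - 2
```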

Q2) Given 2^N competitors where each competitor can be ranked in a strict order where higher ranked players always beat lower ranked players and p prizes to be awarded, determine the lowest rank which is guaranteed to receive a prize and the lowest rank which could receive a prize.  Players start in potentially any order, but after each of the N rounds they have a stable sort applied based on their Win/Loss record for the tournament so far (lexically ordered comparison favouring early wins rather than number of wins).  Prizes are then awarded to the top p entries in the final sort.

A2) This is an interesting competition structure – in order to be in the top half you have to win your first round.  Also of note that because there are 2^N competitors and N rounds and in each round everyone will always be playing against someone with the same win/loss record – the final results are fully ordered by the win/loss record, no reliance on the stable sort.  Small input is up to 10 rounds, large input is up to 50.  So 64bit integers are again important.

I did not find this problem very approachable for some reason, but after some thought I think the approach to take is to binary search on rank and ask the question: how high (or how low) can this rank go in the final result?  Comparing the answer to the prize threshold, you can decide whether to go higher or lower to find the minimum rank which could (or is guaranteed to) win a prize.  So then the question becomes, for a given rank, what range of placings is possible?  The highest/lowest ranks have no options; they are always stuck in their natural position.  The second highest/lowest ranks can go from their natural ranking to exactly one position past the middle in the direction away from their natural rank.  But what about the rest?  If we consider a given rank, at each round its ‘division’ (same win/loss record) will consist of a certain number of ranks higher and a certain number of ranks lower.  The actual value of these ranks is not relevant, just how many there are.  If we are trying to win, we can consider the effect optimal strategy has on these 2 numbers, and vice versa for trying to lose.

So if we have x above and y below and are trying to determine how high a final placing we can get, what is the ideal strategy?  I would suggest it is to maximise y at all stages.  One of the lower ranked will play you, and lose; maximising how many of the y-1 remaining win involves getting them to pair off with each other as much as possible, since every time someone from x plays against someone from y, x is guaranteed to win.  So new y = (y-1)/2 (rounding down) and new x = (x+y+1)/2 - 1 - new y.  Eventually y=0 and you have to lose, and from then on you can never win again.  This sequence of wins/losses gives you the maximum placing for a given rank, and you can reverse everything to get a sequence of losses/wins giving the minimum placing.  Assuming these strategies are correct, it is also fairly obvious that the maximum/minimum placing has no turning points with respect to changing starting rank, so the binary search approach I originally suggested is plausible (although due to the step-function nature some extra care might be needed).  On the other hand, the fact that this strategy demonstrates that ‘ideal’ scenarios always consist of a sequence of wins then losses, or vice versa (but no mixing), means it should be possible to come up with a much more direct approach simply calculating the size of y backwards (consistently assuming that a round down was required in the forward version, or not, in order to get the range of ranks that will reach a specific ‘limit’ in the ideal scenario).  I think this direct approach can be performed in O(N) with a sufficiently smart implementation, compared to O(N^2) for the binary search approach.
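A sketch of that forward simulation for the highest achievable placing, assuming the strategy above is right (the derivation of the final position from the win count is mine: a record of w wins then losses sorts to position 2^(rounds-w)):

```python
def best_placing(rank, n_rounds):
    # x ranks above us, y below; keep winning while anyone below remains.
    x = rank - 1
    y = 2 ** n_rounds - rank
    wins = 0
    for _ in range(n_rounds):
        if y == 0:
            break                      # no one left we can beat
        wins += 1
        total = x + y + 1              # our division's size this round
        y = (y - 1) // 2               # lower ranks pair off among themselves
        x = total // 2 - 1 - y         # rest of the winners' division is above us
    # Record W^wins L^rest under the lexical ordering gives this placing:
    return 2 ** (n_rounds - wins)
```

With 4 players (2 rounds): rank 1 can place 1st, ranks 2 and 3 can each reach 2nd, and rank 4 is stuck in 4th, matching the hand analysis above.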

Q3) Given a table of lengths of the longest increasing sub-sequences ending at i and a table of the longest decreasing sub-sequences starting from i, determine the lexicographically smallest permutation of the numbers 1 to N, which satisfies both tables.

A3) Inspecting the tables gives us a bunch of inequalities between elements of the final sequence; from there we just find the smallest permutation which satisfies the inequalities and presto, we are done?  Somehow I think it isn’t as simple as that sounds, but let’s start anyway.  If two consecutive entries of the first table are increasing, we know the values are increasing at that point as well.  If the entries are decreasing, we know the values are decreasing, and if the entries are equal, the values must also be decreasing (otherwise the increasing sub-sequence would have grown).  Corresponding logic applies for the longest decreasing sub-sequences.  However, we can learn more.  Each value in the first table is either 1 (indicating a new lowest value so far) or precisely 1 more than some previous entry, in which case the new value is greater than that previous value.  However, there might be multiple previous entries that are exactly one less – it appears we can assume the least restrictive of those, which will be the closest one.

Is that enough constraints?  I am not sure.  The next question is: given these constraints, how do we build the output?  My guess is that we build a directed graph and perform a kind of topological sort.  Care must be taken because a topological sort can usually be performed in multiple ways, and we must choose the one which gives the best lexicographical output.

Looking at a solution, I think that I have covered all the constraints, although they can be stated more simply.  For each entry, walk backwards through the table: the first entry you are equal to, you are less than; the first entry you are exactly one more than, you are greater than (and correspondingly, walking forwards, for the second table).  When stated this way, you can more clearly see that the resulting graph is connected, and so the way to perform the topological sort is clear: leaf pruning, where the next leaf to trim is always the one corresponding to the earliest position in the output.
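A sketch of the whole pipeline in Python: extract the inequalities from the two tables via the backward/forward walks just described, then leaf-prune with a heap so the earliest eligible output position is always trimmed first. (The edge directions follow my reading of the rules, and the claim that this yields the lexicographically smallest permutation rests on the connectivity argument above.)

```python
import heapq

def constraints_from_tables(inc, dec):
    """inc[i] = length of longest increasing sub-sequence ending at i,
    dec[i] = length of longest decreasing sub-sequence starting at i.
    Returns edges (i, j) meaning: value at position i < value at position j."""
    n = len(inc)
    edges = set()
    for i in range(n):
        for j in range(i - 1, -1, -1):         # walk backwards through inc
            if inc[j] == inc[i]:
                edges.add((i, j))              # equal entry: we are smaller
                break
        for j in range(i - 1, -1, -1):
            if inc[j] == inc[i] - 1:
                edges.add((j, i))              # one-less entry: we are larger
                break
        for j in range(i + 1, n):              # walk forwards through dec
            if dec[j] == dec[i]:
                edges.add((i, j))
                break
        for j in range(i + 1, n):
            if dec[j] == dec[i] - 1:
                edges.add((j, i))
                break
    return edges

def smallest_permutation(n, edges):
    """Assign values 1..n to positions by leaf pruning: the next smallest
    value always goes to the earliest currently-unconstrained position."""
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        indeg[j] += 1
    heap = [i for i in range(n) if indeg[i] == 0]
    heapq.heapify(heap)
    out = [0] * n
    for v in range(1, n + 1):
        i = heapq.heappop(heap)                # earliest eligible position
        out[i] = v
        for j in adj[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                heapq.heappush(heap, j)
    return out
```

For example, inc = [1, 1, 2] and dec = [2, 1, 1] admit both [2, 1, 3] and [3, 1, 2]; the code returns [2, 1, 3], the smaller of the two.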

Q4) Given a multi-player version of pong where the paddles are points and can pass by each other, and each team has to rotate through its paddles (in order) to hit the ball back, determine how long the game goes on for (and who wins, if it ever finishes).  You are given the initial position and velocity of the ball, the size of each team, how fast each team can move its paddles, and the size of the game world.

A4) Reading the description I thought, sure, this question sounds reasonable.  Then I saw the small/large input constraints.  Even the small inputs contain numbers that won’t fit in 32-bit integers; the large input contains values which barely fit in 384-bit numbers…

The small input does at least look feasible – each team has at most 1 million players, so as long as your code is fast you can consider each player in turn, think about bouncing cycles and reflection, and identify the few scenarios that determine whether a paddle can make it in time between each turn it has to take – then find the earliest failure among the up to 2 million players and return.  The large input, however, has 10^100 players per team.  The only way that version is plausible is if you can consider every player in a team at once and identify the worst case within the team (maybe using a binary/ternary search?).  Given there was only one successful solution, I don’t feel too bad about leaving my discussion there.  In practice the implementation will require a BigRational class (or scaling all positions by the ball and paddle velocities so they become integers in seconds per unit, cementing the need for BigInteger even for the small input).
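The bouncing-and-reflection part, at least, is standard: the ball’s position at any time can be computed in closed form by unfolding the reflections rather than simulating each bounce, which is what keeps the per-player check cheap. A sketch using exact arithmetic (Python’s `Fraction`, standing in for the BigRational class mentioned above):

```python
from fractions import Fraction

def ball_y(y0, vy, height, t):
    """Vertical position of a ball bouncing elastically in [0, height]
    after time t, via reflection unfolding (no per-bounce simulation):
    travel in an infinite unrolled strip of period 2*height, then fold
    the result back into the box."""
    y = (Fraction(y0) + Fraction(vy) * t) % (2 * height)
    return 2 * height - y if y > height else y
```

With exact rationals the bounce times and paddle travel distances stay exact, which matters given how large the coordinates get.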