Fewer than 500 people turned up for this round, so of course all positive scores advanced; there were 312 of them. Almost 100 of those solved the 1000pt question, while the 250pt question seemed to have a higher failure rate amongst the top competitors.
Q1) Given two amounts of two different types, two rates of consumption, and the ability to convert type 1 into type 2 at a rate of 2:1, determine the maximum time before one type or the other is exhausted.
A1) There are two scenarios. First determine the baseline time of exhaustion by dividing the amount by the rate of consumption for each type. If type 1 runs out first, there is nothing to be done (converting only depletes type 1 faster), so return that time. Otherwise the aim is to find an exchange amount at which type 1 and type 2 run out at the same time, since each increment we convert lengthens the shorter lifetime and shortens the longer one until they meet.
Thus we need to solve (A1 - x)/C1 = (A2 + x/2)/C2 for x, and then recalculate either side: x = (C2*A1 - C1*A2)/(C2 + C1/2). Note that x looks like it could be negative, but that is the same case as type 1 running out first.
Probable cause of failure is failing to account correctly for the input range. C2*A1 will overflow a 32-bit int if you haven't converted to floating point. Calculating C2*A1 - C1*A2 with low-precision floats could be an issue as well, but I think a 64-bit float is sufficient, and an 80-bit float definitely is. Obviously it could all be calculated using BigInteger fractions, but I'm pretty sure that is excessive.
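A minimal sketch of the above in Python (function name and argument order are my own; Python floats are 64-bit doubles, which the above argues should be sufficient):

```python
def max_time(a1, c1, a2, c2):
    """Maximum time before either type is exhausted, given that x units of
    type 1 can be converted into x/2 units of type 2."""
    t1, t2 = a1 / c1, a2 / c2
    if t1 <= t2:
        return t1  # type 1 runs out first; converting would only hasten that
    # Solve (a1 - x)/c1 = (a2 + x/2)/c2 for the conversion amount x
    x = (c2 * a1 - c1 * a2) / (c2 + c1 / 2)
    return (a1 - x) / c1
```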
Q2) Determine a set of positive integers such that the number of distinct sums which can be constructed from them (using each number at most once) is exactly k. The maximum number of values in the set is 20, and k may be up to 1 million.
A2) k = (2^N)-1 is an easy first scenario: just use the powers of two with exponents 0 to N-1 inclusive.
As an extension of this observation, given an existing set S you can create a new set S2 with twice-plus-one as many distinct output values by adding a new value which is one more than the sum of the existing values. If instead we add a new value equal to the sum, the number of distinct outputs merely doubles. Now, if we can work out a way to go from one set to another such that exactly one new output value is added, it would seem we are done.
If we can assume that the existing set of outputs, when sorted, forms the range 1 to x, then by adding the value 1 the new output range is 1 to x+1, and we've added exactly one new value. Helpfully, the doubling scheme proposed above, when given the range 1 to x, creates 1 to 2x.
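A quick brute-force check of the three operations (my own throwaway verification, not part of the solution):

```python
from itertools import combinations

def distinct_sums(vals):
    # every non-empty subset sum, each element used at most once
    return {sum(c) for r in range(1, len(vals) + 1)
                   for c in combinations(vals, r)}

s = [1, 2, 4]                                                # outputs 1..7
assert distinct_sums(s + [sum(s) + 1]) == set(range(1, 16))  # double+1:  1..15
assert distinct_sums(s + [sum(s)]) == set(range(1, 15))      # double:    1..14
assert distinct_sums(s + [1]) == set(range(1, 9))            # increment: 1..8
```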
Thus we have increment and double operators, which let us create any number in a logarithmic number of steps. However, with a target size of up to 1 million, it's easy to construct a scenario where double and increment alone take 38 values. But we're limited to 20.
So we need to be greedier. Note that we can actually extend the logic above: given a set producing x outputs, we can create any new output count between x+1 and 2x+1, just by adjusting the new value added. Or in reverse: a current count x can be reached from any count between x-1 and floor(x/2). So rather than building up to the target number, we can tear it down. If the target is odd, choose the new value to be half the target rounding up and solve for half rounding down; if the target is even, choose the value equal to half and solve for half. Repeat until the target is 0. Since the target at least halves at each step, it's clear that 20 values is sufficient.
Q3) Given a tree with weighted nodes, determine the set of all subtree weights, then determine the sum of x raised to the power of each member of that set. Return the answer modulo 1 billion and 7. Weights of each node may be large, and x itself may be large, but there are at most 50 nodes.
A3) Since the weight of each node can be large and the tree could be very flat, the number of distinct subtree weights could be huge, so enumerating them all is out of the question. It would appear the answer needs to be built up from the leaves.
As a simple starting point, consider a node with one child and an unknown number of sub-children. The answer consists of the answer ignoring the parent plus the answer including the parent, added together. Assuming we have the answer ignoring the parent (since we are building up from the leaves), the answer including the parent consists of the parent on its own (x^(parent weight)) plus the parent combined with every subtree including the child (x^(parent weight + subtree weight), summed over every subtree). We already have x^(subtree weight) summed over every subtree, so this is (1 + subtree answer) * x^(parent weight).
So we can now answer a linear tree, but that isn't very interesting; we need to handle branching. So consider a parent plus two children. (1 + subtree1 answer + subtree2 answer) * x^(parent weight) handles the scenarios of including at most one of the children, but what about subsets drawing from both? Thinking for a bit, this should be (subtree1 answer * subtree2 answer) * x^(parent weight). Subtree1 answer is a sum over all the different subtrees of child 1, and subtree2 answer is the sum over all the subtrees of child 2, so every combination of them can be formed by taking the product, and each term of the product is correct because it is x^weight we are calculating, and x^(weight1 + weight2) = x^weight1 * x^weight2.
This can obviously be expanded to an arbitrary number of children, but as written it has 2^(number of children) terms, which is far too many given that the branching factor can be as high as 49. So we need to simplify: 1 + subtree1 + subtree2 + subtree1*subtree2 = (1 + subtree1) * (1 + subtree2), and this pattern holds as the number of children increases. So now it is a product of a linear number of terms, which is easily calculated.
Finally, once we build all the way up the tree, we need to sum the values over all nodes, as each node only calculates the sum for subtrees rooted at that location.
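Putting the recurrence together, a sketch in Python (tree represented as a child list per node; names are my own). Here f[v] is the sum of x^(subtree weight) over every subtree rooted at v, and, following the recurrence above, the sum runs over every subtree rather than every distinct weight:

```python
MOD = 10**9 + 7

def subtree_power_sum(weights, children, x, root=0):
    """Sum of x^(subtree weight) over all subtrees of the rooted tree, mod MOD.
    A 'subtree' here is any connected set of nodes together with its root."""
    f = {}
    def dfs(v):
        prod = 1
        for c in children.get(v, []):
            dfs(c)
            prod = prod * (1 + f[c]) % MOD    # one (1 + child answer) factor per child
        f[v] = pow(x, weights[v], MOD) * prod % MOD
    dfs(root)
    return sum(f.values()) % MOD              # subtrees rooted at every node
```

For example, a root of weight 1 with two children of weights 1 and 2, with x = 2: the subtrees rooted at the root have weights 1, 2, 3, 4 (contributing 2 + 4 + 8 + 16 = 30), and the two single-node subtrees add 2 and 4, for a total of 36.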
Of course the answer can be huge, so we need to consider how the modulo comes into play. Adding two values is easy. Multiplying two values needs care to avoid 32-bit overflow. And the raw values of x^(current node weight), where both x and the weight may be large, require both care for 32-bit overflow and accelerated exponentiation. Those three scenarios cover all the operations needed; there is no division or subtraction to be handled. Pretty easy as 1000pt questions go.
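The accelerated exponentiation is standard square-and-multiply. In Python the built-in pow(x, w, MOD) already does this, but spelled out, with every product reduced immediately so intermediates stay small (the overflow-care point above):

```python
def mod_pow(base, exp, mod):
    # square-and-multiply: O(log exp) multiplications, each reduced mod `mod`
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result
```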