An interesting new style of question turned up in this round, where you could potentially fail even if you do everything right…
Top 1000 advanced; the cut-off ended up being Q1 small/large and Q2 small in quick time, or those plus Q3 small.
Q1) Determine how many rings on a bullseye can be painted using a specific amount of paint.
A1) Conveniently the ratio of paint volume to surface area involves pi, so this problem ends up being all-integer. First off you calculate the area of an annulus with inner radius r and outer radius r+1 – this is pi*((r+1)^2 - r^2) = pi*(2r+1). Paint volume per ring is thus 2r+1, and we can get all rings by summing from the initial r, increasing r by 2 each time (to skip the unpainted gap between rings). For the small input you can just keep adding until you don’t have enough paint for the new ring and return the count of rings so far. Large input paint volumes need 64 bit integers, and brute force needs over a billion iterations in the worst case, so it is unlikely to run fast enough.
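The small-input loop can be sketched in a few lines (Python; the names are mine, with r the inner radius of the first painted ring and t the units of paint):

```python
def rings_brute(r, t):
    """Count how many rings can be painted with t units of paint.

    The ring with inner radius r costs 2*r + 1 units (pi cancels out),
    and successive painted rings start 2 units further out.
    """
    count = 0
    while 2 * r + 1 <= t:   # enough paint left for the next ring?
        t -= 2 * r + 1
        count += 1
        r += 2              # skip over the unpainted gap ring
    return count
```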
So for the large input you need to be a bit smarter. The sum is a classic arithmetic series total, which has the closed form (a+b)(b-a+k)/2k – where a is the smallest element, b the largest element and k the element spacing, so (b-a)/k + 1 is the number of terms. First option (which I would suggest is the best choice) is to binary search on the number of rings, using this formula to calculate the sum, being very careful to avoid integer overflow. (So long as your binary search range is selected very carefully, (a+b)*(b-a+k)/(2k) should be able to be made to work with 64 bit integers, as k is 4, a+b is even and b-a+k is divisible by 4 – but it is simpler to just use BigInteger.) Alternatively you can expand the closed form to give a quadratic formula in terms of the number of elements and the target paint usage, solve for n using double precision floating point, and then check the nearby integers against the closed form to find the actual maximum (so that floating point error in the floor doesn’t leave you one below the answer).
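A sketch of the binary-search option. Summing the first n ring costs 2r+1, 2r+5, … with the closed form above gives 2n^2 + (2r-1)n, and Python's arbitrary-precision integers stand in for BigInteger (names are mine):

```python
def rings_fast(r, t):
    """Largest n with cost(n) <= t, found by binary search."""
    def cost(n):
        # Closed form for the sum of n terms 2r+1, 2r+5, ... (spacing 4)
        return 2 * n * n + (2 * r - 1) * n

    lo, hi = 0, 1
    while cost(hi) <= t:            # grow an upper bound first
        hi *= 2
    while lo < hi:                  # invariant: cost(lo) <= t < cost(hi)
        mid = (lo + hi + 1) // 2
        if cost(mid) <= t:
            lo = mid
        else:
            hi = mid - 1
    return lo
```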
Q2) Given an ordered list of tasks where you can spend your energy to create value, a maximum energy E, a regeneration R between consecutive tasks, and a value per unit of energy expended V_i for each task – determine the maximum value you can get out of your energy, assuming you start at maximum energy.
A2) For the small input the maximum energy is only 5, so going into any given task there are only a handful of possible energy levels, and with only 10 tasks a true brute force is ~5^10, which is only about 10 million. Obviously you are always going to spend all remaining energy on the last task, so it’s only really 5^9 even. But you could use a cache to get the state space down to at most 10 × 6 = 60 entries.
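The cached version is a short memoized recursion over (task index, energy on hand) pairs – a sketch with names of my own choosing:

```python
from functools import lru_cache

def best_value(E, R, V):
    """Memoized brute force for the small input: try every integer
    spend s at each task, regenerating R (capped at E) afterwards."""
    N = len(V)

    @lru_cache(maxsize=None)
    def go(i, energy):
        if i == N:
            return 0
        # spend s units at task i, carry the rest forward plus regen
        return max(s * V[i] + go(i + 1, min(energy - s + R, E))
                   for s in range(energy + 1))

    return go(0, E)
```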
Large input maximum energy is 10^7, maximum value per energy is 10^7 and maximum number of tasks is 10^4. So even the cached approach is ~10^18 runtime operations. (Which is coincidentally also the range of the maximum answer, so 64bit integers are back in play.) Looking at the problem more closely it seems that a greedy approach is more likely – you want to use your energy all up on the most valuable tasks, unless you could have conserved some of that energy for a later even more valuable task.
Looking at someone’s solution this appears to be the case. You keep an array of energy used at each location, starting off by using all the energy you have for the first task, everything you regenerate at the second, and so on – using all the energy you have as fast as you can. At each location after the first, you also look backwards one task at a time to steal energy forwards if you are higher value per energy. If the total energy used between your current steal candidate and yourself is within the cap, you can safely steal it all forwards; if it exceeds the cap, you take what you can, leave the rest, and stop looking further back. Also, if you ever find an element more valuable than you, you stop going back, because everything prior to that will have already been stolen forwards to that element as much as possible. This greedy solution looks worst case O(N^2), which is ~10^8 – but the case to trigger that is first task most valuable, rest in ascending order, maximum energy greater than N^2 * regen rate – so there probably weren’t too many inputs that were that bad. I suspect that an O(N) (or close to it) solution is possible by tracking whether we have pulled all the energy forwards in certain ranges, and thus skipping them during any walks backwards.
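A sketch of that stealing greedy. The bookkeeping here is my own, not necessarily what the solution I read did: I lean on the observation that a usage array is feasible exactly when every window [j, i] satisfies used[j] + … + used[i] <= E + (i - j)*R (the best you can enter a window with is a full tank), and maintain the tightest window slack while walking backwards:

```python
def max_gain(E, R, V):
    """Greedy: spend energy as fast as possible, then steal it
    forwards to more valuable tasks while window capacity allows."""
    N = len(V)
    used = [0] * N
    for i in range(N):
        used[i] = E if i == 0 else min(R, E)
        slack = E - used[i]      # tightest slack over windows [k, i]
        window = used[i]         # sum of used[j+1 .. i] while scanning j
        for j in range(i - 1, -1, -1):
            if slack == 0 or V[j] >= V[i]:
                break            # a better task already drained what's earlier
            steal = min(used[j], slack)
            used[j] -= steal
            used[i] += steal
            slack -= steal       # every window [k, i] with k > j tightened
            window += steal + used[j]        # extend window back to task j
            slack = min(slack, E + (i - j) * R - window)
    return sum(u * v for u, v in zip(used, V))
```

On the hand cases I tried this matches the exhaustive small-input answer, but treat it as an illustration of the stealing idea rather than a battle-tested submission.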
Q3) Given a set of products, each formed by multiplying a random subset of N cards carrying randomly selected 1 digit numbers (between 2 and M inclusive), take a guess at what the original cards were.
A3) Yes, that’s right, the question doesn’t want you to give the answer or -1 if it isn’t possible to tell for certain, it wants you to take a guess if you aren’t sure. It then specifies that you pass if you have a certain percentage correctness. Since that is a bit nasty, there is no large input, just 2 small inputs that you can repeatedly try until you succeed. Time penalty still applies (which is kind of unfortunate).
Helpfully the first small input is very constrained: only 3 cards and only 4 possible values per card, giving 64 scenarios to consider as to whether they are possible given the presented products. Then to maximise your chances, if there are multiple possibilities, you choose the ‘best one’, weighted by the probability that a given choice of cards resulted in the products given – grouping hands by their sorted order (of which there are only 20 cases). Also, since you are given 7 products, your probability of being given a product which is maximal for the value of the cards is actually pretty high. (You could run a bunch of simulations to determine, for each possible product, the probability of each original input, then combine them to choose the most likely input.)
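The enumeration is small enough to spell out. This sketch (names mine) keeps every multiset of cards whose subset products cover everything seen, and breaks ties by how many ordered hands map to the multiset – a simplification of the full "probability of producing these products" weighting:

```python
from collections import Counter
from itertools import combinations, combinations_with_replacement
from math import factorial, prod

def guess_hand(products, n_cards=3, values=(2, 3, 4, 5)):
    """Return the most plausible multiset of cards consistent with
    every product seen (or None if nothing is consistent)."""
    best, best_weight = None, -1
    for hand in combinations_with_replacement(values, n_cards):
        achievable = set()
        for r in range(n_cards + 1):
            for sub in combinations(hand, r):
                achievable.add(prod(sub))     # empty subset gives 1
        if all(p in achievable for p in products):
            weight = factorial(n_cards)       # orderings of this multiset
            for c in Counter(hand).values():
                weight //= factorial(c)
            if weight > best_weight:
                best_weight, best = weight, hand
    return best
```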
Looking at solutions, it appears that even that is more complicated than required. Success could be achieved by just looking at the products: take the highest power of 3 seen and start your answer with that many 3s, then the highest power of 5 and add that many 5s – then look at the maximum power of 2. If you only have 1 digit remaining, that power tells you whether it is a 2 or a 4. If you have 2 digits remaining and the power of 2 is 8, add a 4 and a 2, otherwise add two 2s. Finally, with 3 digits remaining, return three 2s.
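Coded up as described (this is a probabilistic heuristic aimed at the pass threshold, not an exact solver, and the function name is mine):

```python
def quick_guess(products):
    """Prime-power heuristic for the first small input
    (3 cards, values 2..5)."""
    def max_power(p):
        best = 0
        for x in products:
            count = 0
            while x % p == 0:
                x //= p
                count += 1
            best = max(best, count)
        return best

    ans = [3] * max_power(3) + [5] * max_power(5)
    twos = max_power(2)
    remaining = 3 - len(ans)
    if remaining == 1:
        ans.append(4 if twos >= 2 else 2)     # single 2-or-4 card left
    elif remaining == 2:
        ans += [4, 2] if twos == 3 else [2, 2]
    elif remaining == 3:
        ans += [2, 2, 2]
    return sorted(ans)
```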
The second small input however has 7 possibilities per card and up to 12 cards. This one really needs some smarts: 7^12 is nearly 14 billion, so you can’t consider every starting hand, let alone the probabilities of every subset. Also, since there are only up to 12 products given per case, the probability of getting the maximal product is quite low. While we can’t consider every starting hand, grouping them by repeats makes it much better. So if you generate only the non-descending hands, calculate the number of orderings of each using the standard combinatorics (which I would have had to look up…), and then look at the frequency of each possible product (by trying the 2^12 subset multiplications), you can generate a big database which can be reused for each sub-test case. For each possible answer you calculate the probability by multiplying together the probabilities of each given product under each potential initial scenario group, and whichever answer is best, you return it.
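To put numbers on the grouping: collapsing ordered hands to multisets shrinks 7^12 ordered hands down to C(18,12) = 18564 groups, and the standard multinomial count recovers each group's weight. A small sketch (names mine):

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import factorial

def orderings(hand):
    """Ordered hands collapsing to this multiset:
    n! / (c1! * c2! * ...) over the value counts."""
    total = factorial(len(hand))
    for c in Counter(hand).values():
        total //= factorial(c)
    return total

# All non-descending hands of 12 cards with values 2..8
hands = list(combinations_with_replacement(range(2, 9), 12))
```

Summing the weights over all groups should give back exactly 7^12, which is a handy sanity check on the combinatorics.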
Looking at a solution it seems I was on the right track. The only difference I saw was that rather than calculating actual repeat counts, they calculated the ordering count as a fraction of 12! and used that as a division factor in the probability rather than a multiplication. That probably avoids needing to normalize the probability calculations and worry about range errors.