guesses.

That is, k - 1 initial guesses: for each position, issue a guess with two candidate colors, one serving as the background color.

For example:

X Y X X

After this guess, we know whether the probed position is color X, color Y, or neither.
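That per-position deduction can be sketched as a toy self-check (the `hits` and `probe_position` helpers and the example secret are my own illustration, not from the post): comparing the hits of an all-X baseline against a guess that swaps one position to Y reveals what that position holds.

```python
def hits(guess, secret):
    """Pegs with the correct color in the correct position."""
    return sum(g == s for g, s in zip(guess, secret))

def probe_position(secret, pos, x="X", y="Y"):
    """Classify one position as X, Y, or neither by swapping in Y."""
    baseline = [x] * len(secret)      # e.g. X X X X
    probe = list(baseline)
    probe[pos] = y                    # e.g. X Y X X for pos = 1
    delta = hits(probe, secret) - hits(baseline, secret)
    if delta == 1:
        return y                      # swap gained a hit: position holds Y
    if delta == -1:
        return x                      # swap lost a hit: position holds X
    return None                       # unchanged: neither X nor Y there

secret = list("XZYX")
print([probe_position(secret, p) for p in range(4)])  # ['X', None, 'Y', 'X']
```

In the actual match the feedback of course comes from the codemaker; the `secret` here only stands in for it.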

This got me a score of 85. I didn't make full use of the 20 seconds; in fact, for most cases it finishes within 1 second...

If I made use of previous guesses by introducing a formula like yours, the result might turn out better.

I did not participate in this match, but I did think about the first phase of finding the counts of colors:

First, try k - 1 guesses with two colors in each guess (half of one color, half of the other, randomly shuffled).

Let's call the numbers of pegs of each color A, B, C, D. Then (for 6 pegs) you have:

guess: ababba, result: 1 miss, 1 hit, equation: C + D = 6 - 2

guess: ccacaa, result: 1 miss, 0 hits, equation: B + D = 6 - 1

etc.

and the last equation: A + B + C + D = 6

Assuming that no peg color constitutes more than half of the key, this gives the number of pegs of each color, plus some information about placement, in k guesses. But I guess the further part, telling which peg goes where, was the difficult one :)
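Under that no-color-exceeds-half assumption, the resulting pair-sum equations can be solved by a small brute force. In this sketch the function name, the third probe's total, and the recovered key are my own illustration, not from the post:

```python
from itertools import product

def color_counts(n, pair_sums):
    """Recover per-color counts (A, B, C, D) from two-color probe totals.

    pair_sums maps a pair of color indices to the hits + hints total of a
    half-and-half guess in those two colors; assuming no single color fills
    more than half the key, each total equals the sum of the two counts.
    """
    for counts in product(range(n + 1), repeat=4):
        if sum(counts) != n:          # last equation: A + B + C + D = n
            continue
        if all(counts[i] + counts[j] == s for (i, j), s in pair_sums.items()):
            return counts
    return None

# Totals from the post's example: "ababba" gave A + B = 2 and "ccacaa"
# gave A + C = 1; the third probe's total (A + D = 3) is made up here.
print(color_counts(6, {(0, 1): 2, (0, 2): 1, (0, 3): 3}))  # (0, 2, 1, 3)
```

With k - 1 pair equations plus the total, the integer solution is unique whenever the probes share a common color, as above.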

By the way, I think your idea of giving more weight to hits + hints is brilliant (sure, I wish I'd come up with it myself :-)

hits = feedback[0] = pegs with correct color and correct position; hints = feedback[1] = pegs with correct color but incorrect position

Here is one more difference. I used the total number of matching colors as the value of hints in the cost formula, no matter whether they are in the correct place or not.

It can be calculated as results[0] + results[1] from the nextGuess argument.

This way hints is a plain measure of how good your guess is in terms of choosing various peg colors, and hits is a measure of ordering quality.

Sum(Abs(Expected matches - Actual matches)), and how is the new guess selected?

The method is very simple (in theory): always return a guess that is completely consistent with all the previous feedbacks. It goes like this:

a) A code is postulated, i.e., picked from the search space as a candidate for the secret key

b) The postulated code is used as the key to score each of the previous guesses and the resulting scores are compared against the codemaker's feedbacks for those previous guesses

c) If any of the comparisons fails (i.e., we calculated a score of [3,5] but the codemaker's feedback was [4,5] for that guess), then the postulated code is said to be "inconsistent" with the feedbacks (thus it can't possibly be the secret key) and another code must be chosen (repeat from step a)

d) Otherwise, if all the comparisons are successful, the postulated code is said to be "consistent" with the feedbacks (thus it can possibly be the secret key), and the postulated code is returned as the next guess
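Steps a) through d) can be sketched with a brute-force enumeration (the naming is mine; a real solver would postulate codes randomly or via local search, since enumerating the whole space only works for tiny problems):

```python
from itertools import product

def score(guess, key):
    """Codemaker's feedback as (hits, hints)."""
    hits = sum(g == k for g, k in zip(guess, key))
    # total color matches regardless of position, minus the exact hits
    total = sum(min(guess.count(c), key.count(c)) for c in set(guess))
    return hits, total - hits

def next_consistent_guess(colors, n, history):
    """history: list of (previous_guess, codemaker_feedback) pairs."""
    for code in product(colors, repeat=n):       # a) postulate a code
        if all(score(g, code) == fb              # b) re-score old guesses
               for g, fb in history):            # c) reject on any mismatch
            return code                          # d) consistent -> play it
    return None                                  # no consistent code exists

history = [(("R", "R", "G", "B"), (2, 1)),
           (("G", "B", "B", "R"), (0, 3))]
print(next_consistent_guess("RGBY", 4, history))
```

Any code this returns could itself be the secret key, which is exactly the stepwise property described above.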

There are two important observations to make about this method:

1) Every guess played could actually be the secret key, so no guess is ever knowingly wasted

2) The total number of consistent codes can only shrink as feedbacks accumulate: every guess either wins or rules itself (and usually many other codes) out of the search space

It's because of these two features that the method is typically called the "stepwise optimal strategy", and it has already been found to be near-optimal on average (of course, optimal in this sense means relative to other algorithms, not to the ultimate optimum of always finding the answer in the first guess using shamanic powers). Strategies that are not stepwise optimal include "human" approaches (i.e., slowly changing the guess to arrive at the answer by reasoning), and all the approaches that use a "background" color (i.e., a color which is known not to occur in the answer, or else whose location within the answer is known).

As good and simple as the stepwise strategy is, it has a very important complication: although finding a consistent code is rather easy for small problems, for larger problems it becomes a hard search problem in itself. The competitors worked around this in at least two ways:

* Divide-and-conquer: use a background color or an analytical method to divide the problem into smaller subproblems and apply a stepwise approach to the smallest ones; see the approaches of lyc1977 (background color; discarding the impossible combinations is a way to find the consistent codes) and OvejaCarnivora.

* Least inconsistent code: instead of finding a code completely consistent with all the previous feedbacks, find the code that is the least inconsistent (within a subset of the search space). Essentially, a cost formula is used that measures how inconsistent a code is, and an optimizing algorithm is used to minimize the cost. venco used the following cost formula (note: hits = feedback[0] = pegs with correct color and correct position; hints = feedback[1] = pegs with correct color but incorrect position):

cost = sum { abs(hits - hits') + 10000 * abs((hits + hints) - (hits' + hints')) }

The 10000 factor is used to give more weight in the first moves to the total number of correct colors (hits + hints) than to the number of correctly positioned colors (later, once the color frequencies are discovered, the term automatically becomes zero). Now, since both vdave and I started the search already knowing how many pegs of each color were in the answer, the number of hints was completely irrelevant (because the total number of color matches between a fixed guess and any candidate with the correct frequencies is constant, so the second term vanishes), and the formula reduced to:

cost = sum { abs(hits - hits') }
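Both formulas are easy to transcribe directly (helper names are mine; `history` holds each previous guess with the codemaker's `(hits, hints)` feedback, and the primed values come from re-scoring the guess against the candidate):

```python
def score(guess, key):
    """(hits, hints) as a codemaker would report them."""
    hits = sum(g == k for g, k in zip(guess, key))
    total = sum(min(guess.count(c), key.count(c)) for c in set(guess))
    return hits, total - hits

def venco_cost(candidate, history):
    """sum { |hits - hits'| + 10000 * |(hits+hints) - (hits'+hints')| }"""
    cost = 0
    for guess, (hits, hints) in history:     # codemaker's actual feedback
        h2, n2 = score(guess, candidate)     # primed values: candidate as key
        cost += abs(hits - h2) + 10000 * abs((hits + hints) - (h2 + n2))
    return cost

def reduced_cost(candidate, history):
    """sum { |hits - hits'| }: valid once candidates already use the
    correct color frequencies, so the second term is always zero."""
    return sum(abs(fb[0] - score(guess, candidate)[0])
               for guess, fb in history)
```

A candidate equal to the secret key scores 0 under both formulas, so minimizing either cost drives the search toward full consistency.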

Furthermore, knowing the frequency of each color greatly limited the search space and also simplified the operation of finding the closest neighbours of a code (i.e., codes with a similar cost) to swaps. But these advantages didn't come without a price: as venco himself has already stated, his 14+ point advantage over vdave (in the tests) is very likely due to his algorithm skipping the color-frequency discovery phase; what he doesn't say is that achieving that was far from an easy task (I can bet I wasn't the only one who unsuccessfully attempted it), so I'm looking forward to reading exactly how he did it in the match editorial.

It's interesting to notice that the first and second places used a stepwise strategy consistently during the whole game (i.e., a wrong move was never knowingly played); what is more interesting to notice is that the third place (flagitious), and probably the fourth place too (cae), used almost exactly the opposite strategy: knowingly playing wrong positions throughout most of the game! I believe that each of these two strategies has strengths and weaknesses in different areas, so neither can be considered better "overall" than the other. In particular, I believe that flagitious's approach has a bigger potential to be much more scalable than any stepwise approach, but I don't want to speculate too much before the final results are published and we can see them...

Edit - Fixed venco's cost formula