

WooT! Looks like I just squeaked into the top ten anyway!
Must have been a good 4000 example cases for me, or just bad for the five people I passed in the system tests.
[TheFaxman: looks like the plan of running systests until I win is working out. Keep up the good work!] 

It's possible that lack of precision on earlier binary searches could lead to some minor concavities, though I'd agree that the effect is probably pretty minimal. I'd guess that your biggest losses come from making unnecessary measurements, as previously mentioned, and from not establishing an upper bound on the area and using that to refine your final return more accurately. 

for paranoia: there will be no concavity, because I start out with a triangle with its vertices on the polygon sides. When I expand a side with binary search, the new point will be "on" a polygon side, and so on... So there will be no concavity :) 

I ended up doing something quite similar to vlad_D.
- Find a point inside (similar fractal-like search pattern).
- Binary search in a few directions to approximately (something like +/- 0.04) find some edges and make an initial polygon (convex hull).
- Estimate the possible reduction in error available by splitting each existing edge, then split the juiciest one using binary search. Also estimate the expected reduction in error of refining the location of a "known" point on the convex hull, and refine that instead of splitting if it was juicier.
- Keep splitting and refining until I run out of samples.
- Each point on the convex hull is actually an interval, with a sample point known to be outside and a sample point known to be inside. Using this I estimate the area using a weighted average of the inside and outside limits of each point (3/4 outside + 1/4 inside seems to work well).
- At this point I am still pretty consistently underestimating the area (by about 0.1), so I then add the median error observed by running lots of tests. I fit a function to the number of samples for this. I probably should have fit it to the area and number of samples (subtracting any samples wasted finding the polygon at the start).
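The weighted inside/outside area estimate could be sketched like this (a rough sketch, not the poster's code; `shoelace_area`, `estimate_area`, and the pair-of-points hull representation are my own illustrative names):

```python
def shoelace_area(pts):
    """Area of a simple polygon given as a list of (x, y) vertices."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def estimate_area(hull_points, w_outside=0.75):
    """hull_points: list of (inside_pt, outside_pt) pairs in hull order.
    Blends the area of the inner hull (samples known inside) and the
    outer hull (samples known outside): 3/4 outside + 1/4 inside,
    as described in the post."""
    inner = shoelace_area([p for p, _ in hull_points])
    outer = shoelace_area([q for _, q in hull_points])
    return w_outside * outer + (1.0 - w_outside) * inner
```

Each hull "point" is the bracket left over by a binary search, so the true boundary is somewhere between the two hulls.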
The main data structure is just an ordered vector of "points" on my estimated convex hull, where a "point" is actually two points as mentioned above.
Most of the thinking and coding went into estimating the error reduction expected by refining an edge into two, and estimating the most extreme location of the expected but unknown actual polygon vertex, using the neighboring "points" on the convex hull. The final code ended up being pretty simple. I tried some edge tracking in the last couple of days, but I couldn't get it to improve my score.
For reasons of simplicity early on I started using four initial points for the convex hull, planning on trying three later, but never got back to actually try that. I tried eight at one point when I had a tricky test case with the first point way in a very acute corner of a small polygon. It fixed that case but made everything else worse, so it was quickly abandoned. Not trying three initial points probably cost me.
It was clear from the beginning that with the scoring specified in the rules the performance with few sample points would totally dominate the final score. So I never put any effort into anything that took lots of samples.
Provisional score: 2484.76 (getting the next few points looked really hard, although trying starting with a triangle and calculating the median error differently would have probably helped a bit, maybe got me into the top 10, but I was burning out by then) 

That was my first thought at a solution too. I decided the halfway-around problem was a fatal flaw before I started implementing it, since it was apparent that the near-100-sample test cases were going to decide the match. Clearly everyone was going to do well on the near-300-sample cases, and the errors on those would be very near zero. 

I don't think you have a bug. I think the problem is that you don't seem to keep track of points that fall outside of the polygon, so many of your queries during the binary search could be answered without actually using up one of your measurements. If adding a point would cause your polygon to include a point you already know is outside, then you know that query will return false and can save a measurement. It's also possible that turning an edge into a triangle could affect more than just that edge: if you push it out far enough, it could cause a concavity to occur, which means you have to remove some points to keep the polygon convex. If you didn't handle this case, it could lead to an underestimation of the polygon size, and to some other interesting artifacts stemming from the generation of concave polygons when you might have assumed it would stay convex. 
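The "answer the query from known misses" idea can be sketched like this (hypothetical helper names; assumes the candidate polygon is convex and its vertices are in counter-clockwise order):

```python
def point_in_convex(poly, p, eps=1e-9):
    """True if p lies inside (or on) the convex polygon poly (CCW order)."""
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        # Cross product < 0 means p is strictly to the right of edge a->b,
        # i.e. outside a CCW convex polygon.
        if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) < -eps:
            return False
    return True

def query_would_be_false(candidate_poly, known_misses):
    """If the candidate polygon would swallow a point we already know is
    outside the hidden polygon, the membership query must return false,
    so we can skip spending a measurement on it."""
    return any(point_in_convex(candidate_poly, m) for m in known_misses)
```

Calling `query_would_be_false` before each real measurement is exactly the kind of bookkeeping that saves queries during the binary searches.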

My approach was to find a point inside the polygon: first you have the 10x10 square, take the point in the middle, check it, and divide into 4 new squares, and so on until you find a point inside.
From this point, with some binary search, find 3 points on the perimeter of the polygon. This gives a triangle. Take each side, take its middle, and try to make another triangle by binary searching along the height going outward from the triangle side you are expanding. And so on.
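The quad-subdivision search for an interior point might look something like this (a sketch, not vlad_D's actual code; `is_inside` stands for the membership oracle and the names are mine):

```python
from collections import deque

def find_inside_point(is_inside, x=5.0, y=5.0, half=5.0, min_half=0.02):
    """Breadth-first quad subdivision: test the centre of the 10x10 square,
    then the centres of its four quadrants, and so on, until a point inside
    the hidden polygon is found. is_inside(x, y) is the membership oracle."""
    queue = deque([(x, y, half)])
    while queue:
        cx, cy, h = queue.popleft()
        if is_inside(cx, cy):
            return (cx, cy)
        if h > min_half:
            # Recurse into the four sub-squares of half-size h/2.
            for dx in (-h / 2, h / 2):
                for dy in (-h / 2, h / 2):
                    queue.append((cx + dx, cy + dy, h / 2))
    return None  # gave up: the polygon is smaller than the resolution
```

Note that this spends one query per probe, which is why the later heuristics about reusing known misses matter so much on low-budget cases.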
I think I have a bug... because this just got 2222.07. I did some tests, added some if's, and made my score 2239. 

What's wrong with Firefox or Konqueror? EDIT: I see you have already noted that your problem was merely due to an error in the generated SVG.
Btw. that image really shows well how you pinpoint the polygon. 

"One last thing, I wrote a little SVG library to help visualise the state of my solution. It was invaluable."
Yep, I did that too, and agree completely. Seeing how the boundary of the forbidden area could suddenly grow beyond the square it was formerly confined to was instructive, and seeing how my triangulation method suddenly walked backwards was invaluable. 

I used a simple Monte Carlo method (100.0 * pointsInside / pointsTotal) with a test proxy.
The test proxy was similar to others' approaches. It formed a convex hull of all the points inside the polygon and it formed a bunch of wedges from all points outside the polygon. If the Monte Carlo program asked to test a point inside the local hull, the proxy immediately returned true. If the program asked to test a point inside one of the wedges, it returned false. Only if it had no idea did it use up one of the queries.
Later, I used max(monteCarloResults, localHullArea) to ensure I didn't waste the local information about the hull I had.
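A minimal sketch of the Monte Carlo estimate, assuming `test_point` is the proxy-backed membership test (function name and sample count are mine, not the poster's):

```python
import random

def monte_carlo_area(test_point, samples=20000, side=10.0, rng=None):
    """Estimate the polygon area as side^2 * hits / samples, where
    test_point(x, y) answers membership (via the proxy when it already
    knows the answer, via the real oracle otherwise)."""
    rng = rng or random.Random(42)
    hits = sum(test_point(rng.uniform(0, side), rng.uniform(0, side))
               for _ in range(samples))
    return side * side * hits / samples
```

Per the post, the final answer would then be `max(monte_carlo_area(...), localHullArea)`, so the exact hull information is never thrown away.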
Even with a method as unoptimized as Monte Carlo, I got decent example results:
4.238, 4.864, 3.962, 4.957, 4.981, 4.788, 4.313, 4.722, 4.961, 3.961.
Unfortunately, I made a bad full submission and didn't have enough time left in the contest to resubmit. 

So I tested my approach with a change to the initial code, so that instead of picking random points until two are inside the polygon, it picks points by choosing 100 random points then taking the one that maximizes the distance to the closest already chosen point (with points put at the corners to begin with). This led to an average gain of .007 points per test case, or 3.5 points on the preliminary tests. Too bad I didn't implement this earlier :/. (Note to self: write down *all* the ideas you have when you have them). 
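The candidate-sampling idea above could be sketched as follows (illustrative names and parameter values; `next_probe` is hypothetical, not the poster's code):

```python
import random

def next_probe(chosen, candidates=100, side=10.0, rng=None):
    """Pick the next probe point: draw `candidates` random points and keep
    the one whose distance to the nearest already-chosen point is largest.
    `chosen` is seeded with the square's corners, as in the post."""
    rng = rng or random.Random()
    best, best_d = None, -1.0
    for _ in range(candidates):
        p = (rng.uniform(0, side), rng.uniform(0, side))
        # Squared distance to the nearest point chosen so far.
        d = min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for q in chosen)
        if d > best_d:
            best, best_d = p, d
    return best

corners = [(0, 0), (0, 10), (10, 0), (10, 10)]
```

This is a cheap approximation of farthest-point sampling: the first probe lands near the centre and later probes spread out over the unexplored regions.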

I did the same as venco but I tried to maximize distance to inside and outside regions 

Well, I guess I have nothing new or brilliant to share, as most of the (relevant) ideas were already described, but here goes...
1) I start with a big cross and a big "X" around the middle point (5,5).
2) If I have no hits, I try smaller crosses centered on each of the 4 quadrants.
3) If I have no hits, I try shooting the center of each 1x1 square on the first/last column and row of the 10x10 initial square. This is to try and recover from VERY degenerate cases where polygons were too small, too thin or too evil ;))
4) If I still have no hits, I return 1.99, because it seemed like a fun value, and not far from the area most of the polygons like these I found had, or the area that was sort of uncovered by my initial search.
All point-searching used a binary search, stopping when the distance between two consecutive shots was below a given threshold. I had different thresholds depending on the step I was on.
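Such a threshold-terminated binary search might look like this (a sketch; `bisect_edge` and the 0.04 default are illustrative, not the poster's actual values):

```python
import math

def bisect_edge(is_inside, p_in, p_out, threshold=0.04):
    """Binary search along the segment from a known-inside point to a
    known-outside point, shrinking the bracket until it is shorter than
    `threshold`. Returns the final (inside, outside) pair, which brackets
    the polygon boundary."""
    while math.dist(p_in, p_out) > threshold:
        mid = ((p_in[0] + p_out[0]) / 2, (p_in[1] + p_out[1]) / 2)
        if is_inside(*mid):
            p_in = mid
        else:
            p_out = mid
    return p_in, p_out
```

Each call costs roughly log2(length / threshold) queries, which is why the thresholds were tuned per step.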
At this point, I should have something like a bounding box, and I apply the following steps:
1) Expand the initial polygon in the 4 directions (pick the highest point, and start searching to the left and to the right, no farther than the leftmost and rightmost points... do the same in the other directions).
2) Compute the convex hull (this is done many times along the way, actually, but from this point on becomes extremely relevant and necessary).
3) Given three edges 1, 2 and 3, try expanding from the middle point of edge 2 towards the intersection point of edges 1 and 3. This helped quite a bit on triangle-like polygons. (I can't quite recall why I started from the midpoint instead of just following an edge, since step 4 will do it quite similarly, but anyway...)
4) For each edge, from its midpoint, search perpendicular to it for points inside the polygon. (I've found this to work better after the above step instead of the other way around. The line intersection from above was a later addition.)
5) Go to 3, and repeat until no new points inside the polygon are found, or I run out of shots (M <= 0).
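The repeated convex-hull step can be done with a standard algorithm such as Andrew's monotone chain (this is textbook code, not the poster's):

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in CCW order."""
    def cross(o, a, b):
        # > 0 when o->a->b makes a left (counter-clockwise) turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()  # drop points that would create a clockwise turn
            h.append(p)
        return h

    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]
```

Interior hit points simply drop out of the hull, so it can be recomputed after every new hit without any special-casing.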
I've added obvious checks (is point already inside polygon? is this outside the 10x10 square? have I already shot this place, as a hit or as a miss?) and a heuristic for saving up shots: If I had 3 or more points outside the polygon within a given distance from the point I'd like to test, I automatically consider it a miss as well.
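The miss-assuming heuristic might be sketched as follows (the radius and count thresholds here are illustrative guesses, not the poster's actual values):

```python
def assume_miss(p, misses, radius=0.3, needed=3):
    """Heuristic from the post: if `needed` or more known misses lie within
    `radius` of candidate point p, treat the query as a miss without
    spending a shot. Saves measurements near edges already probed densely."""
    close = sum((p[0] - m[0]) ** 2 + (p[1] - m[1]) ** 2 <= radius * radius
                for m in misses)
    return close >= needed
```

This trades a small risk of a wrong "miss" verdict for saved shots, which is a good trade on the low-M cases that dominated scoring.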
There were cases where I had leftover shots, but I had no idea what to do with them :(( Thankfully, these were rare enough for me to worry about bigger things.
I thought of computing what I called the "inverse convex hull" on the outside points. You guys called it "balloon/blown-up hull/polygon" or simply a boundary/useful area. It was the smallest polygon such that it contained my polygon and no points inside it were misses (if one were, it had to be a vertex), but I had insane ideas for computing it (really inverting the hull algorithm! dear god... *shame*), and eventually gave up without being able to do it... Seems so simple when you explained it :P
Lastly, I had an "error" function... 0.25 * (300-M)^2 is what worked best for me (since I did not have the "inverted hull"). I added this to the area of the resulting polygon and it seemed good enough, with not-so-good results only on weird polygons or when using a low M on an easy-to-find polygon (that is, with an already very accurate guess).
Someone said something about a near-zero probability of not finding the polygon... Well, I have found some quite odd cases; I'd like to see the ones you guys have to share as well... How zero-probability are these? ;))
seed 13072 N 5 (this one inspired step 3 of the initial search)
seed 38169 N 5 (getting thinner...)
seed 2031 N 12 (...and thinner!)
seed 49095 N 5 (literally, needle in a haystack)
I really liked the idea of generating SVG/PostScript, I missed that since the Visualization window gets a bit cluttered fast! (unless I use very small, hard-to-see points) I plan on looking into your code to learn more about this :))
This was my debut in Marathon Matches, and I liked it very much. The problem is very simple, yet challenging, and I couldn't stop thinking about it, and trying every little drop of optimization I could think of (some of which made the code worse, too!). This is addictive, and I should be careful from now on ;))
I loved to see all of your ideas explained, way better than cracking my head open looking at source codes. Please keep them coming!! :))
EDIT: typos... :/ 

That is exactly the problem I had with it. To compensate I had to reduce the accuracy of angle computations with small M, which, of course, ended up giving very poor estimates for such cases. 
