Archive for October, 2006


Monday, October 9th, 2006

Our AI has come so far that it doesn’t make sense to do any more work on it before the enemy men can be recognized. Before that can happen, however, the camera should calibrate itself automatically. We’ve chosen to do this by attaching little white stickers to the red corners of the table. These are easy to recognize, and with a little bit of math the program can calculate the positions of the corners of the actual playing field.

These stickers are only used once, on startup – there’s no reason to do this every frame, especially since the opponent and spectators might block the camera’s view during play. You can see one of the stickers here:
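As a rough illustration of the sticker detection step, the sketch below thresholds a grayscale frame for bright pixels and averages them per image quadrant to get one sticker centroid per corner. The function name, the threshold value, and the quadrant assumption (one sticker per corner of the frame) are all our illustrative assumptions, not the project’s actual code.

```python
# Hypothetical sketch: locate the four white corner stickers in a grayscale
# frame by thresholding and averaging bright pixels per image quadrant.
# The threshold and the one-sticker-per-quadrant assumption are illustrative.

def find_corner_stickers(frame, threshold=200):
    """frame: 2D list of grayscale values (0-255).
    Returns the centroid (x, y) of bright pixels in each quadrant,
    ordered top-left, top-right, bottom-left, bottom-right."""
    h, w = len(frame), len(frame[0])
    sums = {q: [0, 0, 0] for q in range(4)}  # per quadrant: sum_x, sum_y, count
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= threshold:
                q = (1 if x >= w // 2 else 0) + (2 if y >= h // 2 else 0)
                sums[q][0] += x
                sums[q][1] += y
                sums[q][2] += 1
    return [(sums[q][0] / sums[q][2], sums[q][1] / sums[q][2])
            for q in range(4)]
```

With the four sticker centroids known, mapping them to the playing-field corners is a small geometric correction, since the stickers sit a fixed offset from the field on the table.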


The next step is to find out where the actual rods are. This serves two purposes:

  • We know which pixels to exclude when looking for the ball. This is important since the rods can easily appear very bright and be mistaken for the ball.
  • We know where to look for the human players.
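The first purpose – excluding rod pixels from the ball search – could look something like the sketch below. The rod positions, band width, and brightness threshold are illustrative assumptions; the real detector would use the rod positions found by the calibration step.

```python
# Hypothetical sketch: skip pixel columns near the rods when searching for
# the ball. Rod x-positions and the band half-width are illustrative.

ROD_XS = [40, 80, 120, 160]   # horizontal pixel positions of the rods
ROD_HALF_WIDTH = 4            # pixels to each side of a rod to skip

def is_masked(x):
    """True if column x lies on or next to a rod and should be skipped."""
    return any(abs(x - rod_x) <= ROD_HALF_WIDTH for rod_x in ROD_XS)

def find_ball(frame, threshold=220):
    """Return (x, y) of the brightest unmasked pixel above threshold, or None."""
    best = None
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= threshold and not is_masked(x):
                if best is None or value > best[0]:
                    best = (value, x, y)
    return None if best is None else (best[1], best[2])
```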

After this we might want to look into also recognizing the angle of the human players. Some hardware work also needs to be done, and the advanced features of the AI can then be implemented.

The AI

Tuesday, October 3rd, 2006

Work on the actual AI has begun. But first we had to make a big decision about which approach to use. There are two main options:

  1. A purely reactive AI, with all strategies hardcoded.
  2. A more advanced AI that can learn from its mistakes, invent new tactics on the fly, etc.

Each option has its pros and cons:

  1. A reactive AI is easy to create – just put your own strategies into code. It basically looks like hundreds of if-statements about the position and velocity of the ball and the players. If something doesn’t work, you can pinpoint the exact line where it fails and correct it. On the other hand, if the opponent figures out a way to outsmart the AI, it will have no chance of winning, since it doesn’t learn from its mistakes.
  2. This is very difficult to create. You have to come up with a system that can learn from its own mistakes, and it has to be general enough to cover all situations in the game. It’s not at all obvious how to do this. A neural network could be one idea, but its drawback, we think, is that an error in the code – or simply a wrong approach to the whole thing – would be difficult to find and correct. If we could get it to work, however, it would have the potential to be much better than the reactive AI.

We decided to use option 1. Our reason is that option 2 is too risky: we could easily end up without a working AI at all when our project is due. With option 1 we are sure that the AI can play, at least to some degree.
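The hardcoded, reactive style described above can be sketched as a rule table that maps the ball’s state to a rod command, one decision per frame. The function name, thresholds, and command strings are illustrative assumptions, not the project’s actual rules.

```python
# Hypothetical sketch of the reactive approach: hardcoded if-statements on
# the ball's position and velocity decide the defender rod's move each frame.
# Names, thresholds, and command strings are illustrative.

def defender_move(ball_x, ball_y, ball_vx, rod_y):
    """Decide one frame's command for the defender rod."""
    if ball_vx < 0:                  # ball moving toward our goal
        if ball_y < rod_y - 2:
            return "move_up"         # ball is above the rod's men
        if ball_y > rod_y + 2:
            return "move_down"       # ball is below the rod's men
        return "block"               # already lined up with the ball
    return "hold"                    # ball moving away: do nothing
```

The real AI would be hundreds of such rules, one per situation, which is exactly what makes each failure easy to pinpoint and fix.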

Also, by recording the number of successes and failures of different strategies – both those the AI uses and those its opponent uses – a reactive AI can learn at least a little. Not much, but perhaps enough to seem like an intelligence and not just a bunch of if-statements.
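This bookkeeping idea can be sketched as a simple tally that tracks each strategy’s record and prefers the one with the best success ratio so far. The class, strategy names, and scoring rule are illustrative assumptions on top of the idea described above.

```python
# Hypothetical sketch: tally successes and failures per strategy, then
# prefer the strategy with the best record so far. Names are illustrative.

from collections import defaultdict

class StrategyStats:
    def __init__(self):
        self.record = defaultdict(lambda: [0, 0])  # name -> [successes, failures]

    def report(self, name, success):
        """Record the outcome of one use of a strategy."""
        self.record[name][0 if success else 1] += 1

    def best(self, candidates):
        """Pick the candidate strategy with the highest success ratio."""
        def ratio(name):
            s, f = self.record[name]
            return s / (s + f) if s + f else 0.5  # untried strategies are neutral
        return max(candidates, key=ratio)
```

Even this crude counting lets the hardcoded AI drift toward the tactics that actually work against a particular opponent.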