It's another year with five Thursdays in November, so Thanksgiving in the United States comes well before the end of the month, on November 22. This column will first appear during Thanksgiving weekend.
We've always said how much we love Thanksgiving; its appeal to everyone of every faith and background makes it truly American in spirit. We hope you are enjoying the weekend in whatever way pleases you most, whether it's a large family gathering, a small intimate group, or just some days off to relax. But do remember to be thankful for what you have. While we may not have everything we want, we always have a lot more than we think.
Make checkers part of your weekend with a Tom Wiswell problem, one that he calls "The Follow Through." In his description Mr. Wiswell notes that sometimes a player will give up on a problem just a move or two short of finding the key move. In today's study, staying the course will get you there.
White is a man up but is about to lose it back. How can he win?
Take a break from turkey sandwiches and pumpkin pie, and follow through to the solution. Then let your mouse follow a path to Read More to see how it's done.[Read More]
Computers have come a very long way since their earliest days. It may be less well known, though, that computers have been programmed to play games almost from the very beginning. But we doubt that the hardy coding pioneers of the time ever dreamed just how far the state of the art would come.
Certainly well known today are Google's remarkable Alpha game-playing programs, which use self-teaching or "machine learning" methods. After years and years of computer Go programs barely reaching respectable playing levels, AlphaGo appeared on the scene and defeated one of the world's highest-ranked Go players, something no one expected. And its successor AlphaZero very quickly became able to defeat even the strongest chess-playing programs around. Machine learning is here to stay, and the results are phenomenal.
Of course, the idea isn't new; it was proposed by an IBM scientist over 60 years ago. But implementing it successfully was the hard part: to succeed, a program would have to "train" on millions and millions of different game positions, and that wasn't a realistic possibility until relatively recent years.
The method worked well at first for non-deterministic games --- games with an element of luck. GNU Backgammon played at the master level, as did others. But applications to checkers largely failed. Blondie24 was a lot of fun but never a serious competitor, and NeuroDraughts was never fully developed.
All that has changed, though, with renowned checker engine programmer Ed Gilbert's latest developments for his world-class Kingsrow computer engine. Ed was kind enough to send us the details. The following was written by Ed Gilbert with input from Rein Halbersma.
Until recently, Kingsrow used a manually built and tuned evaluation function. This function computes a numeric score for a game position based on a number of material and positional features. It looks at the number of men and kings of each color, and position attributes including back rank formation, center control, tempo, left-right balance, runaways (men that have an open path to crowning), locks, bridges, tailhooks, king mobility, dog-holes, and several others. Creating this function requires some knowledge of checkers strategy, and is very time consuming.
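As a rough illustration of what a hand-tuned evaluation looks like, here is a minimal sketch in Python. The feature names are drawn from Ed's list, but the weights and the scoring scheme are invented placeholders; Kingsrow's real function is far more elaborate.

```python
# Minimal sketch of a hand-tuned evaluation: a weighted sum of
# material and positional feature differences. All weights here are
# invented placeholders, not Kingsrow's actual values.

WEIGHTS = {
    "man": 100,        # material: each man
    "king": 130,       # material: each king
    "back_rank": 8,    # back-rank formation
    "center": 4,       # center control
    "runaway": 25,     # man with an open path to crowning
}

def evaluate(feature_diffs):
    """Score a position for White. `feature_diffs` maps each feature
    name to (White's count - Black's count), so positive favors White."""
    return sum(WEIGHTS[name] * d for name, d in feature_diffs.items())

# Example: White is a man up with the better back rank, but Black
# has a runaway man.
score = evaluate({"man": 1, "king": 0, "back_rank": 1, "runaway": -1})
assert score == 100 + 8 - 25  # nets out to 83
```

The time-consuming part Ed describes is choosing the features and tuning those weights by hand through testing, which is exactly what the ML approach below replaces.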
The latest Kingsrow has done away with these manually constructed and tuned evaluation features. Instead it is built using machine learning (ML) techniques which require no game specific knowledge other than the basic rules of the game. It has learned to play at a level significantly stronger than previous versions entirely through self-play games.
In a test match of 16,000 blitz games (11-man ballot, 0.3 seconds per move), it scored a +72 Elo advantage over the best manually built and tuned eval version, with more than five times as many wins for the ML Kingsrow as losses.
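For readers curious how a match score becomes an Elo figure: the standard Elo model maps an expected score s (wins plus half the draws, divided by games) to a rating difference of 400 * log10(s / (1 - s)). The win/draw/loss counts below are hypothetical, chosen only to be consistent with the quoted figures (16,000 games, a 5:1 win ratio, roughly +72).

```python
import math

def elo_advantage(score_fraction):
    """Rating difference implied by an expected score
    (wins + draws/2) / games, under the standard Elo model."""
    return 400.0 * math.log10(score_fraction / (1.0 - score_fraction))

# Hypothetical counts consistent with the reported figures: five
# times as many wins as losses over 16,000 games.
wins, losses, draws = 4090, 818, 11092
score = (wins + draws / 2) / (wins + losses + draws)
advantage = elo_advantage(score)  # works out to roughly +72
```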
The ML eval uses a set of overlapping rectangular board regions. These regions cover either 8 or 12 squares, depending on whether kings are present. For every configuration of pieces on these squares, a score is assigned by the machine learning process. A position evaluation is then simply the sum of the scores of each region, plus a term for any material difference in men and kings. In the 8-square regions, each square can either be empty or occupied by one of the four piece types, so there are a total of 5^8 = 390,625 configurations. In the 12-square regions there are no kings, so there are 3^12 = 531,441 configurations.
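The region-table idea can be sketched in Python. The state numbering and square ordering below are assumptions for illustration, not Kingsrow's actual layout; only the configuration counts (5^8 and 3^12) come from the description above.

```python
# Sketch of a region-based table lookup: the contents of a region's
# squares, read as a base-5 (or base-3) number, index a table of
# learned scores. State codes and square order are illustrative
# assumptions.

EMPTY, WHITE_MAN, WHITE_KING, BLACK_MAN, BLACK_KING = range(5)

def region_index(squares, states=5):
    """Encode a region's per-square contents as a base-`states`
    number, giving a unique index into that region's score table."""
    index = 0
    for square in squares:
        index = index * states + square
    return index

# One learned score per configuration: an 8-square region allowing
# kings needs 5**8 = 390,625 entries.
scores = [0.0] * 5**8

def eval_region(squares):
    return scores[region_index(squares)]

# An all-empty region maps to index 0; a region packed with the
# highest state code maps to the last entry.
assert region_index((EMPTY,) * 8) == 0
assert region_index((BLACK_KING,) * 8) == 5**8 - 1
```

A full position score would then be the sum of `eval_region` over all the overlapping regions, plus the material term, as described above.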
To compute values for each configuration, a large number of training positions are needed. I created a database of approximately one million games through rapid self-play; each game took about 5 seconds. The positions are extracted from the games, and each position is assigned the win, draw, or loss value of the game result. Initially the configuration scores in the rectangular board regions are set to random values. Through a process called logistic regression, these values are adjusted to minimize the mean squared error between the eval output for each training position and the win, draw, or loss value assigned from the game results.
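The training loop can be sketched as a toy logistic regression fit by gradient descent. This assumes a sigmoid squashes the raw evaluation into a 0-to-1 expected game result; the tiny feature vectors, learning rate, and data here are all invented for the example, whereas the real training runs over positions from roughly a million games.

```python
import math
import random

def sigmoid(x):
    """Map a raw evaluation score to a 0..1 expected game result."""
    return 1.0 / (1.0 + math.exp(-x))

def train(positions, results, epochs=2000, lr=0.5):
    """Adjust weights to reduce the squared error between the squashed
    eval and the game result (1 = win, 0.5 = draw, 0 = loss)."""
    n = len(positions[0])
    weights = [random.uniform(-0.01, 0.01) for _ in range(n)]  # random start
    for _ in range(epochs):
        for x, y in zip(positions, results):
            p = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
            # Gradient of (p - y)^2 w.r.t. the raw score
            # (constant factor of 2 folded into the learning rate):
            grad = (p - y) * p * (1.0 - p)
            for i in range(n):
                weights[i] -= lr * grad * x[i]
    return weights

# Two invented "positions": feature 0 marks a winning pattern,
# feature 1 a losing one.
random.seed(1)
w = train([(1.0, 0.0), (0.0, 1.0)], [1.0, 0.0])
```

After training, `sigmoid(w[0])` sits near 1 and `sigmoid(w[1])` near 0: the fit has learned which pattern predicts a win, which is the same principle applied to the region-configuration scores at vastly larger scale.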
Similar machine learning techniques have been used in other board game programs. In 1997, Michael Buro described a similar process that he used to build the evaluation function for his Othello program named Logistello. In 2015, Fabien Letouzey created a strong 10x10 international draughts program named Scan using an ML eval, and around this time Michel Grimminck was using an ML eval in his program Dragon. Since then other 10x10 programs have switched to ML evals, including Kingsrow and Jan-Jaap van Horssen's Maximus. I think that the English and Italian variants of Kingsrow are the first 8x8 programs to use an ML eval.
Ed's new super-strong version of Kingsrow is available for free download from his website. Combine it with his 10-piece endgame database, and you'll have by far the strongest checker engine in the world, a fearsome competitor and an incredible training partner.
Let's look at a few difficult positions, some of which were analyzed by human players for years and even by reasonably strong computer engines for hours. Kingsrow ML solved each and every one of them virtually instantly.
First, the so-called "100 years problem" (as in Boland's Masterpiece, p. 125 diagram 1).
Next, the Phantom Fox Den, from Basic Checkers 2010, p. 260.
And finally, a position suggested by Richard Pask, from Complete Checkers p. 273 halfway down, where Mr. Pask notes: "12-16?! has shock value ..."
Surely we don't expect you to solve each of these (unless you wish to), but do look them over and at least form an opinion. Then click on Read More and be amazed.[Read More]
"The Last Song" can mean a lot of things: the end of a concert, maybe even the end of a career; or, more metaphorically, the end of a relationship, an era ... the list goes on, and it's a bit too melancholy for our tastes. But in the world of checkers, we're looking at a much better interpretation for today's column.
We're going to hear, or at least see, the last song from Mr. G. M. Gibson, the author of our recent few Checker School "snappy" problems propounded by our friend Skittle to the aspiring neophyte Nemo. We'd rate this one as a little above average in difficulty; the theme is one we've seen a few times before.
Don't let this be your last song; whether you solve it or not we hope you'll keep coming back to visit with us and that you'll keep on playing checkers. When you've sung the last verse (i.e., come up with a solution), let your mouse sing out on Read More to see how it's done.[Read More]
Perhaps we say this every year --- but winter is speeding in North America, and by the time this column appears it may already be here. At our offices in Hawai`i, we'll soon have those nights when the temperature dips down below 70F, and we understand that in places such as Michigan it gets even colder than that.
And with winter speeding in, it's time for a nice little speed problem sent by regular contributors Lloyd and Josh Gordon.
Got it? We thought so, but you should still click on Read More just to be sure.[Read More]