This weekend The Checker Maven completes an unbelievable 14 years of continuous publication with never a missed deadline. Week after week we've brought you something about checkers, and from what you've told us, you seem to have enjoyed it.
Originally we were going to publish for 10 years. We upped that to 15 and called it a "hard" limit. That leaves us one year to go. But we turned to Mr. Bill Salot for inspiration; he's in his eighties, going strong in every possible way, and makes no excuses about age or health as he continues to support our game of checkers.
So we're going to continue publication. There's no saying how long that will be--- your editor has serious eyesight issues, for one thing--- but we won't quit as long as we can physically continue.
It seems only fitting to celebrate this anniversary and this announcement by going back to our origins, with a "Coffee and Cake" problem from Brian Hinkle. Recall that a "Coffee and Cake" problem is one that you show to your checker friends and bet them coffee and cake that they can't solve it. Brian calls this one "Trumped" (no political reference intended).
Stay the course. Don't make excuses. Carry on. We wouldn't call this an easy problem, but--- like publishing The Checker Maven every week--- your efforts will be well rewarded. When you've found the right moves, click on Read More to check your solution.[Read More]
It's another year with five Thursdays in November, so Thanksgiving in the United States comes well before the end of the month, on November 22. This column will first appear during Thanksgiving weekend.
We've always said how much we love Thanksgiving; its appeal to everyone of every faith and background makes it truly American in spirit. We hope you are enjoying the weekend in whatever way pleases you most, whether it's a large family gathering, a small intimate group, or just some days off to relax. But do remember to be thankful for what you have. While we may not have everything we want, we always have a lot more than we think.
Make checkers part of your weekend with a Tom Wiswell problem, one that he calls "The Follow Through." In his description Mr. Wiswell notes that sometimes a player will give up on a problem just a move or two short of finding the key move. In today's study, staying the course will get you there.
White is a man up but is about to lose it back. How can he win?
Take a break from turkey sandwiches and pumpkin pie, and follow through to the solution. Then let your mouse follow a path to Read More to see how it's done.[Read More]
Computers have progressed a very, very long way since their earliest days. It may not be quite as well known, though, that computers have been programmed to play games since almost the very beginning. But we doubt that the hardy coding pioneers of the time would have dreamed just how far the state of the art has come since then.
Certainly well known today are Google's phenomenal Alpha game playing programs, which contain self-teaching or "machine learning" methods. After years and years of computer Go programs barely reaching respectable playing levels, AlphaGo appeared on the scene and defeated one of the world's highest ranked Go players, something no one ever expected. And AlphaZero very quickly became able to defeat even the strongest chess playing programs around. Machine learning is here to stay, and the results are phenomenal.
Of course, it's not new. The idea was proposed by an IBM scientist over 60 years ago. But implementing it successfully was the issue: to succeed, the computer programs would have to "train" on millions and millions of different game positions. That wasn't a realistic possibility until relatively recent years.
The method worked well at first for non-deterministic games--- games with an element of luck. GNU Backgammon played at the master level, as did others. But applications to checkers largely failed. Blondie24 was a lot of fun but never a serious competitor, and NeuroDraughts wasn't fully developed.
All that has changed, though, with renowned checker engine programmer Ed Gilbert's latest developments for his world class Kingsrow computer engine. Ed was kind enough to send us the details. The following was written by Ed Gilbert with input from Rein Halbersma.
Until recently, Kingsrow used a manually built and tuned evaluation function. This function computes a numeric score for a game position based on a number of material and positional features. It looks at the number of men and kings of each color, and position attributes including back rank formation, center control, tempo, left-right balance, runaways (men that have an open path to crowning), locks, bridges, tailhooks, king mobility, dog-holes, and several others. Creating this function requires some knowledge of checkers strategy, and is very time consuming.
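To make the shape of such a function concrete, here is a minimal sketch in Python. The feature names echo those Ed lists above, but the weights, material values, and the simplified position representation are all illustrative inventions of ours, not Kingsrow's actual code:

```python
# Minimal sketch of a hand-tuned evaluation function.
# Weights and material values are illustrative, not Kingsrow's.

MAN_VALUE = 100
KING_VALUE = 130

# Hand-chosen weights for positional features (hypothetical values).
FEATURE_WEIGHTS = {
    "back_rank": 12,
    "center_control": 6,
    "tempo": 4,
    "runaways": 25,   # men with an open path to crowning
}

def evaluate(white_men, black_men, white_kings, black_kings, features):
    """Score a position from White's point of view.

    `features` maps a feature name to a signed count:
    positive favors White, negative favors Black.
    """
    score = MAN_VALUE * (white_men - black_men)
    score += KING_VALUE * (white_kings - black_kings)
    for name, count in features.items():
        score += FEATURE_WEIGHTS[name] * count
    return score

# White is a man up, but Black controls the center and has a runaway.
score = evaluate(8, 7, 1, 1,
                 {"back_rank": 1, "center_control": -2, "runaways": -1})
```

Every feature in such a scheme needs hand-written detection code and a hand-tuned weight, which is why building a function like this is so time consuming.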
The latest Kingsrow has done away with these manually constructed and tuned evaluation features. Instead it is built using machine learning (ML) techniques which require no game specific knowledge other than the basic rules of the game. It has learned to play at a level significantly stronger than previous versions entirely through self-play games.
In a test match of 16,000 blitz games (11-man ballot, 0.3 seconds per move), it scored a +72 Elo advantage over the best manually built and tuned eval version. There were more than 5 times as many wins for the ML Kingsrow as losses.
The ML eval uses a set of overlapping rectangular board regions. These regions are either 8 or 12 squares, depending on whether kings are present. For every configuration of pieces on these squares, a score is assigned by the machine learning process. A position evaluation is then simply the sum of the scores of each region, plus something for any material differences in men and kings. In the 8-square regions, each square can either be empty or occupied by one of the four piece types, so there are a total of 5^8 = 390,625 configurations. In the 12-square regions there are no kings, so there are 3^12 = 531,441 configurations.
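The arithmetic above amounts to a table lookup: a region's configuration is read as a base-5 (or base-3) number, which indexes a learned score table. The sketch below is our own illustration of the idea; the table is filled with placeholder values rather than values learned from self-play:

```python
# Sketch of the region-based eval lookup. The learned score table is
# replaced by placeholder values; names and layout are illustrative.
import random

EMPTY, W_MAN, B_MAN, W_KING, B_KING = range(5)  # 5 states per square

def region_index(squares):
    """Encode an 8-square region's configuration as a base-5 number."""
    idx = 0
    for s in squares:
        idx = idx * 5 + s
    return idx

random.seed(0)
TABLE = [random.uniform(-1.0, 1.0) for _ in range(5 ** 8)]  # 390,625 entries

def eval_position(regions, material_score):
    """Sum each overlapping region's score, plus a material term."""
    return material_score + sum(TABLE[region_index(r)] for r in regions)
```

The 12-square regions work the same way in base 3, since only empty, white man, and black man can occur there, giving the 3^12 = 531,441 entries mentioned above.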
To compute values for each configuration, a large number of training positions are needed. I created a database of approximately one million games through rapid self-play. Each game took about 5 seconds. The positions are extracted from the games, and each position is assigned the win, draw, or loss value of the game result. Initially the values in the rectangular board regions are assigned random values. Through a process called logistic regression, the values are adjusted to minimize the mean squared error when comparing the eval output of each training position to the win, draw, or loss value that was assigned from the game results.
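A toy version of that training loop might look like this. Everything is shrunk for clarity: the table is tiny, a "position" is just a handful of configuration indices, and the update is plain gradient descent on the squared error between a sigmoid of the summed scores and the game result (1 = win, 0.5 = draw, 0 = loss). It is a sketch of the idea described above, not Ed Gilbert's implementation:

```python
# Toy sketch of training region scores toward game results.
import math
import random

random.seed(1)
N_CONFIGS = 100                    # toy table; real tables hold 5**8 entries
table = [random.uniform(-0.1, 0.1) for _ in range(N_CONFIGS)]

def predict(indices):
    """Estimated win probability: sigmoid of the summed region scores."""
    s = sum(table[i] for i in indices)
    return 1.0 / (1.0 + math.exp(-s))

def train_step(indices, result, lr=0.1):
    """One gradient step shrinking (predict - result)**2."""
    p = predict(indices)
    grad = 2.0 * (p - result) * p * (1.0 - p)   # chain rule through sigmoid
    for i in indices:
        table[i] -= lr * grad

# A position from a won game: repeatedly nudge its regions' scores.
position = [3, 17, 42]             # toy region-configuration indices
for _ in range(500):
    train_step(position, 1.0)
# predict(position) now climbs toward 1.0
```

Run over a million games' worth of positions, updates like this pull each configuration's score toward the results of the games it appeared in.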
Similar machine learning techniques have been used in other board game programs. In 1997, Michael Buro described a similar process that he used to build the evaluation function for his Othello program named Logistello. In 2015, Fabien Letouzey created a strong 10x10 international draughts program named Scan using an ML eval, and around this time Michel Grimminck was using an ML eval in his program Dragon. Since then other 10x10 programs have switched to ML evals, including Kingsrow, and Maximus by Jan-Jaap van Horssen. I think that the English and Italian variants of Kingsrow are the first 8x8 programs to use an ML eval.
Ed's new super-strong version of Kingsrow is available for free download from his website. Combine that with his 10-piece endgame database, and you'll have by far the strongest checker engine in the world, a fearsome competitor and an incredible training partner.
Let's look at a few difficult positions, some of which were analyzed by human players for years and even by reasonably strong computer engines for hours. The ML Kingsrow solved each and every one of them virtually instantly.
First, the so-called "100 years problem" (as in Boland's Masterpiece, p. 125 diagram 1).
Next, the Phantom Fox Den, from Basic Checkers 2010, p. 260.
And finally, a position suggested by Richard Pask, from Complete Checkers p. 273 halfway down, where Mr. Pask notes: "12-16?! has shock value ..."
Surely we don't expect you to solve each of these (unless you wish to), but do look them over and at least form an opinion. Then click on Read More and be amazed.[Read More]
"The Last Song" can mean a lot of things; the end of a concert, maybe even the end of a career; or more metaphorically, the end of a relationship, an era ... the list goes on, and it's a bit too melancholy for our tastes. But in the world of checkers, we're looking at a much better interpretation for today's column.
We're going to hear, or at least see, the last song from Mr. G. M. Gibson, the author of our few recent Checker School "snappy" problems propounded by our friend Skittle to the aspiring neophyte Nemo. We'd rate this one as a little above average in difficulty; the theme is one we've seen a few times before.
Don't let this be your last song; whether you solve it or not we hope you'll keep coming back to visit with us and that you'll keep on playing checkers. When you've sung the last verse (i.e., come up with a solution), let your mouse sing out on Read More to see how it's done.[Read More]
Perhaps we say this every year --- but winter is speeding in across North America, and by the time this column appears it may already be here. At our offices in Hawai`i, we'll soon have those nights when the temperature dips down below 70F, and we understand that in places such as Michigan it gets even colder than that.
And with winter speeding in, it's time for a nice little speed problem sent by regular contributors Lloyd and Josh Gordon.
Got it? We thought so, but you should still click on Read More just to be sure.[Read More]
We've published a number of fine compositions by master problem composer Ed Atkinson, and today we have one that Mr. Atkinson calls The Long Crooked Trail.
Ed tells us, "The first part is original then it runs into old published play as credited in the notes. This ending is a study in the opposition and its changes."
"I think of The Long, Crooked Trail as an endgame lesson, rather than as a problem to be solved, except, perhaps, by experts ... However, it seems instructive for a wide range of players."
We certainly agree, although we think it's worthwhile for you to think about the position and see if you have any ideas about the solution, even if you're not yourself an expert player. That will make the actual solution more meaningful when you do look at it later, by trailing your mouse on Read More.[Read More]
Two wrongs don't make a right, we're told, and if so, surely three wrongs don't, either. A third wrong will only lead to even more trouble--- or in the case of our game of checkers, a loss--- and that leads us to this week's four-fold problem.
We'll look at a published game from years back, in which three wrongs weren't counterbalanced by a right (until today, at least).
At this juncture, White played 23-18, and annotator Gary Garwood called it a weak move. He suggested instead 31-26 or 22-18. But these moves are just as bad. All three of them lose. Three wrongs, no right. But in fact there is a right move and White can obtain a draw here.
Can you find the correct move to draw for White, and then (for extra credit, if you will) show the Black wins for all three incorrect moves? It's a tall assignment, but one that will give you quite a bit of checker insight.
When you're right (and you know it, as the saying goes) do the right thing by clicking your mouse on Read More to see the solutions.[Read More]
Doubling down: You're playing Blackjack at some fabulous Las Vegas casino and you think you've got two great cards. So you "double down" --- double your bet in the hopes of doubling your winnings.
Alas, it's not that simple. While under the best circumstances your chances of winning are almost 2 out of 3, most of the time you'll just double your losses. Those bright lights and free drinks are paid for by someone.
So, how does "doubling down" apply to this week's Checker Maven column? Read on.
Our Checker School columns for the last few months have featured "gem" problems by G. M. Gibson. Today we bring you the concluding entry in the G. M. Gibson problem series, and it's a practical one.
There are two ways for White to win this. If this were found in a problem competition, that would be something of a flaw; dual (or "double") solutions are frowned upon. But as a teaching position, doubling down (or should we say doubling up) can be instructive, and we're asking you to find both winning lines. Can you double down and do that? Can you find at least one solution? They're closely related, and if you find one, you might just find the other.
Try it (at least twice), and then--- wait for it--- double-click your mouse on Read More once to see all the answers.[Read More]
We know little about firearms, but we've read that double-action arms have a longer and heavier trigger pull than single-action arms, and are reputed to be at least somewhat safer but perhaps less accurate. We're sure one of our readers could clarify this easily, but we won't even try.
Returning to checkers: regular contributors Lloyd and Josh Gordon of Toronto sent in this position from one of their nightly games, and it's a position that is surely not safe for the Black forces, if White engages in accurate play.
It's not hard at all, and the title of today's column gives you a huge hint. So take a "shot" at it and after you've solved it, pull your mouse trigger on Read More to check your solution.[Read More]
The Death of Expertise, by Tom Nichols, is a book that attempts to make a case for, well, expertise. The author's main points are that in the internet age, everyone thinks they're an expert, and the democratic concept of equality has come to mean that everyone's opinion is equally valid. Mr. Nichols makes a few good points, but then he says this:
"Sensible differences of opinion deteriorate into a bad high school debate in which the objective is to win and facts are deployed like checkers on a board--- none of this rises to the level of chess--- mostly to knock out other facts."
Mr. Nichols' expertise certainly doesn't extend as far as knowing much about checkers, but that doesn't stop him from making a judgment, and thereby becoming guilty of exactly the sort of thing he condemns.
In checkers, expertise comes to the fore. You have it or you don't and there's no faking or pretending. Take, for instance, the following problem, which will require genuine expertise to solve.
We think this one will really challenge you. Black has the narrowest of draws and must make a long series of star moves (nine by our count). Rise to the level of checkers (not chess), show your stuff, and do your best on this one. Then check your expertise by clicking on Read More to see the solution.
If you haven't yet reached the expert level, though, don't worry. Working on the problem will in and of itself help you develop, even if in the end you don't find the solution.[Read More]