A tale of two universities and two engines – Chess News

Posted: March 26, 2022 at 6:36 am

[Note that Jon Speelman also looks at the content of this article in video format, embedded at the end of the article.]

Last Saturday, March 12th, I was at the RAC clubhouse (Royal Automobile Club) in London's Pall Mall for the annual Varsity match between Oxford and Cambridge Universities.

First played in 1873, this is the world's oldest chess contest and was for years reported on in the pages of the famous Russian chess magazine 64. When I played for Oxford from 1975 to 1977, Cambridge were in the ascendant and we lost all three matches: personally, I lost to Michael Stean and drew twice with Jonathan Mestel. These things swing over time, and at the moment it's very close. Cambridge started as the Elo favourites, but after an endgame save in the last game to finish, Oxford ran out the winners by the narrowest possible margin of 4-3, with the overall score now 60-58 to Cambridge with 22 draws.

The 1921 Oxford team | Find more info at BritBase, John Saunders' excellent games archive

The match has been at the RAC now for nearly half a century, with a dinner afterwards, and in recent years internet coverage and commentary on site. This year's commentator was Matthew Sadler, and for some of the afternoon I acted as sous-commentator, chatting with Matthew about the games.

At one stage I mentioned that I normally use Houdini as my analysis engine, but Matthew [pictured], who of course is immensely knowledgeable about computer chess and has written extensively on Alpha Zero, told me that the latest version of Stockfish is much stronger. I therefore decided to switch to it as my default analysis engine in ChessBase, but I'm now wondering (and of course this can be changed with the click of a mouse) whether I was right.

The question, of course, is how to use the analysis and assessments produced. Most computer engines (Alpha Zero and its daughter Leela are different) are giant bean counters which produce a maximin, maximizing the minimum score they get against the opponent's supposedly best play. Depending on the accuracy of the analysis and the size of the beans, the scores will vary, and while Houdini, with its rating of, I dunno, 2700 or 2800, tends to bumble around with assessments quite close to zero, Stockfish thunders its pronouncements, giving assessments like +/- 2.5 in positions which look to my human eye to be fairly but not entirely clear, and going up/down to +/- 6 or more when even my human eye can see that it ought to be winning.
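The "maximin" idea above can be sketched in a few lines: score each candidate move by the worst reply the opponent can make, then pick the move whose worst case is best. This is a toy illustration on a hand-built game tree, not how any real engine is implemented (real engines add alpha-beta pruning, quiescence search and much else).

```python
def maximin(node, maximising=True):
    """Score a toy game tree.

    Leaves are static evaluations in pawns (the "beans");
    internal nodes are lists of child positions.
    """
    if not isinstance(node, list):  # leaf: static evaluation
        return node
    scores = [maximin(child, not maximising) for child in node]
    # The mover maximises; the opponent is assumed to minimise.
    return max(scores) if maximising else min(scores)

# Two candidate moves, each met by two possible replies.
# Move 1 might win 2.5 pawns but risks -0.3; move 2 guarantees at least +0.4.
tree = [[-0.3, 2.5], [0.4, 0.6]]
print(maximin(tree))  # → 0.4
```

The point is that the engine's displayed score is the value of this worst-case line, which is why a confident engine like Stockfish can announce large numbers the moment it sees a forced sequence.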


The certainty is wondrous but rather unsettling. When I was a kid, I no doubt made the mistake of trying to play the best moves. Nowadays, of course, I know better, and while I will stop and indeed try to work out the best solution in an obviously utterly critical position, most of the time I poddle along choosing decent moves without worrying too much about whether there are better ones. To do this, I've created a story for myself that I can quickly select goodish moves in reasonable positions (of course it's much harder if you're under heavy pressure). But gazing into the face of God, I have to be careful not to be blinded and to undermine this essential fiction.

So I'm still thinking about what to do. Perhaps with enough time available I should use both, analysing with St Houdini and the deity Stockfish alike. Certainly when I'm streaming I try much of the time to use my own carbon-based resources and sometimes dip into a fairly hobbled version of Stockfish which isn't too scary. But occasionally, when I want to know the truth, I turn to My Lord Sesse (the Norwegian-based fusion of Stockfish and ridiculously powerful hardware).

One point I should make in general is not to take too much notice of computer assessments, even if they are right. They are extremely relevant to the world's top players when they are doing opening preparation, but for the rest of us they are just a tool. In particular, I've noticed that when people check their games after playing online, there are some engines which dish out ??s like confetti. Of course people do play some terrible moves, especially at blitz, but ?? should mean a move that loses a piece or maybe even a rook, or at a higher level makes a complete mess of the position. It shouldn't mean that the assessment has dropped drastically without in human terms affecting the result.
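The distinction argued for above can be made concrete. Below is a minimal sketch of an annotator that hands out "??" only when the evaluation swing corresponds to roughly a piece or more, rather than on any sizeable drop. The thresholds are my own illustrative assumptions, not the settings of any real site or engine.

```python
def annotate(eval_before, eval_after):
    """Return a move annotation from the swing in evaluation.

    Evaluations are in pawns, from the mover's point of view,
    before and after the move was played.
    Thresholds are illustrative: ~3 pawns is roughly a minor piece.
    """
    drop = eval_before - eval_after
    if drop >= 3.0:   # lost about a piece or worse: a genuine blunder
        return "??"
    if drop >= 1.0:   # a clear slip, but not a catastrophe
        return "?"
    return ""         # within normal human imprecision

print(annotate(0.5, -3.0))  # → ?? (a genuine blunder)
print(annotate(2.5, 1.2))   # → ?  (a slip, not a catastrophe)
print(annotate(6.0, 4.5))   # → ?  (still completely winning in human terms)
```

Even this crude scheme shows the problem: the last example is flagged although the result, in human terms, is unaffected; a more faithful annotator would also consider whether the move changed the win/draw/loss outcome at all.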

One reason I go to the Varsity match is to help choose the Best Game and Brilliancy Prize winners, often with Ray Keene, in this case with Matthew. Both receive works by the artist Barry Martin and, in this case, since the Brilliancy Prize was shared, both players got prints.

Cambridge team: back, left to right: Miroslav Macko, Matthew Wadsworth, Imogen Camp, Harry Grieve. Front, left to right: Jan Petr, Declan Shafi (captain), Ognjen Stefanovic, Koby Kalavannan. | Photo: John Saunders

For the best game, we decided on the board 1 win by Oxford, and I've annotated it, out of interest, using both engines. I've given them a fairly short time to make an assessment, so they might have changed their minds had they worked for a longer period of time, but this experiment nonetheless gives an indication of the huge difference between them.



