Daily Archives: November 3, 2019

GM Larry Kaufman Interview: ‘New Repertoire For Black And White’ – Chess.com

Posted: November 3, 2019 at 2:48 pm

GM Larry Kaufman's book Kaufman's New Repertoire for Black and White is now available in Europe in all formats, and in the United States as an e-book. It will soon be sold worldwide in all formats, including as an e-book from Forward Chess.

The book serves as a follow-up to the well-received The Kaufman Repertoire for Black and White, but it is not a second edition. Kaufman has completely revised his repertoire from the White side and is now recommending 1.e4 instead of 1.d4, the most impactful of many changes.

Kaufman's perspective is unique among chess authors. A graduate of MIT, he has been involved in computer chess since the 1960s. Today, most chess players will be familiar with his work first on Rybka and now on the Komodo chess engine.

Kaufman first became an IM in 1980 and a GM in 2008 when he won the World Senior Championship. He is currently a vital part of the Chess.com and Komodo team.

GM Larry Kaufman: My first complete repertoire for both colors was written in 2003 (The Chess Advantage in Black and White, Random House), and was partly done as a way to force myself to work out a complete repertoire that was both sound and compact enough to fit in one book, so that I could hope to remember most of it for my own tournament games. I reasoned that many others would also like such a book. The 2012 book was a completely new one for New in Chess, but with largely similar motivation, with the difference that now the engine I had been developing, Komodo, would play a major role.

The 2019 book was largely motivated by the MCTS [Monte Carlo Tree Search, a method for computer engines to choose which moves to investigate] revolution, because MCTS engines favor lines that are apt to work in over-the-board play against humans, not just ones intended for correspondence play vs. engines. The reason is that normal engines assume "perfect" play by the opponent, while MCTS merely assumes "good" play. My hope was that the analysis would be very different from books that use traditional engines. Of course there are good and bad moves in chess, so there is considerable overlap, but I expect people will see substantial differences from existing theory.
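For readers who want to see what that means mechanically, here is a minimal, illustrative sketch in Python of the UCT selection rule at the heart of most MCTS implementations. The class and constant below are invented for illustration and are not taken from Komodo or Lc0.

    import math

    EXPLORATION = 1.4  # illustrative constant; real engines tune this carefully

    class Node:
        """One position in the search tree (simplified for illustration)."""
        def __init__(self, move=None, parent=None):
            self.move = move            # move that led to this position
            self.parent = parent
            self.children = []          # expanded replies
            self.visits = 0             # how often this node has been explored
            self.total_score = 0.0      # sum of playout results (1 = win, 0 = loss)

        def uct_value(self):
            # Average result plus an exploration bonus: rarely tried moves keep
            # getting sampled, so the search weighs "good" replies rather than
            # demanding a single "perfect" refutation.
            if self.visits == 0:
                return float("inf")
            exploit = self.total_score / self.visits
            explore = EXPLORATION * math.sqrt(math.log(self.parent.visits) / self.visits)
            return exploit + explore

    def select_child(node):
        """Descend to the child with the highest UCT value."""
        return max(node.children, key=lambda c: c.uct_value())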

Why did you think the time was right to release the new repertoire?

Because Lc0 [a machine-learning chess engine that uses neural networks and MCTS] and Komodo MCTS had reached a high enough level to make an MCTS-based book logical. Also because the improvement in both hardware and software in seven years made the earlier book rather obsolete.

Who do you think will most benefit from this book?

The actual analysis is at a very high level, so it is suitable for strong players, even for grandmasters. However, the explanations of the moves are aimed at average amateur players, maybe in the 1200 to 1800 range. So I hope that players of a wide range of abilities can learn from this book, although they will learn different things. I suppose that players in the 1600 to 2000 range might get the most benefit, as they are not too strong to need the explanations but strong enough to utilize the advantages they may get from the lines.

Chess players today often suggest that opening theory is draining chess of its excitement. Do you agree?

Yes, I do agree, even though saying so won't help sales of the book! But if you are going to play competitive chess with reasonably strong players, whether over the board or online, you will get better results playing good openings that you know than playing poor ones or good ones that you don't know. I am a big advocate of reforms in chess, whether that means balloted openings, playing Fischer Random, Armageddon playoffs, special anti-draw rules, or whatever. But this book is written for chess as it is played currently.

How do you think opening preparation is different today from when you first became an IM and GM?

Well, that's two completely different questions, because I became an IM in 1980, before computers were of any use to chess players, but a GM in 2008, when I got my title by winning the World Senior Championship with the help of massive computer preparation both before the event and for every game. In 1980, preparation meant carrying around a couple books and reviewing the lines in the books that you wanted to play. Two different worlds!

Which major openings are you no longer recommending in the new repertoire, and which ones are you recommending now?

For White, it was a total switch from 1.d4 to 1.e4, motivated by some positive developments for 1.e4 and some negative ones for 1.d4. For Black, the main change is that the Breyer defense to the Spanish is now my backup line, with the Marshall becoming my main one, along with a chapter on the Moller that I suggest might be better for correspondence play than for over the board. No major change vs. 1.d4, although the games and analysis are heavily revised.

In analyzing the opening, in which aspects was Komodo superior, and in which aspects was Lc0 superior?

Lc0 was generally superior on my computer, because it has a very powerful, expensive GPU with 3,000 cores; Komodo just uses six of the eight CPUs. Lc0 has some weak points though, which Komodo patches up: Lc0 is rather blind to perpetual checks, relatively weak in evaluating many endgames, and lacks specialized chess knowledge that applies in infrequent situations. Also I would say that Komodo is generally superior when it is necessary to see a long, precise series of moves to justify the initial move choice. Lc0 will usually find good moves quicker with my GPU, but will be slow to admit that it is wrong.

Is Lc0 as strong as AlphaZero now?

If they both ran on the same or comparable hardware, I think they are close in strength. But even Chess.com doesn't have hardware like Google. Probably a five or 10-minute Lc0 think on my computer is comparable to a one-minute think of AlphaZero on Google's hardware.

Given your expertise with engines, have you any interest in competing in the International Correspondence Chess Federation?

The percentage of draws between top ICCF players is in the mid or high 90s, I'm told. Playing a game with such a draw percentage doesn't interest me.

How do you evaluate the future of Chess960 (Fischer Random) as a means of circumventing opening theory?

I am a big advocate of Chess960 (Fischer Random), especially after seeing the semifinals and finals of the World Fischer Random Chess Championship. I actually won the only U.S. open championship of the game ever held (I believe), about a decade ago. I don't find it interesting to watch two super-GMs reproduce 20+ moves of computer analysis in a broadcast standard chess game, but I try not to miss a single game of 960 live between the top players, knowing that they are thinking for themselves after just a couple moves. I only regret that there are hardly any live, over-the-board events that most players can join. I would love to see 960 take an equal footing with standard chess before I become too old to play.

Is the French Defense as bad as IM Danny Rensch says it is?

Normal chess engines think it's more or less as good as anything, but statistics and the neural-network engines rate it as inferior to the "big three" (1...e5, Sicilian, Caro). I think that you need to know a lot to prove that the French is inferior, but at super-GM or correspondence level, it is just the fourth-best defense. But I owe my GM title and World Senior Championship to the French!

Is Bobby Fischer right that 1.e4 is "best by test"?

Yes, I agree with him on this (as well as on 960 and on the use of increment in chess, although I have a better claim to being the inventor of increment chess than he does!). But 1.d4 and 1.Nf3 are not far behind.

Which question do you wish you had been asked about the new repertoire?

"How can I write a technical opening book full of the latest games and novelties played and analyzed in 2019, when I'm old enough to have had a chess teacher (Harold M. Phillips) who played against Steinitz in 1894?"

As an active partner in Komodo, I have the technical knowledge needed to make best use of computers. I can afford the best practical hardware. Also I probably understand better than any other GM where the displayed scores are coming from, at least in the case of Komodo, and so I can explain the +.53 eval to the reader. I'm still an active tournament player, the oldest active American GM, about to turn 72. My parents lived to an average age of 100, so I'm pretty young still for an old man!

Visit link:

GM Larry Kaufman Interview: 'New Repertoire For Black And White' - Chess.com

Posted in Chess Engines

How To Win With The Halloween Gambit – Chess.com

Posted: at 2:48 pm

The Halloween Gambit is surely the spookiest way to start a chess game. If you're a conservative chess player who doesn't like to give away pieces, it may indeed be "2 spooky 4 u."

Could this early knight sacrifice be a legitimate opening? No, probably not. But maybe.

Don't take my word for it. Let's see how the world's number-four player, Maxime Vachier-Lagrave, plays this scary gambit:

"I found my new opening," said MVL after the game. "In blitz its sort of a decent weapon."

The doubters out there may chalk up this win to the so-called "throwaway" competition playing Black...

...But this couldn't be further from the truth.

We know this thanks to a superhuman computer engine losing to the supernatural gambit in the Computer Chess Championship. Both of the beastly machines in this game are rated over 3000, with the machine-learning engine Winter coming out on top in this autumn-themed gambit.

Don't be too scared for Rubi, though. It struck back in the next game with a revenge-win using the other side of the gambit. Check out its spooktacular bishop sacrifice on f7 to go two pieces down early:

How can two ridiculously strong computers beat each other, each from the white side of this gambit? The only answer is that the Halloween Gambit is a powerful and uncanny attack, especially in blitz chess.

So how do you win with the Halloween Gambit? The gist of it seems to be, like most encounters in chess and life, to just wait for your opponent to make a mistake. Otherwise how could the power of fright alone overwhelm the material advantage of a knight for a pawn?

With the clock ticking down and two powerful center pawns chasing their knights to the back rank, many players panic when facing this gambit. I know it's hard to believe, but even I don't usually get checkmated by a pawn in 18 moves.

What's the history behind the Halloween Gambit? The opening was known by club players in the 1800s, but got its scary name relatively recently, in the late 1990s.

Steffen Jakob, a German chess player and computer programmer, became interested in the opening in 1996 and made a clone of the Crafty chess engine devoted to playing the gambit. Jakob named the opening the Halloween Attack.

"Many players are shocked, the way they would be frightened by a Halloween mask, when they are mentally prepared for a boring Four Knights, and then they are faced with Nxe5," Jakob told the journalist Tim Krabbe in 2000.

The Halloween Gambit is so good in speed chess that even the author of "Pete's Pathetic Chess" can use it to win, as long as the opponent isn't in the world's top four.

So how do you defend against this spine-chilling opening? It helps if you don't have a spine. Just look how calmly Stockfish, the GOAT computer engine, deals with the Halloween Gambit.

It does not even break a sweat swatting down the knight sacrifice, largely because it also lacks sweat glands.

In delivering checkmate, Stockfish used its king's knight to strike the final blow, the same king's knight that White sacrifices in the Halloween Gambit. Spooky! It's as if to say, "See? This knight is useful."

You can watch the world's top chess engines play this gambit in the Computer Chess Championship Halloween Gambit bonus, live now through Nov. 1.

Or even better, try out the Halloween Gambit in your own games...if you dare!

See the original post:

How To Win With The Halloween Gambit - Chess.com

Posted in Chess Engines

‘Neurohacking’ cream which ‘helps memory and learning’ could be available in 5 years – Mirror Online

Posted: at 2:47 pm

Scientists claim a new cream helps users learn to speak new languages and play musical instruments.

The outlandish boast was made by the team behind "neurohacking" cream dihexa.

And according to reports it could be available in the UK within five years.

It was developed by experts at Washington State University in an effort to combat Alzheimer's disease.

Developers say it does this by slowing cell death and suppressing enzymes which wipe out chemicals used for memory and learning.

The product is on sale in the US, but makers say they do not know if Brits will embrace it as keenly.

Dr Daniel Stickler, who represents biotech company Apeiron, told the Sunday Telegraph: "When we were in London and meeting people, we were presenting this idea of improving human behaviour and we were finding that as long as there were other people worse off than them it was all OK, they just kept calm and carried on.

"That mindset was very different for us coming from the US where we have a very large percentage of people who think, 'I know I'm good but I want to get better.'"

He added: "If youre learning to play the guitar or something, its really good for creating that kind of mental response."

Back in 2012, Prof Joe Harding of Washington State University said: "This is about recovering function.

"That's what makes these things totally unique. They're not designed necessarily to stop anything.

"They're designed to fix what's broken. As far as we can see, they work."

Trials have shown the drug can improve concentration, mood and memory.

Read more:

'Neurohacking' cream which 'helps memory and learning' could be available in 5 years - Mirror Online

Posted in Neurohacking

Quantum Computing: The Why and How – insideHPC

Posted: at 2:46 pm

In this video from the Argonne Training Program on Extreme-Scale Computing 2019, Jonathan Baker from the University of Chicago presents: Quantum Computing: The Why and How.

The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides two intensive weeks of training on the key skills, approaches, and tools needed to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future. As a bridge to that future, this two-week program fills the gap that exists in the training computational scientists typically receive through formal education or other shorter courses. With around 70 participants accepted each year, admission to the ATPESC program is highly competitive. ATPESC is part of the Exascale Computing Project, a collaborative effort of the DOE Office of Science and the National Nuclear Security Administration.

Jonathan Baker is a second-year Ph.D. student at The University of Chicago advised by Fred Chong. He is studying quantum architectures, specifically how to map quantum algorithms more efficiently to near-term devices. Additionally, he is interested in multivalued logic and in taking advantage of quantum computing's natural access to higher-order states, using these states to make computation more efficient. Prior to beginning his Ph.D., he studied at the University of Notre Dame, where he obtained a B.S. of Engineering in computer science and a B.S. in Chemistry and Mathematics.


Read more here:

Quantum Computing: The Why and How - insideHPC

Posted in Quantum Computing

Opinion | Quantum supremacy and the cat that's neither alive nor dead – Livemint

Posted: at 2:46 pm

There is this joke about a cat that belonged to a gentleman called Schrödinger: "Schrödinger's cat walks into a bar. And doesn't."

If you chuckled, you must have been a student of quantum physics. Austrian physicist Erwin Schrödinger's famous cat paradox illustrates the seeming contradiction between what we see with our naked eye and what quantum theory says is actually happening at the microscopic scale. He used it to challenge the "Copenhagen interpretation" of quantum mechanics, which states that a particle "exists in all states at once until observed". Schrödinger's cat is in a box and could be alive or dead, but till the box is opened, you won't know its state. This would mean that the cat could be both alive and dead at the same time.

Now, hold that thought while we leap from cats to computers. The ones that we use now follow the principles of a Turing machine. Here, information is encoded into bits (either 1s or 0s), and one can apply a series of operations (and, or, not) to those bits to perform any computation. A quantum computer is different: it uses qubits, the quantum analogue of bits. Now, jump back to the cat. Much like the feline in Schrödinger's box, a qubit is not always 0 or 1, but can be both at the same time. Only at the end of the computation, or when the box is opened, would you know which; during the computation, its exact state is indeterminate.
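In the standard notation (a textbook formula, not anything specific to any company's machine), that indeterminate state is written as a weighted combination of the two classical values, with the weights fixing the measurement probabilities:

    % A qubit before measurement: a superposition of the classical states.
    % |alpha|^2 and |beta|^2 are the probabilities of reading out 0 or 1.
    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
    \qquad |\alpha|^{2} + |\beta|^{2} = 1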

If this leaves you scratching your head, do not fret. In a 2017 Wall Street Journal interview, here is what Bill Gates said: "I know a lot of physics and a lot of math. But the one place where they put up slides and it is hieroglyphics, it's quantum." Even Einstein had some difficulty grasping the concept and famously dismissed it with, "God does not play dice with the universe."

What makes a quantum computer exciting is its ability to exploit these properties of quantum physics to perform certain calculations far more efficiently and faster than any supercomputer. Thus, megacorps such as Microsoft, IBM, and Google have been working on quantum computers. Last week, Google claimed to have achieved quantum supremacy, or the point when such a computer can perform a calculation that a traditional one cannot complete within its lifetime. Google's quantum computer took 200 seconds for a calculation that would take a supercomputer 10,000 years.

While all this is impressive, what does it mean for us? It's hard to fully answer this, as we are venturing into an entirely new area, and the future will reveal applications we have not even imagined yet. It's a bit like classical computing: we did not know how totally it would revolutionize our world. In the same manner, quantum computing could be a game-changer for many industries.

Take big data and analytics. We produce 3 exabits of data every day, equivalent to 300,000 Libraries of Congress. Classical computers are reaching the limits of their processing power. However, with exponentially more powerful quantum computers, we could spot unseen patterns in large data sets, integrate data from different data sets, and tackle the whole problem at once. This would be rocket fuel for artificial intelligence (AI), with quantum computing offering quick feedback and collapsing the learning curve of machines. This would make AI more intuitive, extend it to more industries and help build artificial general intelligence.

Online security will be impacted, with our current data encryption strategies wilting under the assault of quantum power. On the other hand, there will be formidable new cryptographic methods like quantum key distribution, where even if the message gets intercepted, no one can read it (the cat, again). On a side note, the security of every public blockchain will be under threat from quantum hacks; it was no coincidence that Bitcoin's price slumped the day Google announced its breakthrough. Quantum computing could speed up drug development by reviewing multiple molecules simultaneously and by quickly sequencing individual DNA for personalized drugs. Another application lies in weather forecasting and, more importantly, climate-change prediction. It will require the tremendous power of quantum computing to create the complex, ever-changing weather models needed to properly predict and respond to the climate cataclysm that awaits us.

It's a brave new world of quantum computing we're entering, and we will discover its possibilities as we go along. If you feel you've got it but are still confused, that's okay; it is the nature of this beast. Just step out of the box.

Jaspreet Bindra is a digital transformation and technology expert, and the author of the book The Tech Whisperer

See the article here:

Opinion | Quantum supremacy and the cat thats neither alive nor dead - Livemint

Posted in Quantum Computing

IBM picked a fight with Google over its claims of ‘quantum supremacy.’ Here’s why experts say the feud could shake up the tech industry’s balance of…

Posted: at 2:46 pm

Most people probably couldn't tell you what quantum computing is. And, as we learned last week from an unusual public spat between tech companies, it turns out that the top quantum-computing engineers aren't so sure either.

It all started when Google researchers published a paper in the journal Nature declaring that they achieved "quantum supremacy," a breakthrough in computing speed so radical that, to use a fictional analogy, it might be akin to attaining hyperspace travel speed.

But before the champagne had even been poured, IBM was disputing Google's claims with a blog post, insisting that, technically, "quantum supremacy" hadn't really been reached.

Quantum computers have special properties that allow them to solve problems exponentially faster than even the most powerful computers today. Google researchers said their quantum computer solved a problem in 200 seconds that would take a powerful supercomputer 10,000 years to solve, a potential game changer for fighting climate change, discovering drugs, predicting the stock market, and cracking the toughest encryption.

Quantum computing is still in its infant stages, and you won't find it in your office anytime soon, but investors and researchers see huge potential in it. Already, companies like Google, IBM, Microsoft, and Intel are racing to build quantum computers, while venture capitalists are pouring money into startups like IonQ, Rigetti Computing, Aliro, and D-Wave.

The feud between IBM and Google is in many ways academic. But it also highlights the prominence and importance within the industry of a technology considered science fiction just a decade ago. As computing technology gets pushed to its limits, new technology like quantum computing has the potential to open entirely new markets and shake up the balance of powers in the tech industry.

And while Google and IBM are taking different approaches to quantum, the rival claims underscore the seriousness with which each company views the technology.

"Google is doing things as a research project," Brian Hopkins, the vice president and principal analyst at Forrester, told Business Insider. "IBM has a commercial strategy, pouring money in to get money out. They want to get to a point where quantum computers are powerful enough so people are willing to pay money to solve problems."

At the same time, rivals like Microsoft, Intel, and quantum-computing startups are lauding Google's experiment and see it as a good sign for quantum computing.

Image: Jim Clarke, Intel's director of quantum hardware, with one of the company's quantum processors. Photo: Intel

"We're beginning to have a discussion that a quantum computer can do something that a supercomputer does not," Jim Clarke, the director of quantum hardware at Intel, told Business Insider. "It motivates us that we're on the right path. There's still a long way to go to get to a useful quantum computer. I think this is a positive step along the way."

Computer experts told Business Insider it would take time to prove whether Google did, in fact, reach this benchmark and whether IBM's disputes were correct.

IBM, which built Summit, the most powerful supercomputer, said the experiment could be run by a supercomputer in 2 1/2 days, as opposed to the 10,000 years Google said would be required with a traditional computing technology.

In other words, even though Google's quantum computer is faster, the gap matters far less if the supercomputer really could run the same problem in 2 1/2 days. A problem that takes 10,000 years to solve classically is effectively out of reach, but one that takes 2 1/2 days is not.
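To put rough numbers on that argument (a back-of-the-envelope calculation from the figures quoted above, not data from either company):

    # Back-of-the-envelope comparison of the two claims, using the figures above.
    quantum_seconds = 200                            # Google's reported quantum runtime
    classical_google = 10_000 * 365.25 * 24 * 3600   # Google's estimate: 10,000 years, in seconds
    classical_ibm = 2.5 * 24 * 3600                  # IBM's estimate: 2.5 days, in seconds

    print(f"Speedup under Google's estimate: {classical_google / quantum_seconds:,.0f}x")
    print(f"Speedup under IBM's estimate:    {classical_ibm / quantum_seconds:,.0f}x")
    # Roughly 1.6 billion-fold versus roughly 1,000-fold: still a large gap,
    # but a very different kind of claim.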

"The conflict between Google and IBM highlights that there's some ambiguity in the definition of quantum supremacy," Bill Fefferman, an assistant professor of computer science at the University of Chicago, told Business Insider.

Still, Google's work shows the progress of quantum computing, and people shouldn't lose sight of that, despite the arguments about it, Martin Reynolds, the distinguished vice president at Gartner, said.

That being said, since quantum computing is still in its early days, Google's milestone is "a bit like being the record holder in the 3-yard sprint," Reynolds said.

Fefferman added that the "jury is still out" on whether Google has actually reached quantum supremacy, but not because of anything IBM said.

"While it's not completely clear to me that there's currently enough evidence to conclude that we've reached quantum supremacy, Google is certainly breaking new ground and going places people have not gone before," Fefferman said.

And though Google's experiment is a "major scientific breakthrough," it has little influence on commercial users today, Matthew Brisse, the research vice president at Gartner, said.

"It demonstrates progress in the quantum community, but from an end-user perspective, it doesn't change anyone's plans or anyone's project initiatives because we're still many years away," Brisse told Business Insider. "We're literally five to 10 years away from using this in a commercial production environment."

In general, IBM and Google's competitors told Business Insider they saw the experiment as a step forward.

"This is an exciting scientific achievement for the quantum industry and another step on a long journey towards a scalable, viable quantum future," a Microsoft spokesperson said in a statement.

Image: Rigetti Computing CEO Chad Rigetti. Photo: YouTube/Y Combinator

Chad Rigetti, the founder and CEO of the startup Rigetti Quantum Computing, called Google's experiment a "remarkable achievement" that should give researchers, policymakers, investors, and other users more confidence in quantum computing.

He added that IBM's claims haven't been tested on actual hardware yet, and that even if they were borne out, the calculation would still be slower and more expensive to run on a supercomputer than on Google's quantum computer.

"The Google experiment is a landmark scientific achievement and the most important milestone to date in quantum computing," Rigetti told Business Insider. "It shows that real commercial applications are now within sight for superconducting qubit systems."

Clarke, of Intel, agreed that it was a positive for the quantum community overall, though he said that calling it "quantum supremacy" might be debatable. Clarke also said that it could show that quantum computers could be more efficient, as he suspects that Google's quantum computer uses much less power than running a Summit supercomputer for over two days.

"What's been interesting to me is seeing some of the negative reactions to this announcement," Clarke told Business Insider. "If you're in the quantum community, any good experiment that suggests there's a long future in quantum computing should be appreciated. I haven't quite understood some of the negative response at this point."

What happens next is that other scientists will review the paper, work to prove or disprove it, and debate whether quantum supremacy has been reached. Ines Montano, an associate professor of applied physics at Northern Arizona University, said IBM would likely work to prove that its supercomputer could run that experiment in a shorter time frame.

"IBM will have to figure out something to put some data to their claim," Montano told Business Insider. "That will be a very public discussion for a while. In the meantime, there's the quest is to find problems that may be more applicable to current things ... We're not as far away as we were thinking 10 years ago."

This will likely take some time, as quantum supremacy is difficult to prove. Quantum computing remains in its early stages, experts say, and they expect more advancements in the coming years, though they predict the industry is still at least 10 years away from useful quantum computers.

"Google's managed to find a complex problem that they can solve on this system," Reynolds told Business Insider. "It isn't a useful solution, but it is a big step forwards. IBM offers a way to solve the problem with classical hardware in a couple of days. That's also impressive and shows the caliber of thinking that we find in these early quantum programs."

Continued here:

IBM picked a fight with Google over its claims of 'quantum supremacy.' Here's why experts say the feud could shake up the tech industry's balance of...

Posted in Quantum Computing

Editorial: Quantum computing is a competition we can’t afford to lose – The Winchester Star

Posted: at 2:46 pm

We Americans have a habit of bragging about our feats of technology. Our chief economic and military rivals, namely Russia and China, seldom do. They prefer to keep their secrets.

No one in this country is certain, then, how far the state-controlled economies of those nations have gone in developing quantum computing.

What is certain is that our national security, both militarily and economically, demands that the United States be first to perfect the technology. The reason for that was demonstrated in an announcement Wednesday by technology giant Google.

Google officials claim to have achieved a breakthrough in quantum computing. They say they have developed an experimental quantum computing processor capable of completing a complex mathematical calculation in less than four minutes.

Google says it would take the most advanced conventional supercomputer in existence about 10,000 years to do that.

Wrap your mind around that, if you can.

Other companies working with quantum computing, including IBM, Intel and Microsoft, say Google is exaggerating. IBM researchers told The Associated Press the test calculation used by Google actually could be handled by certain supercomputers in two and one-half days.

Still, you get the idea: Quantum computing will give the nation that gets there first, including its armed forces and industries, an enormous advantage over everyone else. The possibilities, ranging from near-perfect missile defense systems to vastly accelerated research on curing diseases, are virtually endless.

U.S. officials are cognizant of the ramifications of quantum computing, to the point that Washington has allocated $1.2 billion to support research during the next five years.

If that is not enough to ensure the United States stays in the lead in the quantum computing race, more should be provided. This is a competition we cannot afford to lose.

Read this article:

Editorial: Quantum computing is a competition we can't afford to lose - The Winchester Star

Posted in Quantum Computing

Quantum investment soars in the UK to more than £1bn – Management Today

Posted: at 2:46 pm

What's very small but set to be very big? Quantum technology, according to the UK government, which took the decision in June to reinvest in a scheme designed to move the science beyond academia and research laboratories and into commercial and practical use.

Some £1bn has already been invested in the UK's National Quantum Technologies Programme, which was set up in 2013. The government recently announced a further £153m of funding through the Industrial Strategy Challenge Fund (which aims to ensure that 2.4 per cent of GDP is invested in R&D by 2027) plus £200m of investment from the private sector.

This means spending by industry is outstripping government investment for the first time, a good indication that the technology has stepped beyond an initial, broadly speculative stage. "Quantum is no longer an experimental science for the UK," says former science minister Chris Skidmore. "Investment by government and businesses is paying off as we become one of the world's leading nations for quantum science and technologies."

Whereas "classical" computers are based on a structure of binary choices yes or no; on or off quantum computing is a lot more complicated. Classical chips rely on whether or not an electron is conducted from one atom to another around a circuit, but super-cooled quantum chips allow us to interface with the world at a much deeper level, taking into account properties such as superposition, entanglement or interference.

Confused? Think of a simple coin toss. Rather than being able to simply call heads or tails, superposition allows us to take into account when a coin spins, while entanglement is whether its properties are intrinsically linked with those of another coin.
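For the mathematically inclined, the textbook example of entanglement is the Bell state below (standard notation, not tied to any particular vendor's hardware): measuring either "coin" instantly fixes the outcome of the other.

    % Two entangled qubits: the measurement outcomes are perfectly correlated,
    % even though neither qubit has a definite value on its own.
    |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)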

To help harness this new potential in different areas, the government's programme works across four hubs: sensing and timing; imaging; computing and simulation; and communications.

One of the key advances that quantum computing is expected to bring is not just substantially greater processing speed but the ability to mimic and, therefore, understand and predict the ways that nature works.

For example, this could allow us to look directly inside the human body, see through smoke or mist, develop new drugs much more quickly and reliably by reviewing the effect on many molecules at the same time, or even make our traffic run smoothly. Meanwhile, the Met Office has already invested in this technology to improve weather forecasting.

Image: IBM Q System One quantum computer, photo by Misha Friedman/Getty Images

See the rest here:

Quantum investment soars in the UK to more than £1bn - Management Today

Posted in Quantum Computing

Courts continue to consider intersection of Fourth Amendment and technology: without a warrant, retrieval of car’s electronic data unconstitutional,…

Posted: at 2:44 pm

The Fourth Amendment has received significant attention in recent court rulings involving surveillance, electronic data retrieval, and other types of technology. Two rulings issued on October 21, 2019 demonstrate how difficult it can be to anticipate the outcome of Fourth Amendment disputes relating to technology. In one, the Georgia Supreme Court found the warrantless search of electronic data from a car following a fatal accident to be unconstitutional. In the second, the U.S. District Court for the Western District of Tennessee held that the Fourth Amendment permitted the warrantless placement of a government surveillance camera on a man's private hunting and fishing property.

Mobley v. State (Ga. Oct. 21, 2019)

In Mobley, the Georgia Supreme Court ruled that a trial court erred in denying a motion to suppress evidence that law enforcement retrieved from the electronic data recorder in the defendant's car. In coming to this conclusion, the Mobley court ruled that, regardless of any reasonable expectation of privacy, the physical entry of a police officer into the defendant's car to retrieve the electronic data was a search for Fourth Amendment purposes.

The Mobley case arose after a car driven by defendant Mobley collided with a car that pulled out of a private driveway; both occupants of the latter car died. Before the cars were removed from the accident scene, a police investigator entered both cars and attached a crash data retrieval device to data ports in the cars to download available data. The data revealed that shortly before the collision, Mobley's car was traveling almost 100 miles per hour. The next day, law enforcement applied for a warrant to seize the electronic data recorders. The warrant was issued, but no additional data was retrieved from the recorders. A grand jury indicted Mobley on a number of counts, including vehicular homicide.

Mobley moved to suppress evidence retrieved from the data recorder, arguing that it was an unreasonable search and seizure in violation of the Fourth Amendment. The trial court denied the motion, finding that because police obtained a warrant for the data the following day, and the warrant application did not rely on data from the device, that data would have inevitably been discovered by investigators.

Mobley appealed, and the Georgia Court of Appeals affirmed. The appeals court found Mobley had no reasonable expectation of privacy in the data, as much of it, such as approximate speed and whether a driver applied the brakes, could be observed by a person looking at the car based on speed reduction or brake light activation.

Last week, the Georgia Supreme Court reversed. Noting that the Fourth Amendment is concerned with government trespasses upon the rights of individuals to be secure in their persons, houses, papers, and effects, the court determined that the retrieval implicated Mobley's Fourth Amendment rights, regardless of any reasonable expectation of privacy. The court pointed out that while the reasonable-expectation-of-privacy inquiry established in Katz v. United States, 389 U.S. 347 (1967), is one way of determining whether the Fourth Amendment is implicated, that analysis did not replace the traditional trespass test. Because entering Mobley's vehicle was trespassory in nature, the reasonable-expectation-of-privacy inquiry was unnecessary.

Georgia's highest court then concluded that because the data retrieval occurred without a warrant, it was an unreasonable search and seizure that violated the Fourth Amendment. Also, because the record did not show the law enforcement officers were actively pursuing a warrant at the time the data was retrieved, the retrieval did not fit into the narrow inevitable-discovery exception. Thus, the court held that the motion to suppress should have been granted.

In addition to re-affirming the vitality of the trespassory inquiry post-Katz, Mobley also demonstrates that adherence to criminal procedure must not take a back seat to the speed and convenience of digital data collection.

Hollingsworth v. United States Fish and Wildlife Service, et al. (W.D. Tenn. Oct. 21, 2019)

While Mobley may be hailed by privacy advocates, the Western District of Tennessee's ruling in Hollingsworth is likely to be less enthusiastically embraced, despite being grounded in solid precedent. In Hollingsworth, the court dismissed constitutional claims against the U.S. Fish and Wildlife Service, an agent for the service, and an agent of Tennessee's Wildlife Resources Agency based on a camera positioned on plaintiff Hollingsworth's property to surveil him. The court found that the Fourth Amendment was not implicated because although the camera was on plaintiff's property, it was in an open field rather than on the property surrounding plaintiff's home.

The case arose after Hollingsworth found a camera mounted to a tree in the interior of property he used for hunting and fishing. Hollingsworth removed the camera and found pictures of men he believed to be the defendants on the SD card it contained. Hollingsworth sued the individual agents and their respective agencies for Fourth Amendment violations and trespass, although the claim against the state wildlife agency was dismissed on sovereign immunity grounds. The remaining defendants moved to dismiss.

The court observed that while the Fourth Amendment protects houses against unreasonable searches, that protection extends only to the dwelling and the surrounding land (known as the curtilage), where privacy expectations are most heightened. However, the Fourth Amendment does not prohibit all investigations on private property, such as in those areas which are more easily accessible to the public and less intimate to one's home. Such areas beyond the curtilage are considered open field, and intrusion upon those areas is not considered a search of one's house.

As a result, the court found that even though Hollingsworth found the camera in a position designed to record his entrance and exit from his property, and even though his property was posted and landlocked by other parcels, the use of the camera did not constitute a Fourth Amendment violation. This was true even where the defendants had to commit a trespass to reach the area where they placed the camera, because the Fourth Amendment protects a smaller scope of property than trespass law does. The court also explained that prior cases had held that areas outside of the curtilage still could be considered open field despite efforts to prevent unwanted guests from intruding, such as the use of fences, locked gates, and no-trespassing signs. Further, the court explained that a reasonable-expectation-of-privacy analysis was unnecessary, because courts have consistently held that individuals cannot have such an expectation in open-field property. Finally, the court rejected Hollingsworth's argument that the use of a surveillance camera to observe his movements was analogous to the GPS tracking of a person's movements addressed in U.S. v. Jones, 565 U.S. 400 (2012), because the Jones holding relied on the determination that a car was an "effect" for Fourth Amendment purposes and thus in the zone of constitutional privacy, whereas an open field is not.

Conclusion

Although it may seem illogical that the Fourth Amendment would tolerate the warrantless placement of a surveillance camera on an individual's property but not the use of information for which a warrant was obtained the next day, both decisions are grounded in long-standing precedent. Although neither vehicle data recorders nor surveillance cameras existed at the time the Fourth Amendment was drafted, the trespassory inquiry and the open-field doctrine are both sufficiently developed to adequately address new technology. Technological developments can pose problems where the law is ill-suited to adapt to novel issues, but for now, it appears that Fourth Amendment jurisprudence is flexible enough to tackle changing circumstances. However, had Hollingsworth involved, for example, a highly powerful zoom lens or drone, the argument against applying the open-field doctrine might have been stronger.

See the rest here:
Courts continue to consider intersection of Fourth Amendment and technology: without a warrant, retrieval of car's electronic data unconstitutional,...

Posted in Fourth Amendment

Reasonable Suspicion From Driver to Car: A Few Thoughts on Kansas v. Glover – Reason

Posted: at 2:44 pm

Next Monday, the Supreme Court will hold argument in an interesting Fourth Amendment case, Kansas v. Glover. Glover raises a simple question: When an officer spots a car driving on a public road, and a license check reveals that the registered owner of the car has a suspended license, does the fact that the registered owner of the car has a suspended license create reasonable suspicion that the driver of the car has a suspended license that then justifies a Terry stop of the car? Put another way, for Fourth Amendment purposes, can the police presume that the registered owner of a car is driving it?

Glover touches on a conceptually rich Fourth Amendment question I have written about before, and I wanted to offer a few thoughts about different ways the Justices might approach it.

I. What is the nature of reasonable suspicion?

The most interesting part of Glover, I think, is that it raises a fundamental question about the nature of the reasonable suspicion test, and of likelihood thresholds in Fourth Amendment law, such as probable cause, more broadly.

Here's the context. The norm in Fourth Amendment law is for every case on likelihood thresholds to be fact-specific. To learn what reasonable suspicion or probable cause mean, you start by reading what the precedents say the standards are. But the doctrinal statements of the standard are vague in isolation. To really learn the law, I think, you need to read a bunch of Supreme Court cases. After you read a bunch of cases, you get what Karl Llewellyn would call a "situation-sense" for what kind of degree of plausibility the standards require.

This common-sense, totality-of-the-circumstances inquiry doesn't produce a lot of rules on what facts amount to enough suspicion. But both reasonable suspicion and probable cause become pretty predictable when you study Fourth Amendment law because they're based on a kind of feel that you learn to develop when you read the cases. Even though the doctrinal tests can be vague in their words, every police officer and every judge with a criminal docket eventually develops a situation-sense of where the lines are. There are disagreements on occasion, but they're relatively rare.

II. The Unusual Feature of Glover

Glover is unusual because it involves a recurring fact pattern that is based on likelihoods likely outside our typical experience. First, the police see a car and run a license check. Second, the license check reveals that the registered owner has a suspended license. The question is, does the license suspension create reasonable suspicion to stop the car? It's harder to answer that based on our situation-sense than it usually is in Fourth Amendment cases, I think, as it would seem to depend on dynamics that most people don't often encounter.

Consider the questions you'd want to think about. First assume that the case before you is entirely typical and generic. To answer the typical case, you'd probably want to know two things. First, how often do non-owners drive an owner's car? And second, how frequently do people with suspended licenses continue to drive?

That's a start. But then you would want to know if the particular case before you is typical. While we might have answer for the odds in a typical case, any particular case might be quite different. Variation may be common. And that can change the odds.

Consider two examples. First, how often non-owners drive a car may vary based on the city or even the neighborhood where the car is found. Family size is one possible concern. In a town like Fresno where 37% of households include kids, there's a decent chance that teenage drivers might be driving the family car. In a city like San Francisco where only 16% of households have kids, that's less likely. Along the same lines, the kind of car might make a difference. I would guess that a new Porsche 911 is very likely to be driven by its registered owner. On the other hand, a family minivan likely would have more possible drivers.

The same dynamic applies to the rates at which people still drive after their licenses have been suspended. That plausibly varies based on the reasons why a particular jurisdiction suspends licenses. For example, Illinois may suspend your license if you don't pay your parking tickets. In California, on the other hand, they won't. I would imagine that people are particularly unlikely to stop driving when their licenses are suspended for unpaid parking tickets, either because they don't have the money to pay but need to drive or else they don't think unpaid tickets are a big deal. The key point, it seems to me, is that state or local policies can change the likelihood that spotting a car on the road when the owner's license was suspended means that a crime is afoot.

III. Three Conceptual Ways Forward

So how do you try to figure out if there is reasonable suspicion in Glover? In light of the above discussion, I think there are three basic conceptual approaches:

A. Continue to focus on the overall gestalt sense of whether there is reasonable suspicion. Under this approach, you would treat Glover like any other reasonable suspicion case. You'd try to get a rough sense whether in general an owner's suspended license will create reasonable suspicion when the car is spotted on the road. You would recognize some special cases will be different, as you might be in a place where those rough senses aren't justified or dealing with a particular car or time when you might expect a different result. But you'd reach the answer guided by the rough sense, the feel, of the likelihood.

B. Focus on the statistical likelihood of a typical case. Under this approach, you would want to know the typical empirics of how many cars there are per driver and how license suspensions affect driving patterns. You could then estimate a rough likelihood that a typical stop based on a suspended license is going to involve the suspended owner behind the wheel. You'd then want to know the certainty threshold of reasonable suspicion, and you would ask if the empirics support a finding of reasonable suspicion in the general case.

C. Focus on the statistical likelihood of that actual case. Under this approach, you would try to develop a statistical model of that particular stop. You would recognize that the likelihood of reasonable suspicion varies based on local factors, ranging from the jurisdiction to the neighborhood to the car to the time of day. As a result, instead of answering the likelihood of finding the driver behind the wheel in some generic case, you would try to figure out the likelihood of it based on all the kinds of local factors that would be known when the officer makes the stop. You'd then want to know the certainty threshold of reasonable suspicion, and you would ask if the empirics support a finding of reasonable suspicion in that particular case.
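To make approach B concrete, here is a toy calculation in Python. Every number in it is invented purely for illustration; nothing below comes from the briefs or from any study.

    # Toy version of approach B: a generic estimate of how often the registered
    # owner is behind the wheel when a flagged car is spotted on the road.
    # All rates are made up for illustration.

    owner_share_of_trips = 0.70           # share of the car's trips normally driven by its owner
    others_share_of_trips = 0.30          # share driven by family, friends, etc.
    driving_kept_after_suspension = 0.60  # fraction of their usual driving suspended owners keep

    owner_trips = owner_share_of_trips * driving_kept_after_suspension
    other_trips = others_share_of_trips

    p_owner_behind_wheel = owner_trips / (owner_trips + other_trips)
    print(f"Chance the suspended owner is driving: {p_owner_behind_wheel:.0%}")
    # With these made-up inputs, roughly 58%: the sort of ballpark figure a court
    # would then weigh against its sense of the reasonable-suspicion threshold.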

IV. We've Been Here Before: Florida v. Harris

At this point you're probably wondering: Hasn't this problem come up before? And indeed it has. I see a lot of conceptual similarities between Glover and a 2013 probable cause case, Florida v. Harris, 568 U.S. 237 (2013). In Harris, the state court below went for approach C. The U.S. Supreme Court reversed, adopting approach A.

Harris asked whether a positive alert from a drug-sniffing dog was sufficient to create probable cause that drugs were present in the car. As I see it, the dog's alert on the car was sort of like the license check that reveals the car owner's suspended license. It was a single triggering event, with the likelihood probably outside our everyday experience, which could vary in significance. The question in Harris was, how do you know when the alert was sufficient?

In the decision below, the Florida Supreme Court took option C above. That is, the Florida court assessed the statistical likelihood that each particular dog's alert created that particular probable cause. That approach required the government to produce a lot of information about that particular dog to be able to assess the reliability of its alerts. In each case, the Florida Supreme Court ruled, the State was required to

present the training and certification records, an explanation of the meaning of the particular training and certification of that dog, field performance records, and evidence concerning the experience and training of the officer handling the dog, as well as any other objective evidence known to the officer about the dog's reliability in being able to detect the presence of illegal substances within the vehicle.

The U.S. Supreme Court granted cert and unanimously reversed. Instead of the Florida court's approach C, the U.S. Supreme Court took approach A.

According to Justice Kagan, writing for the majority, the Florida court's statistical approach had "flouted" the U.S. Supreme Court's guidance on probable cause that "rejected rigid rules, bright-line tests, and mechanistic inquiries in favor of a more flexible, all-things-considered approach."

The Court's basic thinking was that well-trained drug-sniffing dogs are generally pretty reliable. Based on that, evidence of solid training was usually going to be enough:

If a bona fide organization has certified a dog after testing his reliability in a controlled setting, a court can presume (subject to any conflicting evidence offered) that the dog's alert provides probable cause to search. The same is true, even in the absence of formal certification, if the dog has recently and successfully completed a training program that evaluated his proficiency in locating drugs.

But it wouldn't be enough in every case, as a defendant "must have an opportunity to challenge such evidence of a dog's reliability, whether by cross-examining the testifying officer or by introducing his own fact or expert witnesses."

The defendant, for example, may contest the adequacy of a certification or training program, perhaps asserting that its standards are too lax or its methods faulty. So too, the defendant may examine how the dog (or handler) performed in the assessments made in those settings. Indeed, evidence of the dog's (or handler's) history in the field, although susceptible to the kind of misinterpretation we have discussed, may sometimes be relevant, as the Solicitor General acknowledged at oral argument. See Tr. of Oral Arg. 23-24 ("[T]he defendant can ask the handler, if the handler is on the stand, about field performance, and then the court can give that answer whatever weight is appropriate"). And even assuming a dog is generally reliable, circumstances surrounding a particular alert may undermine the case for probable cause if, say, the officer cued the dog (consciously or not), or if the team was working under unfamiliar conditions.

V. Which Approach for Glover?

Enough wind-up. What should the Court do with Glover? My own view, consistent with the unanimous opinion in Harris, is that Approach A is the right path forward. That is, the Court should get a feel for the general likelihood that the owner is behind the wheel when the police learn that an owner's license is suspended but the car is on the road. No calculations or statistics are needed. As in Harris, it's more a matter of ball-park feel.

And as in Harris, that situation-sense shouldn't be the end of things. Whichever way the Justices see the default, the other side should be able to show that a particular case is special. If the Justices think that an owner-suspension alert normally creates reasonable suspicion, the defense should be able to show specific circumstances when it doesn't. If the Justices think that an owner-suspension alert normally fails to create reasonable suspicion, the government should be allowed to show when it does.

My own sense, I'll add, is that the owner-suspension alert ordinarily creates reasonable suspicion these days. That's largely the case because I think reasonable suspicion is a pretty low threshold. It's more than a hunch, but it's a lot less than probable cause. When the owner of a car has a suspended license but the car is on the road, it's certainly possible that someone else is driving. But my situation-sense is that, these days, it's ordinarily going to be reasonable suspicion. The owner of the car isn't supposed to be driving, but there's the car on the road. It's the kind of thing that a prudent officer would reasonably want to check out to make sure the owner isn't behind the wheel.

VI. The Problem With Fourth Amendment Statistics, and A Response to 17 States and to Professor Crespo

Why not adopt one of the statistical approaches, such as B or C above? The main reason is one I wrote about in this book chapter in 2012, Why Courts Should Not Quantify Probable Cause.

In that chapter, I argued that courts should not try to quantify probable cause, precisely because quantification gets in the way of measuring it accurately. The basic problem is that you don't know what you don't know. When we quantify, we feel like we're being scientific, but the numbers provide a false sense of certainty that blinds us to the intuitions needed to assess probable cause accurately.

I think similar concerns make approaches B or C problematic in Glover. If you come up with a typical likelihood (approach B above), you don't know if a particular case is a typical example; you miss, or don't appreciate, all the reasons to think a particular case is different. And if you come up with a case-specific likelihood (approach C above), you end up working from only a partial and inaccurate view of the relevant criteria and factors, one that misrepresents the odds. It feels scientific, as it has numbers and data. But this is a context in which I think the intuitive approach is more accurate.

This puts me in disagreement with some very interesting amicus briefs, I should add. First, an amicus brief of 17 states adopts approach B. It offers and analyzes empirical evidence of the general odds that an owner-suspension alert will mean that a suspended driver is behind the wheel. It's an interesting brief, and the general odds can help inform intuitions about general cases. But I don't think it can go beyond that.

I also end up in disagreement with Professor Andrew Crespo, who filed a solo amicus brief in Glover in support of the defendant. I think it's fair to say that Professor Crespo favors approach C. In his brief, he argues that the government must provide localized statistical data to establish that the owner-suspension created reasonable suspicion. In particular, he argues that the state should have to provide evidence of "how many times vehicles reportedly registered to unlicensed drivers are actually driven by those individuals when such vehicles are stopped in the relevant geographic area."

I disagree with Professor Crespo for the reasons flagged above. Among the difficulties, what is the level of generality for the "relevant geographic area"? It seems to me that the odds may vary along different geographic criteria, ranging from the state or city (which may determine suspension policies) to the neighborhood (which may be more or less family-friendly) to the specific road (which may be driven by people from different places). The odds also can vary based on non-geographic factors, such as the car (Porsche v. mini-van), the time of day (commuting time vs. night-time), the decade (are we moving to self-driving cars?), or the officer who decided to make the stop.

Even assuming the government can readily collect some kind of data, which is its own problem, it's hard for us to know which criteria matter. And I think that makes it hard to use data about those criteria to say whether a particular stop is one that was justified by reasonable suspicion.

As always, stay tuned. Glover will be argued next Monday, November 4th, 2019.

Link:
Reasonable Suspicion From Driver to Car: A Few Thoughts on Kansas v. Glover - Reason

Posted in Fourth Amendment