High risk of rip currents at all South Jersey beaches on Saturday – Press of Atlantic City

The National Weather Service has issued a high risk of rip currents along all South Jersey beaches for Saturday, and cautions that a moderate rip current risk will probably continue through at least Wednesday.

This is the first high rip current risk of the season, but the risk has been moderate throughout the past week.

A high risk of rip currents means that strong and frequent rip currents are likely to form, and they are often hard to detect. The currents are being fueled by a long-period southeast swell.

Rip currents have led to an above average number of water rescues this week along the South Jersey shore, as well as multiple drownings.

Two Atlantic County teenagers were swept out to sea in a rip current Thursday evening off of Atlantic City and are presumed dead. The effort to recover their bodies continued Saturday morning.

Farther north, off Belmar in Monmouth County, two more swimmers were caught in rip currents, and one of them drowned.

Meteorologists and beach patrols strongly advise beachgoers to swim only in the presence of lifeguards and never to swim at unguarded beaches.

Most South Jersey beach patrols are fully staffed seven days a week beginning this weekend.

See more here:

High risk of rip currents at all South Jersey beaches on Saturday - Press of Atlantic City

Topless Women Can Be Banned From Ocean City Beaches: Maryland AG – Patch.com

OCEAN CITY, MD – Leaders in Ocean City have the legal right to ban topless women from sunbathing on the town's beaches, says the Maryland Attorney General's office. The legal opinion issued Thursday bolsters an emergency measure the city council ...
Ocean City says no to beach nudity – Delmarva Daily Times

See the article here:

Topless Women Can Be Banned From Ocean City Beaches: Maryland AG - Patch.com

The central peaks of Tycho – Blastr

One of my favorite things about living in Colorado is the view of the mountains. Even in late spring, the Rockies nearby are tall enough to have snow on them, and a decent rainfall where I live means more snow on the loftier peaks. If the snow is deep enough, the mountains lose all contrast, appearing only as eye-achingly white figures thrusting up into the sky.

And that may be why I love the above image so much. Truthfully, if I didn't tell you, would you have guessed that those are mountains on the Moon?

But they are. That cluster of peaks sits right in the center of the large crater Tycho, in the Moon's southern hemisphere on the near side. Check out the peaks in context:

Ah, get it now? In this oblique view, taken by the Lunar Reconnaissance Orbiter when it was a mere 59 kilometers above the Moon's surface, Tycho's far crater rim wall can be seen above the mountains, and the near rim below them. The Sun was high in the sky when this was taken, so shadows are short, giving the landscape its luminous quality.

Usually LRO observes the area of the Moon directly below it, but it is sometimes commanded to look off to the side to get a wider view of the moonscape. In this case, it helps get context for these mountains, instead of just seeing them from straight above.

Contrast this with how they appear when the Sun is low:

I know, right? That was also taken by LRO, back in 2011. I love the long shadow of the cluster stretching behind them, and the shadow of the crater rim encroaching on the lower right.

As much as they might look like the mountains I can see out my window, they formed very differently. The Rockies were pushed up by tectonic forces under the Earth's surface, taking millions of years (the current mountains formed near the end of the Cretaceous period).

The Tycho peaks formed in a few minutes. Yes, minutes.

About 100 million years ago, an asteroid something like 5-10 km across slammed into the Moon's surface. The huge energy released was far, far larger than what you'd get out of every nuke on Earth today if you detonated them all simultaneously. The explosion created a colossal shock wave in the Moon's surface, carving out many cubic kilometers of material and creating the crater in just a few minutes. Material ejected from the center flew upward and outward for hundreds of kilometers, and when the plumes collapsed onto the surface they formed bright rays, all pointing back toward the impact center.
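
For a rough sense of scale, here is a back-of-envelope calculation in Python. The impactor's size, density and speed, and the arsenal yield, are assumed round numbers chosen for illustration, not figures from the article.

```python
# Back-of-envelope check of the energy comparison (illustrative assumptions only).
import math

diameter_m = 8_000          # assume ~8 km, mid-range of the 5-10 km estimate
density = 3_000             # kg/m^3, typical rocky asteroid (assumption)
speed = 20_000              # m/s, a common asteroid impact speed (assumption)

mass = density * (4 / 3) * math.pi * (diameter_m / 2) ** 3
kinetic_energy = 0.5 * mass * speed ** 2       # joules

megaton_tnt = 4.184e15                         # joules per megaton of TNT
energy_mt = kinetic_energy / megaton_tnt

# The combined yield of the world's nuclear arsenals is very roughly a few
# thousand megatons (order-of-magnitude assumption).
arsenal_mt = 5_000
print(f"Impact energy: ~{energy_mt:.2e} Mt TNT, "
      f"roughly {energy_mt / arsenal_mt:,.0f} times the assumed global arsenal")
```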

As for the central peaks ... take a glass, fill it with water, and then let a drop fall into it from a height. It will form a temporary crater in the water surface that quickly collapses. That's due to gravity; the displaced water is in a wave that is above the surface of the rest of the water, so it falls down and flows back inward. This creates a wave rushing toward the center from all sides. When it reaches the center, that water all crashes into itself, sending a column of water up into the air.

That's what happens when craters form, too! The rock flows outward after the impact, but then once its momentum dies it starts to flow back toward the impact point. That shrinking circle of material meets at the center, then flies upward. It solidifies like that, forming those mountains.

Mind you, the peaks in Tycho are 2000 meters high! That must have been a helluva flow. Incredible.

Over the years, I've seen Tycho through telescopes hundreds of times. It's best at full Moon when the rays are bright, and it's beautiful. But its formation was an event so colossal that had that rock hit Earth instead, the dinosaurs would've been wiped out much earlier.

By the fortunes of trajectory and velocity they got an extra 35 million years to rule the Earth, but then were wiped out by a similar event anyway. We know no asteroid that big is headed our way anytime soon, but it doesn't take one 10 km across to give us a bad day. Hopefully, when we do see one coming our way, we'll be able to do better than just watch it come in.

See more here:

The central peaks of Tycho - Blastr

A discussion about AI’s conflicts and challenges – TechCrunch

Thirty-five years ago, having a PhD in computer vision was considered the height of unfashion, as artificial intelligence languished at the bottom of the trough of disillusionment.

Back then it could take a day for a computer vision algorithm to process a single image. How times change.

"The competition for talent at the moment is absolutely ferocious," agrees Professor Andrew Blake, whose computer vision PhD was obtained in 1983, but who is now, among other things, a scientific advisor to UK-based autonomous vehicle software startup FiveAI, which is aiming to trial driverless cars on London's roads in 2019.

Blake founded Microsoft's computer vision group, and was managing director of Microsoft Research, Cambridge, where he was involved in the development of the Kinect sensor, something of an augury of computer vision's rising star (even if Kinect itself did not achieve the kind of consumer success Microsoft might have hoped for).

He's now research director at the Alan Turing Institute in the UK, which aims to support data science research (which of course means machine learning and AI), including probing the ethics and societal implications of AI and big data.

So how can a startup like FiveAI hope to compete with tech giants like Uber and Google, which are also of course working on autonomous vehicle projects, in this fierce fight for AI expertise?

And, thinking of society as a whole, is it a risk or an opportunity that such powerful tech giants are throwing everything theyve got at trying to make AI breakthroughs? Might the AI agenda not be hijacked, and progress in the field monopolized, by a set of very specific commercial agendas?

"I feel the ecosystem is actually quite vibrant," argues Blake, though his opinion is of course tempered by the fact he was himself a pioneering researcher working under the umbrella of a tech giant for many years. "You've got a lot of talented people in universities and working in an open kind of a way because academics are quite a principled, if not even a cussed, bunch."

Blake says he considered doing a startup himself, back in 1999, but decided that working for Microsoft, where he could focus on invention and not have to worry about the business side of things, was a better fit. Prior to joining Microsoft his research work included building robots with vision systems that could react in real time, a novelty in the mid-90s.

"People want to do it all sorts of different ways. Some people want to go to a big company. Some people want to do a startup. Some people want to stay in the university because they love the productivity of having a group of students and postdocs," he says. "It's very exciting. And the freedom of working in universities is still a very big draw for people. So I don't think that part of the ecosystem is going away."

Yet he concedes the competition for AI talent is now at fever pitch, pointing, for example, to startup Geometric Intelligence, founded by a group of academics and acquired by Uber at the end of 2016 after operating for only about a year.

"I think it was quite a big undisclosed sum," he says of the acquisition price for the startup. "It just goes to show how hot this area of invention is."

"People get together, they have some great ideas. In that case instead of writing a research paper about it, they decided to turn it into intellectual property. I guess they must have filed patents and so on, and then Uber looks at that and thinks, oh yes, we really need a bit of that, and Geometric Intelligence has now become the AI department of Uber."

Blake will not volunteer a view on whether he thinks it's a good thing for society that AI academic excellence is being so rapidly tractor-beamed into vast, commercial motherships. But he does have an anecdote that illustrates how conflicted the field has become as a result of a handful of tech giants competing so fiercely to dominate developments.

"I was recently trying to find someone to come and consult for a big company; the big company wants to know about AI, and it wants to find a consultant," he tells TechCrunch. "They wanted somebody quite senior and I wanted to find somebody who didn't have too much of a competing company allegiance. And, you know what, there really wasn't anybody. I just could not find anybody who didn't have some involvement."

"They might still be a professor in a university but they're consulting for this company or they're part time at that company. Everybody is involved. It is very exciting but the competition is ferocious."

"The government at the moment is talking a lot about AI in the context of the industrial strategy, and understanding that it's a key technology for productivity of the nation, so a very important part of that is education and training. How are we going to create more excellence?" he adds.

The idea for the Turing Institute, which was set up in 2015 by five UK universities, is to play a role here, says Blake, by training PhD students, and via its clutch of research fellows who, the hope is, will help form the next generation of academics powering new AI breakthroughs.

"The big breakthrough over the last ten years has been deep learning but I think we've done that now," he argues. "People are of course writing more papers than ever about it. But it's entering a more mature phase, at least in terms of using deep learning: we can absolutely do it. But in terms of understanding deep learning, the fundamental mathematics of it, that's another matter."

"But the hunger, the appetite of companies and universities for trained talent is absolutely prodigious at the moment and I am sure we are going to need to do more," he adds, on education and expertise.

Returning to the question of tech giants dominating AI research, he points out that many of these companies are making public toolkits available, as Google, Amazon and Microsoft have done, to help drive activity across a wider AI ecosystem.

Meanwhile academic open source efforts are also making important contributions to the ecosystem, such as Berkeley's deep learning framework, Caffe. Blake's view therefore is that a few talented individuals can still make waves despite not wielding the vast resources of a Google, an Uber or a Facebook.

"Often it's just one or two people; when you get just a couple of people doing the right thing it's very agile," he says. "Some of the biggest advances in computer science have come that way. Not necessarily the work of a group of a hundred people. But just a couple of people doing the right thing. We've seen plenty of that."

"Running a big team is complex," he adds. "Sometimes, when you really want to cut through and make a breakthrough it comes from a smaller group of people."

That said, he agrees that access to data, or, more specifically, "the data that relates to your problem," as he qualifies it, is vital for building AI algorithms. "It's certainly true that the big advance over the last ten years has depended on the availability of data, often at Internet-scale," he says. "So we've learnt, or we've understood, how to build algorithms that learn with big data."

And tech giants are naturally positioned to feed off of their own user-generated data engines, giving them a built-in reservoir for training and honing AI models, arguably locking in an advantage over smaller players that don't have, for example in Facebook's case, billions of users generating data-sets on a daily basis.

Although even Google, via its AI division DeepMind, has felt the need to acquire certain high value data-sets by forging partnerships with third party institutions such as the UK's National Health Service, where DeepMind Health has, since late 2015, been accessing millions of people's medical data, which the publicly funded NHS is custodian of, in an attempt to build AIs that have diagnostic healthcare benefits.

Even then, though, the vast resources and high public profile of Google appear to have given the company a leg up. A smaller entity approaching the NHS with a request for access to valuable (and highly sensitive) public sector healthcare data might well have been rebuffed. And it would certainly have been less likely to have been actively invited in, as DeepMind says it was. So when it's Google-DeepMind offering free help to co-design a healthcare app, or offering its processing resources and expertise in exchange for access to data, well, it's demonstrably a different story.

Blake declines to answer when asked whether he thinks DeepMind should have released the names of the people on its AI ethics board. (Next question!) Nor will he confirm (nor deny) if he is one of the people sitting on this anonymous board. (For more on his thoughts on AI and ethics see the additional portions from the interview at the end of this post.)

But he does not immediately subscribe to the view that AI innovations must necessarily come at the cost of individual privacy, as some have suggested by, for example, arguing that Apple is fatally disadvantaged in the AI race because it will not data-mine and profile its users in the no-holds-barred fashion that a Google or a Facebook does (Apple has rather opted to perform local data processing and apply obfuscation techniques, such as differential privacy, to offer its users AI smarts that don't require they hand over all their information).
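
To make the obfuscation idea concrete, here is a minimal sketch of local differential privacy via randomized response, the classic technique behind this kind of on-device noise. It illustrates the general principle only; it is not Apple's actual implementation, and the parameters are arbitrary.

```python
# Randomized response: each user adds noise locally, yet the aggregate
# statistic can still be recovered. Illustrative sketch only.
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise flip a fair coin."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

truth_rate = 0.30                 # hypothetical true population rate
n = 100_000
reports = [randomized_response(random.random() < truth_rate) for _ in range(n)]
observed = sum(reports) / n

# E[observed] = p_truth * true_rate + (1 - p_truth) * 0.5, so invert the noise:
estimated = (observed - (1 - 0.75) * 0.5) / 0.75
print(f"observed rate {observed:.3f}, estimated true rate {estimated:.3f}")
```

Each individual report is plausibly deniable, but the population-level estimate stays usable, which is the trade-off the argument above turns on.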

Nor does Blake believe AI's black boxes are fundamentally unauditable, a key point given that algorithmic accountability will surely be necessary to ensure this very powerful technology's societal impacts can be properly understood and regulated, where necessary, to avoid bias being baked in. Rather, he says, research in the area of AI ethics is still in a relatively early phase.

"There's been an absolute surge of algorithms, experimental algorithms, and papers about algorithms, just in the last year or two, about understanding how you build ethical principles like transparency and fairness and respect for privacy into machine learning algorithms, and the jury is not yet out. I think people have been thinking about it for a relatively short period of time because it's arisen in the general consciousness that this is going to be a key thing. And so the work is ongoing. But there's a great sense of urgency about it because people realize that it's absolutely critical. So we'll have to see how that evolves."

On the Apple point specifically he responds with a "no, I don't think so" to the idea that AI innovation and privacy might be mutually exclusive.

"There will be good technological solutions," he continues. "We've just got to work hard on it and think hard about it and I'm confident that the discipline of AI, looked at broadly, so that's machine learning plus other areas of computer science like differential privacy, you can see it's hot and people are really working hard on this. We don't have all the answers yet but I'm pretty confident we're going to get good answers."

Of course not all data inputs are equal in another way when it comes to AI. And Blake says his academic interest is especially piqued by the notion of building machine learning systems that don't need lots of help during the learning process in order to be able to extract useful understandings from data, but rather learn unsupervised.

"One of the things that fascinates me is that humans learn without big data. At least the story's not so simple," he says, pointing out that toddlers learn what's going on in the world around them without constantly being supplied with the names of the things they are seeing.

A child might be told a cup is a cup a few times, but not that every cup they ever encounter is a cup, he notes. And if machines could learn from raw data in a similarly lean way it would clearly be transformative for the field of AI. Blake sees cracking unsupervised learning as the next big challenge for AI researchers to grapple with.

"We now have to distinguish between two kinds of data: there's raw data and labelled data. [Labelled] data comes at a high price. Whereas the unlabelled data, which is just your experience streaming in through your eyes as you run through the world, and somehow you still benefit from that. So there's this very interesting kind of partnership between the labelled data, which is not in great supply and is very expensive to get, and the unlabelled data, which is copious and streaming in all the time."

"And so this is something which I think is going to be the big challenge for AI and machine learning in the next decade: how do we make the best use of a very limited supply of expensively labelled data?"

"I think what is going to be one of the major sources of excitement over the next five to ten years is what are the most powerful methods for accessing unlabelled data and benefiting from that, and understanding that labelled data is in very short supply and privileging the labelled data. How are we going to do that? How are we going to get the algorithms that flourish in that environment?"
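
One simple way to picture the labelled/unlabelled partnership Blake describes is self-training, or pseudo-labelling: a model trained on a handful of expensive labels assigns provisional labels to the copious raw data and retrains on both. The sketch below is a minimal illustration on synthetic data; it is one basic option among many, not a method attributed to Blake or FiveAI.

```python
# Minimal self-training (pseudo-labelling) sketch: few labelled examples,
# many unlabelled ones, synthetic data throughout.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
labelled = rng.choice(len(X), size=50, replace=False)     # expensive labels: few
unlabelled = np.setdiff1d(np.arange(len(X)), labelled)    # raw data: copious

X_lab, y_lab = X[labelled], y[labelled]
X_unlab = X[unlabelled]

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

for _ in range(5):                                        # a few self-training rounds
    probs = model.predict_proba(X_unlab)
    confident = probs.max(axis=1) > 0.95                  # only trust confident pseudo-labels
    if not confident.any():
        break
    X_train = np.vstack([X_lab, X_unlab[confident]])
    y_train = np.concatenate([y_lab, probs[confident].argmax(axis=1)])
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy on the full data set:", model.score(X, y))
```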

Autonomous cars would be one promising AI-powered technology that obviously stands to benefit from a breakthrough on this front, given that human-driven cars are already being equipped with cameras, and the resulting data streams from cars being driven could be used to train vehicles to self-drive, if only the machines could learn from the unlabelled data.

FiveAI's website suggests this goal is also on its mind, with the startup saying it's using "stronger AI" to solve the challenge of autonomous vehicles safely navigating complex urban environments without needing highly accurate, dense 3D prior maps and localization. That challenge is billed as the top level in autonomy: Level 5.

"I'm personally fascinated with how different the way humans learn is from the way, at the moment, our machines are learning," adds Blake. "Humans are not learning all the time from big data. They're able to learn from amazingly small amounts of data."

He cites research by MIT's Josh Tenenbaum showing how humans are able to learn new objects after just one or two exposures. "What are we doing?" he wonders. "This is a fascinating challenge. And we really, at the moment, don't know the answer. I think there's going to be a big race on, from various research groups around the world, to see and to understand how this is being done."

He speculates that the answer to pushing forward might lie in looking back into the history of AI at methods such as reasoning with probabilities or logic, previously applied unsuccessfully, given they did not result in the breakthrough represented by deep learning, but which are perhaps worth revisiting to try to write the next chapter.

"The earlier pioneers tried to do AI using logic and it absolutely didn't work, for a whole lot of reasons," he says. "But one property that logic seems to have, and perhaps we can somehow learn from this, is this idea of being incredibly efficient, incredibly respectful, if you like, of how costly the data is to acquire. And so making the very most of even one piece of data."

"One of the properties of learning with logic is that the learning can happen very, very quickly, in the sense of only needing one or two examples."

It's a nice idea that the hyper-fashionable research field of AI, as it now is, where so many futuristic bets are being placed, might need to look backwards, to earlier apparent dead-ends, to achieve its next big breakthrough.

Though, given Blake describes the success of deep networks as "a surprise to pretty much the whole field" (i.e. that the technology has worked as well as it has), it's clear that making predictions about the forward march of AI is a tricky, possibly counterintuitive business.

As our interview winds up I hazard one final thought, asking whether, after more than three decades of research in artificial intelligence, Blake has come up with his own definition of human intelligence.

"Oh! That's much too hard a question for the final question of the interview," he says, punctuating this abrupt conclusion with a laugh.

On why deep learning is such a black box: "I suppose it's sort of like an empirical finding. If you think about physics, the way experimental physics goes and theoretical physics, very often some discovery will be made in experimental physics and that sort of sets off the theoretical physics for years trying to understand what was actually happening. But the way you first got there was with this experimental observation. Or maybe something surprising. And I think of deep networks as something like that: it's a surprise to pretty much the whole field that it has worked as well as it has. So that's the experimental finding. And the actual object itself, if you like, is quite complex. Because you've got all of these layers [processing the input] and that happens maybe ten times. And by the time you've put the data through all of those transformations it's quite hard to say what the composite effect is. And getting a mathematical handle on all of that sequence of operations. A bit like cooking, I suppose."
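
As a toy illustration of why the composite effect is hard to pin down: each layer below is just an affine map plus a simple nonlinearity, yet stacking ten of them produces a function that is easy to evaluate numerically and awkward to describe analytically. The weights here are random rather than learned; it is a sketch of the structure Blake describes, nothing more.

```python
# Ten stacked layers: trivial individually, opaque in composition.
import numpy as np

rng = np.random.default_rng(1)
layers = [(rng.standard_normal((16, 16)) * 0.5, rng.standard_normal(16) * 0.1)
          for _ in range(10)]

def forward(x: np.ndarray) -> np.ndarray:
    for weights, bias in layers:
        x = np.maximum(weights @ x + bias, 0.0)   # affine map followed by ReLU
    return x

x = rng.standard_normal(16)
print(forward(x)[:4])   # easy to compute, hard to reason about in closed form
```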

On designing dedicated hardware for processing AI: "Intel builds the whole processor and also builds the equipment you need for an entire data center, so that's the individual processors and the electronic boards that they sit on and all the wiring that connects these processors up inside the data center. The wiring actually is more than just a bit of wire; they call it an interconnect. And it's a bit of smart electronics itself. So Intel has got its hands on the whole system. At the Turing Institute we have a collaboration with Intel, and with them we are asking exactly that question: if you really have got freedom to design the entire contents of the data center, how can you build the data center which is best for data science? That really means, to a large extent, best for machine learning. The supporting hardware for machine learning is definitely going to be a key thing."

On the challenges ahead for autonomous vehicles: "One of the big challenges in autonomous vehicles is it's built on machine learning technologies which are, shall we say, quite reliable. If you read machine learning papers, an individual technology will often be right 99% of the time. That's pretty spectacular for most machine learning technologies. But 99% reliability is not going to be nearly enough for a safety critical technology like autonomous cars. So I think one of the very interesting things is how you combine technologies to get something which, in the aggregate, at the level of the system rather than the level of an individual algorithm, is delivering the kind of very high reliability that of course we're going to demand from our autonomous transport. Safety of course is a key consideration. All of the engineering we do and the research we do is going to be built around the principle of safety, rather than safety as an afterthought or a bolt-on; it's got to be in there right at the beginning."
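
A quick worked example of that aggregation point, under the strong (and in practice unrealistic) assumption that the individual components fail independently:

```python
# If independent components each catch a hazard 99% of the time, the aggregate
# misses only when all of them miss. Independence is the big assumption here;
# correlated failure modes make the real picture far less rosy.
p_single = 0.99
for k in (1, 2, 3):
    p_aggregate = 1 - (1 - p_single) ** k
    print(f"{k} independent component(s): {p_aggregate:.6f} reliability")
# 1 -> 0.990000, 2 -> 0.999900, 3 -> 0.999999
```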

On the need to bake ethics into AI engineering: "This is something the whole field has become very well tuned to in the last couple of years, and there are numerous studies going on. In the Turing Institute we've got a substantial ethics program where on the one hand we've got people from disciplines like philosophy and the law thinking about how ethics of algorithms would work in practice, and then we've also got scientists who are reading those messages and asking themselves how do we have to design the algorithms differently if we want them to embody ethical principles. So I think for autonomous driving one of the key ethical principles is likely to be transparency, so when something goes wrong you want to know why it went wrong. And that's not only for accountability purposes. Even for practical engineering purposes, if you're designing an engineering system and it doesn't perform up to scratch you need to understand which of the many components is not pulling its weight, where do we need to focus the attention. So it's good from the engineering point of view, and it's good from the public accountability and understanding point of view. And of course we want the public to feel as far as possible comfortable with these technologies. Public trust is going to be a key element. We've had examples in the past of technologies that scientists have thought about that didn't get public acceptability immediately; GM crops was one, the communication with the public wasn't sufficient in the early days to get their confidence, and so we want to learn from those kinds of things. I think a lot of people are paying attention to ethics. It's going to be important."

Original post:

A discussion about AI's conflicts and challenges - TechCrunch

Three barriers to artificial intelligence adoption – ModernMedicine

Artificial intelligence (AI) will play a major role in healthcare digital transformation, according to new research.

The study, "Human Amplification in the Enterprise," surveyed more than 1,000 business leaders from U.S. organizations with more than 1,000 employees and $500 million or more in annual revenue, across a range of sectors.

Survey respondents from the healthcare sector indicated that the following AI-supported activities will play a significant role in their transformations: machine learning (77%), robotic automation (61%), institutionalization of enterprise knowledge using AI (59%), cognitive AI-led processes or tasks (50%) and automated predictive analytics (47%).

The research also found that almost half of the respondents in healthcare indicate that their organizations' priority for automation initiatives is to automate processes.

"This suggests that many processes in the healthcare sector are still manually driven and produce a high volume of errors as a result," says Sanjay Dalwani, vice president and head of hospital and healthcare at Infosys.

The survey found that 73% of respondents want AI to process complete structured and unstructured data and to automate insights-led decisions. It also found that 72% want AI to provide human-like recommendations for automated customer support/advice.

More widely, healthcare sector respondents shared that the top three digital transformation goals of their organizations are to build an innovation culture (65%), build a mobile enterprise (63%) and become more agile and customer-centric (58%).

"The findings underscore that healthcare organizations are well on their way with starting to work alongside AI to selectively use it to inform and improve patient care," Dalwani says. "However, in this process, it's pertinent that the industry establishes ethical standards as well as metrics to assess the performance of AI systems."

The study also indicates that as automation becomes more widely adopted in healthcare, employees will be retrained for higher-value work, according to Dalwani. "Healthcare organizations can benefit from redirecting a section of this talent to managing and ensuring ethical use of AI," he says.

Even though the majority of enterprises in the healthcare and life sciences sector are undergoing digital transformation, few have fully accomplished their goals. This is due to three primary reasons, according to Dalwani:

Lack of time (64%)

Lack of collaboration amongst teams (63%)

Lack of data-led insights on demand (61%)

Furthermore, when healthcare IT professionals were asked about the challenges of adopting more AI-supported activities as a component of their digital transformation initiatives, 78% of respondents indicated a lack of financial resources, 78% cited a lack of in-house knowledge and skills around the technology and 66% said there's a lack of clarity regarding the value proposition of AI, according to the study.

"This suggests that the healthcare IT sector still has a long way to go in terms of AI buy-in," Dalwani says. "Until more senior-level IT decision-makers are bought into the benefits of bringing AI to healthcare, teams won't have access to the proper resources to support full-scale implementations."

More here:

Three barriers to artificial intelligence adoption - ModernMedicine

Amazon just acquired a training ground for retail artificial intelligence research – GeekWire

(Whole Foods photo)

Amazon didn't acquire an iconic grocery store brand just for the quinoa: Whole Foods operates hundreds of retail data mines, and Amazon just married a world-class artificial intelligence team with one of the best sources of in-store consumer shopping data in the U.S.

There are lots of reasons, to be sure, why Amazon would want to spend $13.7 billion on Whole Foods. But the quintessential online retailer has been trying to establish a physical store presence for a few years now, and with one big check, it will now control more than 400 sources of prime data on consumer behavior.

Big-box grocery stores are easy sources of data on human purchasing behavior. Any modern retail outlet monitors activity such as customer flow through the aisles, brand affinity, and, of course, the customer loyalty cards that do as good a job of profiling a person as anything. After all, you are what you eat.

Obviously, Amazon already collects a ton of data on consumer purchasing behavior, but it's relatively new to groceries and brick-and-mortar retail in general. Whole Foods instantly gives Amazon a reliable source of the purchasing habits of well-off Americans, and that data can be used to train artificial intelligence models that will allow retailers to better predict demand and someday automate much of the labor involved in grocery retailing, no matter what the company said Friday about layoffs.

As Amazon's Swami Sivasubramanian explained at our GeekWire Cloud Tech Summit last week, Amazon has thousands of engineers focused on AI, and a lot of that work goes toward making Amazon's fulfillment centers more efficient and toward giving Amazon Web Services customers access to cutting-edge artificial intelligence models they'd never be able to build on their own.

Amazon just acquired a company that can improve its AI models on both of those counts. The logistics of shipping fresh food around the country are not easy, and that generates a ton of specialized data that Amazon can use to improve its own distribution strategies as well as build a cloud retail AI product for AWS customers.

Investing in big data products just isn't enough anymore for retailers. Artificial intelligence models are going to dictate how products are sold over the next decade, and there are only a few companies with the expertise and data sets necessary to build those models at scale.

A few years down the road, if you're an established but aging grocery brand, say Safeway or Albertsons or Publix (try the subs), you'll either watch Amazon and Whole Foods eat your lunch with improved efficiency and incredible reach, or you'll become an AWS customer, because you'll need the retail AI products that could emerge from this deal to compete.

View post:

Amazon just acquired a training ground for retail artificial intelligence research - GeekWire

Facebook to Use AI to Block ‘Terrorist Content’ – Government Technology

(TNS)-- Amid growing pressure from governments, Facebook says it has stepped up its efforts to address the spread of "terrorist propaganda" on its service by using artificial intelligence (AI).

In a blog post on Thursday, the California-based company announced the introduction of AI, including image matching and language understanding, working in conjunction with its existing human reviewers to better identify and remove content "quickly".

"We know we can do better at using technology - and specifically artificial intelligence - to stop the spread of terrorist content on Facebook," Monika Bickert, Facebook's director of global policy management, and Brian Fishman, the company's counterterrorism policy manager, said in the post.

"Although our use of AI against terrorism is fairly recent, it's already changing the ways we keep potential terrorist propaganda and accounts off Facebook.

"We want Facebook to be a hostile place for terrorists."

Such technology is already used to block child pornography from Facebook and other services such as YouTube, but Facebook had been reluctant about applying it to other potentially less clear-cut uses.

In most cases, the company only removed objectionable material if users first reported it.

Facebook and other internet companies have faced growing pressure from governments to identify and prevent the spread of "terrorist propaganda" and recruiting messages on their services.

Government officials have at times threatened to fine Facebook, which has nearly two billion users, and strip the broad legal protections it enjoys against liability for the content posted by its users.

Efforts welcomed

Facebook's announcement did not specifically mention this pressure, but it did acknowledge that "in the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online".

It said Facebook wants "to answer those questions head on" and that it agrees "with those who say that social media should not be a place where terrorists have a voice".

The UK interior ministry welcomed Facebook's efforts, but said technology companies needed to go further.

"This includes the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place," a ministry spokesman said on Thursday.

Among the AI techniques being used by Facebook is image matching, which compares photos and videos people upload to Facebook to "known" terrorism images or video.

Matches generally mean either that Facebook had previously removed that material, or that it had ended up in a database of such images that the company shares with YouTube, Twitter and Microsoft.
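
A highly simplified sketch of that matching step: hash an uploaded file and check it against a shared set of hashes of previously removed material. Production systems rely on perceptual hashes that survive re-encoding and cropping rather than the exact cryptographic hash used here, and the real database details are not public; this only shows the general shape of the idea, with placeholder entries.

```python
# Simplified exact-hash matching against a shared database of known content.
# Real systems use perceptual hashing; SHA-256 only matches byte-identical files.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical shared database of hashes of previously removed material.
known_hashes = {
    "placeholder_hash_of_previously_removed_item",
}

def is_known(path: str) -> bool:
    return sha256_of(path) in known_hashes
```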

Facebook is also developing "text-based signals" from previously removed posts that praised or supported terrorist organisations.

It will feed those signals into a machine-learning system that, over time, will learn how to detect similar posts.
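
In spirit, that is a supervised text classifier trained on previously removed posts versus ordinary ones. The sketch below uses a TF-IDF bag-of-words model over toy placeholder strings; Facebook has not disclosed its actual models or features, so treat this purely as an illustration of the approach.

```python
# Toy text classifier in the spirit of "text-based signals": learn from posts
# reviewers previously removed vs. ordinary posts, then score new text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

removed_posts = ["placeholder text of a removed post", "another removed example"]
ordinary_posts = ["holiday photos with family", "great recipe for dinner tonight"]

texts = removed_posts + ordinary_posts
labels = [1] * len(removed_posts) + [0] * len(ordinary_posts)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new post; high scores would be routed to human reviewers.
print(model.predict_proba(["yet another placeholder post"])[:, 1])
```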

In their blog post, Bickert and Fishman said that when Facebook receives reports of potential "terrorism posts", it reviews those reports urgently.

In addition, it says that in the rare cases when it uncovers evidence of imminent harm, it promptly informs authorities.

The company admitted that "AI can't catch everything" and technology is "not yet as good as people when it comes to understanding" what constitutes content that should be removed.

To address these shortcomings, Facebook said it continues to use "human expertise" to review reports and determine their context.

The company had previously announced it was hiring 3,000 additional people to review content that was reported by users.

Facebook also said it will continue working with other tech companies, as well as government and intergovernmental agencies to combat the spread of "terrorism" online.

© 2017 Al Jazeera (Doha, Qatar). Distributed by Tribune Content Agency, LLC.

Read more:

Facebook to Use AI to Block 'Terrorist Content' - Government Technology

Facebook Will Use Artificial Intelligence to Find Extremist Posts – New York Times

Artificial intelligence will largely be used in conjunction with human moderators who review content on a case-by-case basis. But developers hope its use will be expanded over time, said Monika Bickert, the head of global policy management at Facebook.
Facebook using artificial intelligence to combat terrorist propaganda – Telegraph.co.uk
Facebook taps artificial intelligence in new push to block terrorist propaganda – USA TODAY
Facebook wants to use artificial intelligence to block terrorists online – Washington Post

Continued here:

Facebook Will Use Artificial Intelligence to Find Extremist Posts - New York Times

Alaska aerospace company wants to launch more satellites – Lexington Herald Leader

An Alaska aerospace company wants to increase its number of launches to at least two or three per year. Representatives from Alaska Aerospace Corporation spoke about their plans earlier this week at a town hall meeting in Kodiak, The Kodiak Daily ...

The rest is here:

Alaska aerospace company wants to launch more satellites - Lexington Herald Leader

Boeing reshuffled defense with eye towards increased aerospace presence, CEO says – SpaceNews

An Ariane 5 rocket carrying the Boeing-built Intelsat-33 and SSL-built Intelsat-36 satellites lifts off Aug. 24, 2016. Credit: Arianespace

WASHINGTON – The CEO of Boeing Defense, Space, and Security said that the goal of reshuffling the company's upper management is to streamline operations and work more closely with the U.S. government and other customers.

Leanne Caret, who took over as CEO in March 2016, said the move follows a banner year in 2016 and "literally was about taking out a layer of executive management, which is what we've done, flattening the organization, and elevating some programs so that they're direct reports to me."

Boeing announced June 13 that it was eliminating nearly 50 executive positions and would break up its two current units, Boeing Military Aircraft and Network & Space Systems, into four smaller groups. The Space and Missile Systems unit, to be led by Jim Chilton, will include the company's current space business, such as satellite manufacturing, ISS operations and the company's stake in United Launch Alliance.

Speaking at a June 14 event hosted by Defense One, Caret said she wanted to conduct the reorg near the start of her tenure.

"The longer you stay in any position, your ability to make those changes only becomes harder, not easier, because it so much becomes a part of who you are," she said.

The executive reshuffling isn't the first change Caret made since she took over the position from Christopher Chadwick. She moved Boeing's defense business headquarters from St. Louis, Missouri, to Arlington, Virginia, in order to be closer to the Pentagon, NASA and other Washington stakeholders.

"To listen to customers you just can't be available when you fly in and fly back out when it's convenient to you," she said. "You need to be a part of the community."

Caret said company decisions need to be conducted in a thoughtful and pragmatic manner as it seeks expansion of its production lines.

Revenue for the company's space sector, formerly known as Network and Space Systems, was down in 2016 relative to the previous year: $7.04 billion versus $7.75 billion in 2015, according to the company's fourth-quarter reports. (For comparison, Boeing's largest competitor, Lockheed Martin Space Systems, had revenue of $9.41 billion in 2016 and $9.1 billion in 2015.)

While Boeing is a big player in the commercial and civil space sectors, one of its bigger space-related defense offerings is the Wideband Global Satcom constellation, which is set to end with the launch of the tenth satellite in 2019.

The Air Force is currently conducting an analysis of alternatives (AoA) looking at the next course of action, be it purchasing additional WGS satellites or starting development of a follow-on capability. The study was originally set to conclude this December, though that end date is likely to wind up pushed into 2018.

Caret did not say what she hopes the result of the AoA will be, instead explaining that Boeing is working on finding a solution to best fit the Defense Department's needs.

"From a defense perspective, it's continuing to work with the customer on what they want," she said.

In a statement emailed to SpaceNews, Enrico Attanasio, the executive director of the defense and civil programs at Boeing Satellite Systems, said that Boeing is a major player in commercial space, government space and satellite services, and whatever the outcome of the AoA, Boeing has the ability to assist and to deliver the architecture that will enable those missions.

The company also has its eye on taking over the contract for the next generation of the Global Positioning System: GPS 3.

Lockheed Martin is under contract to supply the first 10 satellites in the constellation, but development delays with the satellites and related ground control system have led the Air Force to indicate it will recompete the next planned 22. Boeing already runs part of the existing constellation, mainly the GPS 2A and 2F satellites.

Read the original post:

Boeing reshuffled defense with eye towards increased aerospace presence, CEO says - SpaceNews

The Fourth Industrial Revolution Is About Empowering People, Not The Rise Of The Machines – Forbes

Even the creators of artificial chess-playing machines acknowledge that the best chess player is actually a team of both human and machine. ... Railroad locomotives are powered by massive, highly complex electrical engines that cost millions of dollars.

See more here:

The Fourth Industrial Revolution Is About Empowering People, Not The Rise Of The Machines - Forbes

Pressing Tech Issue: Enterprise Software Vs. Cloud Computing? – Credit Union Times

One of Robert Frost's most popular poems contains more than a few parallels with what insurance technology executives are grappling with as they look at systems in the cloud compared with systems housed within their own organizations. Consider this classic verse:

"Two roads diverged in a wood, and I...

I took the one less traveled by,

And that has made all the difference."

Certainly there are many who are opting for the less-traveled SaaS road, and others who prefer the other road commonly called enterprise.

Within the insurance industry, cloud technologies have been successfully deployed in ancillary areas of the organization such as Human Resources, Accounting, e-mail, and other non-core areas of the business. Typically, those core applications such as policy administration, agent commissions, rating, billing, claims, and agent and customer portals have been firmly entrenched in enterprise or on-premises applications.

However, with the success of cloud-based software in those non-mission-critical areas, SaaS systems are becoming the favored choice for deployment in certain core insurance areas. But for those core tasks that are truly mission critical, have deep integration requirements and, importantly, are processor-intensive, IT executives are taking a go-slow approach before they commit to putting those systems or business processes into the cloud.

Why the concern? The short answer is that enterprise software is "owned" by the insurance carrier, and the risk of a data breach of sensitive information is relatively low when the application is housed behind the insurance company's firewall. Insurance companies are huge repositories of customers' personal information. And that information is entrusted to the insurance company with the expectation that it will remain private and confidential.

In short, enterprise software deployments merit a certain kind of security that is hard to duplicate in a cloud-based system.

Another aspect to consider is processing horsepower. Saving and retrieving data such as we see in popular CRM systems like Salesforce.com is not particularly processor intensive. Tasks with intensive calculation requirements, such as commissions and bonus calculation, are another matter. These systems can often have more than a hundred integration points both up- and downstream, and managing them in the cloud is a major concern to many insurers.

According to recent research from Novarica, the key driver for carriers adopting SaaS systems was "the speed of deployment and the ability to sunset current applications." (Photo: iStock)

Among the common drivers for carriers to adopt SaaS systems, according to Novarica, was standardization paired with release management, which reduces support costs and ultimately lowers the cost of ownership. However, that standardization, call it efficiency, is largely a trade-off between having key business processes undifferentiated from competitors that are on that same SaaS application and having a custom-designed application that preserves competitive differentiation.

Large companies see being able to differentiate from competitors as a key advantage of the on-premises model. Additionally, large companies have very large IT staffs that are capable of implementing and managing new applications.

Cost is clearly another factor in making SaaS a viable choice for many core insurance applications. For mid-tier and smaller insurance organizations, the advantages of SaaS are clear:

No infrastructure costs;

Software is on a subscription model that includes maintenance and upgrades; and

Provisioning is very easy.

With SaaS, a smaller insurance company can readily compete with the 'big guys.' While some simple back-of-the-napkin analysis can show advantages for SaaS, the analysis is really an apples-to-oranges comparison. A more detailed look at cost and a few other items show that cost may not be the main concern.

You may not appreciate the importance of some of the items buried in the fine print of SaaS solution provider contracts. Items such as transaction volume, the number of processes allowed per month, data storage fees and data transformation costs can result in significant additional fees levied by the vendor that must be met for subscription compliance.

If you don't understand and carefully quantify each item in the SaaS agreement, fees can easily double or triple, but you might not realize the impact until the solution is implemented and in full production and you receive your first over-usage invoice. (Photo: iStock)
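
One way to avoid that surprise is to model the fee schedule explicitly before signing. The sketch below uses entirely hypothetical contract terms and usage figures; the point is the exercise, so substitute the numbers from your own vendor agreement.

```python
# Hypothetical SaaS over-usage model: every figure here is made up for illustration.
def monthly_bill(transactions: int, storage_gb: float) -> float:
    base_subscription = 10_000.00                     # flat monthly fee
    included_txns, txn_overage_fee = 500_000, 0.02    # per extra transaction
    included_gb, gb_overage_fee = 1_000, 0.50         # per extra GB stored

    bill = base_subscription
    bill += max(0, transactions - included_txns) * txn_overage_fee
    bill += max(0, storage_gb - included_gb) * gb_overage_fee
    return bill

# A quiet month vs. a heavy production month on the same contract.
print(monthly_bill(450_000, 800))      # 10000.0
print(monthly_bill(1_500_000, 2_500))  # 30750.0, roughly triple the base fee
```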

In order to get a full assessment of hosted versus on-premises, factors such as implementation, customization, upgrade cycles, support and maintenance, security, scalability, and integration(s) must be understood. For example, implementing a SaaS application is relatively easy, since it is using a ready-made platform that has already been provisioned, while on-premises applications take resources, equipment, and time to set up a new environment. In essence, the financial choice is whether the new system will tap the operating expense budget or the capital expense budget.

The key in assessing the advantages and disadvantages of SaaS or on-premises is one that is common to all technology acquisitions: the vendor. At the outset, the absolute key requirement is that the vendor has extensive experience within insurance technology. There are many vendors that purport to have deep domain experience in insurance. From what I've observed, however, in many applications sold to insurance companies, vendors are very likely taking a horizontal application and providing some level of uniqueness that makes it salable to insurance companies. This is very common in CRM and commissions applications, where vendors have created hundreds of applications, from managing sales to managing human resources to managing inventory. Vendors will claim insurance expertise, but a look under the hood will usually reveal an application that was built for, say, telecommunications or pharmaceuticals and verticalized to make it acceptable to insurance carriers and distributors. It's the old "one-size-fits-all" mentality.

Where the rubber hits the road in vendor selection is in looking at a vendor's expertise in integration and security. As experienced insurance IT managers are aware, insurance infrastructure can be a hodge-podge of technologies and applications that run the gamut from fairly modern to legacy. A vendor that doesn't have a track record of successful implementations with a variety of technologies is one that probably shouldn't be on your short list. As a starting point, look for applications with embedded integration platforms that you (not the SaaS IT/Support team) will have full access to. The same thing can be said regarding the privacy and security of data and personal and private information.

Insurance carriers are very aware of the security implications of SaaS, where security is dependent on the vendor. A corollary to the vendor's experience in integrations is the vendor's experience in implementing fixes of the software or migrating existing clients to new versions of the software. Again, vendors that have dozens of satisfied clients are more likely to have the experience and talent to become a credible business partner. One more tip on vendor selection:

Ask for a report detailing system outages for the last two years that shows the nature of the outage, core issue and time to resolution. If the vendor refuses to deliver this document, think again about adding them to your short list.

Some large vendors in our space have recently dropped their on-premises solutions and 'gone all in' for the cloud. It might be safer to go with a vendor that can provide cloud or on-premises solutions, leaving the final hosting decision in your hands. You can always migrate to the cloud later if you're not comfortable with the change. The choice between the cloud and on-premises is very much like choosing between the two paths that 'diverged in the wood.'

There are certainly advantages to each alternative, but ultimately the key driver is whether the vendor can accommodate both software delivery models, on-premises and SaaS. Vendors that have the capability to work with clients with unique requirements that mandate enterprise software or SaaS are vendors that have the overall experience to help you choose which path to take.

John Sarich is an industry analyst and VP of Strategy at VUE Software. He is a senior solutions architect, strategic consultant and business advisor with over 25 years of insurance industry experience. He can be reached at John.Sarich@VUESoftware.com.

Originally published on PropertyCasualty360. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

Read this article:

Pressing Tech Issue: Enterprise Software Vs. Cloud Computing? - Credit Union Times

Alibaba to enter European cloud computing market in mid-2017 – Air Cargo World (registration)

As it seeks to expand its global reach outside China, Alibaba Cloud announced that it will introduce MaxCompute to Europe's US$18.9 billion cloud computing market in the second half of 2017.

The E.U. cloud market is smaller than, say, its U.S. counterpart, but it is already populated by the likes of Amazon, Salesforce, Microsoft and IBM, setting the stage for competition. Alibaba said it hopes to get a piece of this action by convincing E.U. businesses that its artificial intelligence (AI) features can "unlock the immense value of their data, using a highly secure and scalable cloud infrastructure and AI programs," said Wanli Min, an Alibaba Cloud scientist responsible for AI and data-mining.

The Chinese cloud provider's AI program is already popular in China, where it has been applied to easing traffic congestion, diagnosing disease through medical imaging, and even predicting the winners of reality show contests.

MaxCompute intends to deliver these same services to the E.U. market, using advanced AI and deep-learning technologies and algorithms for data storage, modeling and analytics.

Alibaba Cloud opened its first European data center in Germany in late 2016. The company has not revealed what its E.U. customer base looks like, but said it is in discussions with companies in Europe about using MaxCompute.

Alibaba corporate headquarters

Read more:

Alibaba to enter European cloud computing market in mid-2017 - Air Cargo World (registration)

Quantum computing, the machines of tomorrow | The Japan Times – The Japan Times

NEW YORK – It is a sunny Tuesday morning in late March at IBM's Thomas J. Watson Research Center. The corridor from the reception area follows the long, curving glass curtain-wall that looks out over the visitors' parking lot to leafless trees covering a distant hill in Yorktown Heights, New York, an hour north of Manhattan. Walk past the podium from the Jeopardy! episodes at which IBM's Watson smote the human champion of the TV quiz show, turn right into a hallway, and you will enter a windowless lab where a quantum computer is chirping away.

Actually, chirp isn't quite the right word. It is a somewhat metallic sound, chush chush chush, that is made by the equipment that lowers the temperature inside a so-called dilution refrigerator to within hailing distance of absolute zero. Encapsulated in a white canister suspended from a frame, the dilution refrigerator cools a superconducting chip studded with a handful of quantum bits, or qubits.

Quantum computing has been around, in theory if not in practice, for several decades. But these new types of machines, designed to harness quantum mechanics and potentially process unimaginable amounts of data, are certifiably a big deal. "I would argue that a working quantum computer is perhaps the most sophisticated technology that humans have ever built," said Chad Rigetti, founder and chief executive officer of Rigetti Computing, a startup in Berkeley, Calif. Quantum computers, he says, harness nature at a level we became aware of only about 100 years ago, one that isn't apparent to us in everyday life.

What is more, the potential of quantum computing is enormous. Tapping into the weird way nature works could speed up computing so that some problems now intractable for classical computers could finally yield solutions. And maybe not just for chemistry and materials science. With practical breakthroughs in speed on the horizon, Wall Street's antennae are twitching.

The second investment that CME Group Inc.'s venture arm ever made was in 1QB Information Technologies Inc., a quantum-computing software company in Vancouver. "From the start at CME Ventures, we've been looking further ahead at transformational innovations and technologies that we think could have an impact on the financial-services industry in the future," said Rumi Morales, head of CME Ventures LLC.

That 1QBit financing round, in 2015, was led by Royal Bank of Scotland. Kevin Hanley, RBS's director of innovation, says quantum computing is likely to have the biggest impact on industries that are data-rich and time-sensitive. "We think financial services is kind of in the cross hairs of that profile," he said.

Goldman Sachs Group Inc. is an investor in D-Wave Systems Inc., another quantum player, as is In-Q-Tel, the CIA-backed venture capital company, says Vern Brownell, CEO of D-Wave. The British Columbia-based company makes machines that do something called quantum annealing. "Quantum annealing is basically using the quantum computer to solve optimization problems at the lowest level," Brownell said. "We've taken a slightly different approach where we're actually trying to engage with customers, make our computers more and more powerful, and provide this advantage to them in the form of a programmable, usable computer."

Marcos Lopez de Prado, a senior managing director at Guggenheim Partners LLC who is also a scientific adviser at 1QBit and a research fellow at the U.S. Department of Energy's Lawrence Berkeley National Laboratory, says it is all about context. "The reason quantum computing is so exciting is its perfect marriage with machine learning," he said. "I would go as far as to say that currently this is the main application for quantum computing."

Part of that simply derives from the idea of a quantum computer: harnessing a physical device to find an answer, Lopez de Prado says. He sometimes explains it by pointing to the video game Angry Birds. When you play it on your iPad, the central processing units use some mathematical equations that have been programmed into a library to simulate the effects of gravity and the interaction of objects bouncing and colliding. "This is how digital computers work," he said.

By contrast, quantum computers turn that approach on its head, Lopez de Prado says. "The paradigm for quantum computers is to throw some birds and see what happens. Encode into the quantum microchip this problem; these are your birds and where you throw them from, so what's the optimal trajectory? Then you let the computer check all possible solutions, essentially, or a very large combination of them, and come back with an answer," he said. "In a quantum computer, there is no mathematician cracking the problem," he said. "The laws of physics crack the problem for you."

"The fundamental building blocks of our world are quantum mechanical. If you look at a molecule," said Dario Gil, vice president for science and solutions at IBM Research, "the reason molecules form and are stable is because of the interactions of these electron orbitals. Each calculation in there, each orbital, is a quantum mechanical calculation." The number of those calculations, in turn, increases exponentially with the number of electrons you're trying to model. "By the time you have 50 electrons, you have 2 to the 50th power calculations," Gil said. "That's a phenomenally large number, so we can't compute it today," he said. (For the record, it is about 1.125 quadrillion. So if you fired up your laptop and started cranking through several calculations a second, it would take a few million years to run through them all.) Connecting information theory to physics could provide a path to solving such problems, Gil says. A 50-qubit quantum computer might begin to be able to do it.
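To put Gil's numbers in perspective, here is a quick back-of-the-envelope check in Python. The count of 2 to the 50th power comes from the article; the rate of 10 calculations per second is an illustrative assumption standing in for "several calculations a second."

```python
# Back-of-the-envelope check of the figures quoted above: the size of 2^50,
# and roughly how long a laptop would need to enumerate the calculations one
# by one at an assumed rate of 10 per second.

states = 2 ** 50
print(f"2^50 = {states:,}")          # 1,125,899,906,842,624 (about 1.125 quadrillion)

rate_per_second = 10                  # assumption: "several calculations a second"
seconds = states / rate_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"At {rate_per_second}/s: about {years:,.0f} years")   # a few million years
```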

Landon Downs, president and co-founder of 1QBit, says it is now becoming possible to unlock the computational power of the quantum world. "This has huge implications for producing new materials or creating new drugs, because we can actually move from a paradigm of discovery to a new era of quantum design," he said in an email. Rigetti, whose company is building hybrid quantum-classical machines, says one moonshot use of quantum computing could be to model catalysts that remove carbon and nitrogen from the atmosphere and thereby help fix global warming.

The quantum-computing community hums with activity and excitement these days. Teams around the world at startups, corporations, universities, and government labs are racing to build machines using a welter of different approaches to process quantum information. Superconducting qubit chips too elementary for you? How about trapped ions, which have brought together researchers from the University of Maryland and the National Institute of Standards and Technology? Or maybe the topological approach that Microsoft Corp. is developing through an international effort called Station Q? The aim is to harness a particle called a non-abelian anyon which has not yet been definitively proven to exist.

These are early days, to be sure. As of late May, the number of quantum computers in the world that clearly, unequivocally do something faster or better than a classical computer remains zero, according to Scott Aaronson, a professor of computer science and director of the Quantum Information Center at the University of Texas at Austin. Such a signal event would establish "quantum supremacy." In Aaronson's words: "That we don't have yet."

Yet someone may accomplish the feat as soon as this year. Most insiders say one clear favorite is a group at Google Inc. led by John Martinis, a physics professor at the University of California at Santa Barbara. According to Martinis, the group's goal is to achieve supremacy with a 49-qubit chip. As of late May, he says, the team was testing a 22-qubit processor as an intermediate step toward a showdown with a classical supercomputer. "We are optimistic about this, since prior chips have worked well," he said in an email.

The idea of using quantum mechanics to process information dates back decades. One key event happened in 1981, when International Business Machines Corp. and MIT co-sponsored a conference on the physics of computation at the university's Endicott House in Dedham, Massachusetts. At the conference, Richard Feynman, the famed physicist, proposed building a quantum computer. "Nature isn't classical, damn it, and if you want to make a simulation of nature, you'd better make it quantum mechanical," he said in his talk. "And by golly, it's a wonderful problem, because it doesn't look so easy."

He got that part right. The basic idea is to take advantage of a couple of the weird properties of the atomic realm: superposition and entanglement. Superposition is the mind-bending observation that a particle can be in two states at the same time. Bring out your ruler to get a measurement, however, and the particle will collapse into one state or the other. And you won't know which until you try, except in terms of probabilities. This effect is what underlies Schrödinger's cat, the thought-experiment animal that is both alive and dead in a box until you sneak a peek.

Sure, bending your brain around that one doesn't come especially easy; nothing in everyday life works that way, of course. Yet about 1 million experiments since the early 20th century show that superposition is a thing. And if superposition happens to be your thing, the next step is figuring out how to strap such a crazy concept into a harness.

Enter qubits. Classical bits can be a 0 or a 1; run a string of them together through logic gates (AND, OR, NOT, etc.), and you will multiply numbers, draw an image, and whatnot. A qubit, by contrast, can be a 0, a 1, or both at the same time.

Ready for entanglement? (You are in good company if you balk; Albert Einstein famously rebelled against the idea, calling it "spooky action at a distance.") Well, let's say two qubits were to get entangled. Gil says that would make them "perfectly correlated." A quantum computer could then utilize a menagerie of distinctive logic gates. The so-called Hadamard gate, for example, puts a qubit into a state of perfect superposition. (There may be something called a "square root of NOT" gate, but let's take a pass on that one.) If you tap the superposition and entanglement in clever arrangements of the weird quantum gates, you start to get at the potential power of quantum computing.

If you have two qubits, you can explore four states: 00, 01, 10, and 11. (Note that that's 4, or 2 raised to the power of 2.) "When I perform a logical operation on my quantum computer, I can operate on all of this at once," Gil said. And the number of states you can look at is 2 raised to the power of the number of qubits. So if you could make a 50-qubit universal quantum computer, you could in theory explore all of those 1.125 quadrillion states at the same time.
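For readers who want to see the bookkeeping, here is a minimal state-vector sketch in Python with NumPy. It is not tied to any vendor's hardware or software; it simply shows the Hadamard gate putting a qubit into equal superposition and how the number of amplitudes doubles with every added qubit, as Gil describes.

```python
import numpy as np

# A qubit is a length-2 vector of amplitudes; n qubits need a length-2**n vector.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate

ket0 = np.array([1.0, 0.0])            # the state |0>
print(np.round(H @ ket0, 3))           # [0.707 0.707] -> equal superposition of 0 and 1

# Two qubits span the four basis states 00, 01, 10, 11.
two_qubits = np.kron(ket0, ket0)       # the state |00>
print(len(two_qubits))                 # 4 amplitudes

# The state space doubles with every qubit added.
for n in (2, 10, 50):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```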

What gives quantum computing its special advantage, says Aaronson, of the University of Texas, is that quantum mechanics is based on things called amplitudes. "Amplitudes are sort of like probabilities, but they can also be negative; in fact, they can also be complex numbers," he said. So if you want to know the probability that something will happen, you add up the amplitudes for all the different ways that it can happen, he says.

"The idea with a quantum computation is that you try to choreograph a pattern of interference so that for each wrong answer to your problem, some paths leading there have positive amplitudes and some have negative amplitudes, so they cancel each other out," Aaronson said. "Whereas the paths leading to the right answer all have amplitudes that are in phase with each other." The tricky part is that you have to arrange everything not knowing in advance which answer is the right one. "So I would say it's the exponentiality of quantum states combined with this potential for interference between positive and negative amplitudes that's really the source of the power of quantum computing," he said.
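Aaronson's point about cancellation can be seen in the same toy state-vector math. The sketch below is an illustration, not a real algorithm: it applies a Hadamard gate twice, so the two "paths" to the outcome 1 carry opposite-sign amplitudes and interfere away, while the paths to 0 stay in phase.

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

after_one_H = H @ ket0          # amplitudes +1/sqrt(2) for both 0 and 1
after_two_H = H @ after_one_H   # paths to 1 carry +1/2 and -1/2 and cancel

print(np.round(after_one_H, 3))  # [0.707 0.707] -> both outcomes equally likely
print(np.round(after_two_H, 3))  # [1. 0.]       -> the amplitude of 1 has cancelled to zero
```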

Did we mention that there are problems a classical computer can't solve in any reasonable amount of time? You probably harness one such difficulty every day when you use encryption on the internet. The problem is that it is not easy to find the prime factors of a large number. To review, the prime factors of 15 are 5 and 3. That is easy. If the number you are trying to factor has, say, 200 digits, it is very hard. Even with your laptop running an excellent algorithm, you might have to wait years to find the prime factors.
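To make the asymmetry concrete, here is a naive classical factoring sketch using trial division. The specific numbers are just examples, but they show why 15 falls apart instantly while a 200-digit product of two large primes is hopeless for this kind of brute-force search.

```python
# Trial division: the work grows with the size of the smallest prime factor,
# which is why products of two very large primes resist this approach.

def smallest_factor(n: int) -> int:
    """Return the smallest prime factor of n (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

print(smallest_factor(15))           # 3, found immediately
print(smallest_factor(2**31 - 1))    # 2147483647 is prime: ~46,000 divisions already
```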

That brings us to another milestone in quantum computing: Shor's algorithm. Published in 1994 by Peter Shor, now a math professor at MIT, the algorithm demonstrated an approach that you could use to find the factors of a big number if you had a quantum computer, which didn't exist at the time. Essentially, Shor's algorithm would perform some operations that would point to the regions of numbers in which the answer was most likely to be found.

The following year, Shor also discovered a way to perform quantum error correction. "Then people really got the idea that, wow, this is a different way of computing things and is more powerful in certain test cases," said Robert Schoelkopf, director of the Yale Quantum Institute and Sterling Professor of Applied Physics and Physics. "Then there was a big upswell of interest from the physics community to figure out how you could make quantum bits and logic gates between quantum bits and all of those things."

Two decades later, those things are here.

See the article here:

Quantum computing, the machines of tomorrow | The Japan Times - The Japan Times

Toward optical quantum computing – MIT News

Ordinarily, light particles (photons) don't interact. If two photons collide in a vacuum, they simply pass through each other.

An efficient way to make photons interact could open new prospects for both classical optics and quantum computing, an experimental technology that promises large speedups on some types of calculations.

In recent years, physicists have enabled photon-photon interactions using atoms of rare elements cooled to very low temperatures.

But in the latest issue of Physical Review Letters, MIT researchers describe a new technique for enabling photon-photon interactions at room temperature, using a silicon crystal with distinctive patterns etched into it. In physics jargon, the crystal introduces nonlinearities into the transmission of an optical signal.

"All of these approaches that had atoms or atom-like particles require low temperatures and work over a narrow frequency band," says Dirk Englund, an associate professor of electrical engineering and computer science at MIT and senior author on the new paper. "It's been a holy grail to come up with methods to realize single-photon-level nonlinearities at room temperature under ambient conditions."

Joining Englund on the paper are Hyeongrak Choi, a graduate student in electrical engineering and computer science, and Mikkel Heuck, who was a postdoc in Englund's lab when the work was done and is now at the Technical University of Denmark.

Photonic independence

Quantum computers harness a strange physical property called superposition, in which a quantum particle can be said to inhabit two contradictory states at the same time. The spin, or magnetic orientation, of an electron, for instance, could be both up and down at the same time; the polarization of a photon could be both vertical and horizontal.

If a string of quantum bits, or qubits (the quantum analog of the bits in a classical computer), is in superposition, it can, in some sense, canvass multiple solutions to the same problem simultaneously, which is why quantum computers promise speedups.

Most experimental qubits use ions trapped in oscillating magnetic fields, superconducting circuits, or, like Englund's own research, defects in the crystal structure of diamonds. With all these technologies, however, superpositions are difficult to maintain.

Because photons aren't very susceptible to interactions with the environment, they're great at maintaining superposition; but for the same reason, they're difficult to control. And quantum computing depends on the ability to send control signals to the qubits.

That's where the MIT researchers' new work comes in. If a single photon enters their device, it will pass through unimpeded. But if two photons in the right quantum states try to enter the device, they'll be reflected back.

The quantum state of one of the photons can thus be thought of as controlling the quantum state of the other. And quantum information theory has established that simple quantum gates of this type are all that is necessary to build a universal quantum computer.
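As a rough illustration of the kind of controlled two-qubit gate that statement refers to, here is a textbook CNOT written out in Python. This is generic quantum-information bookkeeping, not a model of the MIT photonic device itself: the first qubit controls whether the second one is flipped, and combined with single-qubit gates it produces an entangled pair.

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],   # |00> -> |00>
                 [0, 1, 0, 0],   # |01> -> |01>
                 [0, 0, 0, 1],   # |10> -> |11>  (control qubit flips the target)
                 [0, 0, 1, 0]])  # |11> -> |10>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

ket00 = np.zeros(4)
ket00[0] = 1.0                                   # start in |00>
bell = CNOT @ np.kron(H, I2) @ ket00             # Hadamard on qubit 1, then CNOT
print(np.round(bell, 3))                         # [0.707 0. 0. 0.707] -> an entangled pair
```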

Unsympathetic resonance

The researchers' device consists of a long, narrow, rectangular silicon crystal with regularly spaced holes etched into it. The holes are widest at the ends of the rectangle, and they narrow toward its center. Connecting the two middle holes is an even narrower channel, and at its center, on opposite sides, are two sharp concentric tips. The pattern of holes temporarily traps light in the device, and the concentric tips concentrate the electric field of the trapped light.

The researchers prototyped the device and showed that it both confined light and concentrated the light's electric field to the degree predicted by their theoretical models. But turning the device into a quantum gate would require another component, a dielectric sandwiched between the tips. (A dielectric is a material that is ordinarily electrically insulating but will become polarized when exposed to an electric field: all its positive and negative charges will align in the same direction.)

When a light wave passes close to a dielectric, its electric field will slightly displace the electrons of the dielectric's atoms. When the electrons spring back, they wobble, like a child's swing when it's pushed too hard. This is the nonlinearity that the researchers' system exploits.

The size and spacing of the holes in the device are tailored to a specific light frequency, the device's resonance frequency. But the nonlinear wobbling of the dielectric's electrons should shift that frequency.

Ordinarily, that shift is mild enough to be negligible. But because the sharp tips in the researchers' device concentrate the electric fields of entering photons, they also exaggerate the shift. A single photon could still get through the device. But if two photons attempted to enter it, the shift would be so dramatic that they'd be repelled.
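A toy numerical model helps show why that matters. The sketch below uses a generic Lorentzian lineshape and made-up numbers for the linewidth and the photon-induced shifts; it is not based on the actual device's parameters, which the article does not give.

```python
# Toy resonator model: transmission falls off as the light is detuned from
# resonance. A small shift barely matters; a shift much larger than the
# linewidth shuts the device: one photon passes, a pair is rejected.

def transmission(detuning, linewidth):
    """Lorentzian transmission of a resonator versus detuning from resonance."""
    return 1.0 / (1.0 + (detuning / linewidth) ** 2)

linewidth = 1.0            # assumed linewidth (arbitrary units)
shift_one_photon = 0.1     # assumed small shift caused by a single photon
shift_two_photons = 10.0   # assumed large shift when two photons enter together

print(transmission(shift_one_photon, linewidth))   # ~0.99: a lone photon gets through
print(transmission(shift_two_photons, linewidth))  # ~0.01: the pair is turned away
```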

Practical potential

The device can be configured so that the dramatic shift in resonance frequency occurs only if the photons attempting to enter it have particular quantum properties: specific combinations of polarization or phase, for instance. The quantum state of one photon could thus determine the way in which the other photon is handled, the basic requirement for a quantum gate.

Englund emphasizes that the new research will not yield a working quantum computer in the immediate future. Too often, light entering the prototype is still either scattered or absorbed, and the quantum states of the photons can become slightly distorted. But other applications may be more feasible in the near term. For instance, a version of the device could provide a reliable source of single photons, which would greatly abet a range of research in quantum information science and communications.

"This work is quite remarkable and unique because it shows strong light-matter interaction, localization of light, and relatively long-time storage of photons at such a tiny scale in a semiconductor," says Mohammad Soltani, a nanophotonics researcher in Raytheon BBN Technologies' Quantum Information Processing Group. "It can enable things that were questionable before, like nonlinear single-photon gates for quantum information. It works at room temperature, it's solid-state, and it's compatible with semiconductor manufacturing. This work is among the most promising to date for practical devices, such as quantum information devices."

The rest is here:

Toward optical quantum computing - MIT News

Cybersecurity Attacks Are a Global Threat. Chinese Scientists Have the Answer: Quantum Mechanics – Newsweek

Quantum physics is an often mind-boggling branch of science filled with strange behavior and bizarre implications. For many people, the mere mention of the term is enough to send them hurtling in the opposite direction, like an electron bouncing off the center of an atom.

But evidence is mounting that the future of technology lies in quantum mechanics, which focuses on how the smallest things in our universe work. And a new breakthrough by scientists in China has just brought the world one very big step closer to this quantum revolution. Hundreds of miles closer, in fact. So it's as good a time as any to understand why quantum physics is making such waves.

An Atlas 5 rocket carrying a national security satellite launched from California in 2008. Chinese physicists have used a satellite to beat the distance record for quantum entanglement. Gene Blevins/Reuters


Quantum physics is all about waves. And particles. Together. Sort of.

Mostly, we think of light as something that occurs in waves and matter as distinct particles. But theorist Max Planck's attempt in 1900 to explain observations about colors emitted from hot objects started scientists down a path that transformed our understanding of how life works at the very smallest scale.

The first step was realizing that light behaves like a stream of individual particles, called photons. Albert Einstein came to this conclusion following Plancks work. Each photon contains a discrete amount of energy.

Subsequent research by Niels Bohr and others disrupted what physicists understood about electrons, the negatively charged particles that swirl around the heavy centers of the atoms that make up the elements (gold, silver, potassium, calcium, etc.) that in turn make up matter. That disruption was accentuated by Louis de Broglie, who realized that if light can behave like a particle, then maybe electrons, which physicists had always thought of as particles, could behave like waves. Numerous experiments proved that to be the case. Photons behave like waves and particles. Electrons behave like waves and particles. The type of measurement you do determines how a photon or an electron behaves.

One of the most intriguing effects of quantum physics is something called entanglement. With quantum entanglement, two particles derived from the same source behave the same way, even when they are far apart. The state of either particle cannot be determined until it is measured, and the act of measuring is what determines its state. And the measurement of one particle affects the measurement of the other particle. This thinking is embodied by Erwin Schrödinger's thought problem about his famous cat.

If you split photon A into a photon pair, B and C, measuring B will tell you, with absolute certainty, the measure of C. Paul Kwiat, physicist at the University of Illinois, gives the analogy of flipping a coin. If one flipped coin results in heads, heads, tails, heads, tails, tails, heads, then the entangled coin, placed hundreds of miles away, would follow the same sequence. "That's not a behavior you see with coins," says Kwiat. "That's where quantum entanglement is pretty weird." Two things hundreds of miles away behaving as one: That's quantum entanglement. And it's real. Albert Einstein called it "spukhafte Fernwirkung," or spooky action at a distance.
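Kwiat's coin analogy is easy to mimic in a few lines of Python. The simulation below is purely classical bookkeeping, not a quantum calculation: each pair's outcome is random, but identical at both stations, which is what "behaving as one" looks like in the data.

```python
import random

def entangled_flips(n_pairs):
    """Toy model of Kwiat's analogy: random outcomes that always match."""
    alice, bob = [], []
    for _ in range(n_pairs):
        outcome = random.choice(["heads", "tails"])  # random for each pair...
        alice.append(outcome)
        bob.append(outcome)                          # ...but identical at both stations
    return alice, bob

a, b = entangled_flips(7)
print(a)
print(b)
print("all matched:", a == b)   # always True in this toy model
```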

For more on the history of quantum physics and the entanglement phenomenon, author Chad Orzel, who teaches physics at Union College in Schenectady, New York, has some excellent videos.

Beyond the weirdness factor, quantum entanglement has broad implications for computing and information sharing. Entanglement distribution (for example, the splitting of a single photon into two linked photons) could be used to create a secure internet connection. The technology, called quantum cryptography, would allow the users to detect any eavesdropper on the channel. The reason you can detect the eavesdropper is that such an intruder would necessarily alter the entangled photons by his or her presence.

"The principle allows for a secure communication channel that is unhackable," says Jonathan Dowling, a physicist at Louisiana State University. "When the Chinese roll out this type of communications nationwide, which is their plan," says Dowling, "then no matter how many NSA computers you string together, you are never going to be able to tap into their system."
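The eavesdropper-detection idea can be sketched with a toy simulation. The code below models the statistics of an intercept-resend attack on a prepare-and-measure scheme in the style of BB84; it is a stand-in for, not a model of, the entanglement-based system described here. Without an intruder the shared bits agree, while an intercept-resend attacker shows up as roughly 25 percent errors.

```python
import random

def run_qkd(n_bits, eavesdrop):
    """Return the error rate on the sifted key for a BB84-style exchange."""
    errors = kept = 0
    for _ in range(n_bits):
        bit, basis_a = random.randint(0, 1), random.choice("ZX")
        send_bit, send_basis = bit, basis_a
        if eavesdrop:
            basis_e = random.choice("ZX")
            # Eve reads the true bit only if she guesses the basis; otherwise random.
            bit_e = bit if basis_e == basis_a else random.randint(0, 1)
            send_bit, send_basis = bit_e, basis_e   # she re-sends in her own basis
        basis_b = random.choice("ZX")
        bit_b = send_bit if basis_b == send_basis else random.randint(0, 1)
        if basis_a == basis_b:                      # keep only matching-basis rounds
            kept += 1
            errors += (bit_b != bit)
    return errors / kept

print(run_qkd(100_000, eavesdrop=False))   # ~0.0: clean channel
print(run_qkd(100_000, eavesdrop=True))    # ~0.25: the intruder betrays herself
```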

A new study in Science, by Juan Yin and colleagues at the University of Science and Technology in China and several other institutions there, has brought this future technology within much closer reach. The researchers split a photon on a satellite and sent the two resulting photons in two different directions, aimed at ground stations in China. The ground stations were more than 700 miles apart from one another. The distance from the satellite, which was constantly in motion, to each ground station varied from 300 to 1,200 miles.

The researchers managed to send photon pairs to different ground stations repeatedly and confirmed that the photons were entangled. Using a laser pointer-like source, they made about 6 million photon pairs per second. About one pair per second reached the ground stations. Kwiat says it's like throwing a dime into a toll booth bucket while driving at high speed, only you're throwing a much tinier object from much farther away, and at a much faster speed. Measurements confirmed that the photon pairs had the same polarization, proving that they were entangled.
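Those two rates imply an enormous loss, which a couple of lines of arithmetic make explicit. The decibel figure below is derived from the article's numbers, not reported by the researchers.

```python
import math

# Source figures: roughly 6 million pairs generated per second,
# about 1 pair per second detected at the ground stations.
pairs_generated = 6_000_000
pairs_detected = 1
loss_db = 10 * math.log10(pairs_generated / pairs_detected)
print(f"Roughly 1 pair in {pairs_generated:,} survives, about {loss_db:.0f} dB of loss")
```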

Although previous studies have managed to achieve similar results, never has it been done over such a great distance and from a satellite. (The prior record demonstrated entanglement across two of the Canary Islands, about 89 miles apart.) "It's a beautiful experiment," says Kwiat. "They demonstrated the persistence of entanglement over a longer distance than any experiment before by roughly a factor of 10."

Dowling says that this achievement proves that the quantum-based technologies many physicists envision are attainable. "The long-term goal is to build a quantum internet where future computers around the globe are linked together in an uncrackable network of extraordinary computational power," says Dowling. "The satellite will go down in history as the first link in the quantum internet."

The Chinese physicists are not the only team on the quest for this technology. Quantum cryptography systems are commercially available, and researchers in several countries, including the U.S., Canada, Italy and Singapore, are also forging the way ahead, says Kwiat, who is among them. Google is also working on quantum information science.

Still, the new study is a huge breakthrough because it proves entanglement can be achieved from a satellite and across this large distance. "We have done something that was absolutely impossible without the satellite," says senior author Jian-Wei Pan. The next step, he says, is to perform more experiments with light from space, across yet longer distances and at faster speeds, with a goal of controlling quantum states and understanding how gravity affects quantum behavior.

Link:

Cybersecurity Attacks Are a Global Threat. Chinese Scientists Have the Answer: Quantum Mechanics - Newsweek

Dianne Feinstein is done pulling punches when it comes to Donald Trump – CNN

And, man, did she have something to say Friday. Here's her full statement on President Donald Trump's latest tweets about the special counsel investigation being led by former FBI Director Bob Mueller:

"I'm growing increasingly concerned that the President will attempt to fire not only Robert Mueller, the special counsel investigating possible obstruction of justice, but also Deputy Attorney General Rosenstein who appointed Mueller.

"The message the President is sending through his tweets is that he believes the rule of law doesn't apply to him and that anyone who thinks otherwise will be fired. That's undemocratic on its face and a blatant violation of the President's oath of office.

"First of all, the President has no authority to fire Robert Mueller. That authority clearly lies with the attorney generalor in this case, because the attorney general has recused himself, with the deputy attorney general. Rosenstein testified under oath this week that he would not fire Mueller without good cause and that none exists.

"And second, if the President thinks he can fire Deputy Attorney General Rosenstein and replace him with someone who will shut down the investigation, he's in for a rude awakening. Even his staunchest supporters will balk at such a blatant effort to subvert the law.

"It's becoming clear to me that the President has embarked on an effort to undermine anyone with the ability to bring any misdeeds to light, be that Congress, the media or the Justice Department. The Senate should not let that happen. We're a nation of laws that apply equally to everyone, a lesson the President would be wise to learn."

Just a few lines worth reading again:

* "The message the President is sending through his tweets is that he believes the rule of law doesn't apply to him."

* "If the President thinks he can fire Deputy Attorney General Rosenstein and replace him with someone who will shut down the investigation, he's in for a rude awakening."

* "It's becoming clear to me that the President has embarked on an effort to undermine anyone with the ability to bring any misdeeds to light."

* "We're a nation of laws that apply equally to everyone, a lesson the President would be wise to learn."

Any one of those lines is a 99-mile-an-hour fastball thrown way, way inside. Taken all together, it's a statement very clearly designed to send a message to Trump.

That message? Enough! Time to start acting like a president.

To be clear: Feinstein is a Democrat. She represents one of the most Democratic states in the country and risks absolutely nothing, politically speaking, by issuing a statement like this one that blisters Trump.

But she is also one of the institutions in the Senate, having spent the last 25 years in the chamber. Unlike her longtime colleague Barbara Boxer, who retired in 2016, Feinstein is not seen as terribly partisan and generally enjoys strong across-the-aisle relationships.

"Every conversation that I've had with her now that she's ranking member has been not only friendly, but has been productive, and these little heads-to-heads that you see us having when the committee's actually functioning, work things out right then."

In short: Feinstein isn't just a predictable partisan or someone who pops off at the slightest political provocation. This statement is a purposeful attempt to make clear that Trump has crossed a line and that he needs to take one big step back.

My prediction: He won't.

Read the original here:

Dianne Feinstein is done pulling punches when it comes to Donald Trump - CNN

Why we still really need to see Donald Trump’s tax returns – CNN

In short, Trump's financial disclosure is nice. His tax returns -- which he became the first presidential candidate in four decades to refuse to release -- would be far better.

Trump's wealth, as documented in the report, is vast. He took in hundreds of millions of dollars in income over the past 15 months while carrying liabilities north of $300 million. (The Post estimated that Trump's assets are worth at least $1.4 billion.)

Trump raked in $288 million from his golf courses alone; he made at least $37 million from Mar-a-Lago, his Florida resort and a frequent weekend stomping ground for the first family.

The big problem with financial disclosure forms is that they only require ranges of assets and liabilities, making it extremely difficult to get an accurate sense of what Trump's true wealth is.

"Overall, Trump reported liabilities of at least $311 million -- mortgages and loans. But the number could be much higher because he was required only to report a range in value for each loan."

"Of the 16 loans he reported, five were worth more than $50 million each; one is worth between $25 million and $50 million; and seven were worth between $5 million and $25 million apiece. Another three loans combined were worth less than $1 million."

All of which makes the rhetoric coming from Trump and his top aides regarding his level of transparency about his finances misleading.

That isn't accurate. But you can bet that Trump's decision to voluntarily file his financial disclosure forms months ahead of time will become a new talking point for Trump and his aides whenever the questions regarding his taxes inevitably arise again.

The key point to remember: Financial disclosures are simply no substitute for tax returns when it comes to understanding someone's financial standing and various commitments.

Think of it this way. You go to a baseball game. Financial disclosure forms are like sitting near the top of the upper deck. You can see a baseball game is going on but it's tough to make out the individual players or figure out what pitch the pitcher is throwing. Tax returns are like having front-row seats behind home plate. You can see the reaction on the batter's face when he disagrees with a call. You can see how the teams interact -- both with each other and amongst themselves. You can hear the pop of the fastball hitting the catcher's mitt.

It's an entirely different game and experience.

Right now, the American public is sitting way up in the rafters of Trump Financial Stadium. You have a vague sense of what's going on. But unless and until he releases his tax returns -- BREAKING: He probably won't! -- we'll all be squinting to try to figure out exactly what we're looking at.

See more here:

Why we still really need to see Donald Trump's tax returns - CNN