Russian Scientist Gets Award For Breakthrough Research In The Development Of Quantum Computers – Modern Ghana

St. Petersburg State University professor Alexey Kavokin has received the international Quantum Devices Award in recognition of his breakthrough research in the development of quantum computers. Professor Kavokin is the first Russian scientist to be awarded this honorary distinction.

Alexey Kavokin's scientific work has contributed to the creation of polariton lasers, which consume several times less energy than conventional semiconductor lasers. Most importantly, polariton lasers can eventually set the stage for the development of qubits, the basic elements of the quantum computers of the future. These technologies contribute significantly to the development of quantum computing systems.

The Russian scientist's success stems from the fact that the Russian Federation is presently a world leader in polaritonics, a field of science that deals with light-matter quasiparticles, or "liquid light."

"Polaritonics is the electronics of the future," Alexey Kavokin says. "Developed on the basis of liquid light, polariton lasers can put our country ahead of the whole world in the quantum technologies race. Replacing the electric current with light in computer processors alone can save billions of dollars by reducing heat loss during information transfer."

The physicist notes that while US giants such as Google and IBM are investing heavily in quantum technologies based on superconductors, Russian scientists are pursuing a much cheaper and potentially more promising path: developing a polariton platform for quantum computing.

Alexey Kavokin heads the Igor Uraltsev Spin Optics Laboratory at St. Petersburg State University, funded by a mega-grant provided by the Russian government. He is also head of the Quantum Polaritonics group at the Russian Quantum Center. Alexey Kavokin is Professor at the University of Southampton (England), where he heads the Department of Nanophysics and Photonics. He is Scientific Director of the Mediterranean Institute of Fundamental Physics (Italy). In 2018, he headed the International Center for Polaritonics at Westlake University in Hangzhou, China.

The Quantum Devices Award was established in 2000 to recognize innovative contributions to the field of compound semiconductor devices and devices with quantum nanostructures. It is funded by the Japanese section of the steering committee of the International Symposium on Compound Semiconductors (ISCS). The Quantum Devices Award was previously conferred on scientists from Japan, Switzerland, Germany, and other countries, but this is the first time the award has been received by a scientist from Russia.

Due to the coronavirus pandemic, the award presentation will be held next year in Sweden.

Read more:
Russian Scientist Gets Award For Breakthrough Research In The Development Of Quantum Computers - Modern Ghana

The University of New Mexico Becomes IBM Q Hub’s First University Member – UNM Newsroom

Q Hub membership and new faculty hire will build on existing quantum expertise and investments

Under the direction of Michael Devetsikiotis, chair of the Department of Electrical and Computer Engineering (ECE), The University of New Mexico recently joined the IBM Q Hub at North Carolina State University as its first university member.

The NC State IBM Q Hub is a cloud-based quantum computing hub, one of six worldwide and the first in North America to be part of the global IBM Q Network. This global network links national laboratories, tech startups, Fortune 500 companies, and research universities, providing access to IBM's largest quantum computing systems.

Michael Devetsikiotis, chair, Department of Electrical and Computer Engineering

Mainstream computer processors inside our laptops, desktops, and smartphones manipulate bits, information that can only exist as either a 1 or a 0. In other words, the computers we are used to function through programming that dictates a series of commands, with choices restricted to yes/no or "if this, then that." Quantum computers, on the other hand, process quantum bits, or qubits, which are not restricted to a binary choice. Through complex physics concepts such as quantum entanglement, quantum computers can choose "if this, then that" or both. This allows quantum computers to process information more quickly, and in unique ways, compared to conventional computers.
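To make that difference concrete, here is a minimal sketch using IBM's open source Qiskit library (assuming a pre-1.0 Qiskit release that still bundles the Aer simulator and the execute helper); it puts two qubits into an entangled state, something two classical bits, each stuck at 0 or 1, cannot represent.

```python
from qiskit import QuantumCircuit, Aer, execute

circuit = QuantumCircuit(2, 2)
circuit.h(0)                     # superposition: qubit 0 is "both" 0 and 1
circuit.cx(0, 1)                 # entanglement: qubit 1 now mirrors qubit 0
circuit.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(circuit, backend, shots=1000).result().get_counts()
print(counts)                    # roughly half '00' and half '11', never '01' or '10'
```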

Access to systems such as IBM's newly announced 53-qubit processor (as well as several 20-qubit machines) is just one of the many benefits of UNM's participation in the IBM Q Hub when it comes to data analysis and algorithm development for quantum hardware. Quantum knowledge will only grow with time, and the IBM Q Hub will provide unique training and research opportunities for UNM faculty and student researchers for years to come.

Quantum computer developed by IBM Research in Zurich, Switzerland.

How did this partnership come to be? Two years ago, a sort of call to arms was sent out among UNM quantum experts, saying now was the time for big ideas because federal support for quantum research was gaining traction. Devetsikiotis' vision was to create a quantum ecosystem, one that could unite the foundational quantum research in physics at UNM's Center for Quantum Information and Control (CQuIC) with new quantum computing and engineering initiatives for solving big real-world mathematical problems.

"At first, I thought [quantum] was something for physicists," explains Devetsikiotis. "But I realized it's a great opportunity for the ECE department to develop real engineering solutions to these real-world problems."

CQuIC is the foundation of UNM's long-standing involvement in quantum research, resulting in participation in the National Quantum Initiative (NQI) passed by Congress in 2018 to support multidisciplinary research and training in quantum information science. UNM has been a pioneer in quantum information science since the field emerged 25 years ago, as CQuIC Director Ivan Deutsch knows first-hand.

"This is a very vibrant time in our field, moving from physics to broader activities," says Deutsch, "and [Devetsikiotis] has seen this as a real growth area, connecting engineering with the existing strengths we have in the CQuIC."

With strategic support from the Office of the Vice President for Research, Devetsikiotis secured National Science Foundation funding to support a Quantum Computing & Information Science (QCIS) faculty fellow. The faculty member will join the Department of Electrical and Computer Engineering with the goal to unite well-established quantum research in physics with new quantum education and research initiatives in engineering. This includes membership in CQuIC and implementation of the IBM Q Hub program, as well as a partnership with Los Alamos National Lab for a Quantum Computing Summer School to develop new curricula, educational materials, and mentorship of next-generation quantum computing and information scientists.

IBM Q Hub at North Carolina State University.

As part of the Q Hub at NC State, UNM gains access to IBM's largest quantum computing systems for commercial use cases and fundamental research. It also allows for the restructuring of existing quantum courses to be more hands-on and interdisciplinary than they have been in the past, as well as the creation of new courses, a new master's degree program in QCIS, and a new university-wide Ph.D. concentration in QCIS that can be added to several departments including ECE, Computer Science, Physics and Astronomy, and Chemistry.

"There's been a lot of challenges," Devetsikiotis says, "but there has also been a lot of good timing, and thankfully the University has provided support for us. UNM has solidified our seat at the quantum table and can now bring in the industrial side."

Continued here:
The University of New Mexico Becomes IBM Q Hub's First University Member - UNM Newsroom

Riverlane partner with bio-tech company Astex – Quantaneo, the Quantum Computing Source

Riverlane builds ground-breaking software to unleash the power of quantum computers. Chemistry is a key application in which quantum computing can be of significant value, as high-level quantum chemistry calculations could be performed far faster than with classical methods.

World leaders in drug discovery and development, Astex Pharmaceuticals apply innovative solutions to treat cancer and diseases of the central nervous system. The two companies will join forces to combine their expertise in quantum computing software and quantum chemistry applications to speed up drug development and move us closer to quantum advantage.

As part of the collaboration, Astex are funding a post-doctoral research scientist at Riverlane. They will apply very high levels of quantum theory to study the properties of covalent drugs, in which protein function is blocked by the formation of a specific chemical bond. So far in this field of research, only empirical methods and relatively low levels of quantum theory have been applied. Riverlane will provide access to specialised quantum software to enable simulations of the target drug-protein complexes.

Dave Plant, Principal Research Scientist at Riverlane, said: "This collaboration will produce newly enhanced quantum chemical calculations to drive efficiencies in the drug discovery process. It will hopefully lead to the next generation of quantum-inspired pharmaceutical products."

Chris Murray, SVP of Discovery Technology at Astex said: "We are excited about the prospect of exploring quantum computing in drug discovery applications. It offers the opportunity to deliver much more accurate calculations of the energetics associated with the interaction of drugs with biological molecules, leading to potential improvements in drug discovery productivity."

Link:
Riverlane partner with bio-tech company Astex - Quantaneo, the Quantum Computing Source

Quantum Physicist Invents Code to Achieve the Impossible – Interesting Engineering

A physicist at the University of Sydney has achieved something that many researchers previously thought was impossible. He has developed a type of error-correcting code for quantum computers that will free up more hardware.

His solution also delivers an approach that will allow companies to build better quantum microchips. Dr. Benjamin Brown from the School of Physics achieved this impressive feat by applying a three-dimensional code to a two-dimensional framework.

"The trick is to use time as the third dimension. I'm using two physical dimensions and adding in time as the third dimension," Brown said in a statement. "This opens up possibilities we didn't have before."

"It's a bit like knitting," he added. "Each row is like a one-dimensional line. You knit row after row of wool and, over time, this produces a two-dimensional panel of material."

Quantum computing is rife with errors. As such, one of the biggest obstacles scientists face before they can build machines large enough to solve useful problems is reducing these errors.

"Because quantum information is so fragile, it produces a lot of errors," said Brown.

Getting rid of these errors entirely is impossible. Instead, researchers are seeking to engineer a new error-tolerant system where useful processing operations outweigh error-correcting ones. This is exactly what Brown achieved.

"My approach to suppressing errors is to use a code that operates across the surface of the architecture in two dimensions. The effect of this is to free up a lot of the hardware from error correction and allow it to get on with the useful stuff," Brown explained.

The result is an approach that could change quantum computing forever.

"This result establishes a new option for performing fault-tolerant gates, which has the potential to greatly reduce overhead and bring practical quantum computing closer," saidDr. Naomi Nickerson, Director of Quantum Architecture at PsiQuantum in Palo Alto, California, who is not connected to the research.

Read the original here:
Quantum Physicist Invents Code to Achieve the Impossible - Interesting Engineering

Virtual ICM Seminar: ‘The Promises of the One Health Concept in the Age of Anthropocen’ – HPCwire

May 27, 2020 The Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) at the University of Warsaw invites enthusiasts of HPC and all people interested in challenging topics in Computer and Computational Science to the ICM Seminar in Computer and Computational Science that will be held on May 28, 2020 (16:00 CEST). The event is free.

On May 28, 2020, Dr. Aneta Afelt of the Interdisciplinary Centre for Mathematical and Computational Modelling at the University of Warsaw and Espace-DEV, IRD (Institut de Recherche pour le Développement), will present a lecture titled "The Promises of the One Health Concept in the Age of the Anthropocene."

The lecture will dive into the One Health concept. In May 2019, an article was published, "Anthropocene now: influential panel votes to recognize Earth's new epoch," situating within the stratigraphy of Earth's history a new geological epoch defined by the domination of human influence in shaping the Earth's environment. When humans are the central figure in an ecological niche, the result is a massive subordination and transformation of the environment for their needs. Unfortunately, the outcome of such actions is a plundering of natural resources. The consequence is something socially unexpected: a global epidemiological crisis. The current COVID-19 pandemic is an excellent example. It seems that one of the most important questions of the Anthropocene era is how to maintain stable epidemiological conditions now and in the future. The One Health concept proposes a new paradigm: a deep look at the sources of humanity's well-being, namely humanity's relationship with the environment. Humanity's health status is interdependent with the well-being of the environment. It is clear that disturbance of the socio-ecological niche results in the spread of pathogens. Can sustainable development of socio-ecological niches help? The lecture dives into the results!

To register, visit https://supercomputingfrontiers.eu/2020/tickets/neijis7eekieshee/

ICM Seminars is an extension of the international Supercomputing Frontiers Europe conference, which took place March 23-25th in virtual space.

"The digital edition of SCFE gathered on the order of 1,000 participants. We want to continue this formula of Open Science meetings despite the pandemic and use this forum to present the results of the most current research in the areas of HPC, AI, quantum computing, Big Data, IoT, computer and data networks and many others," says Dr. Marek Michalewicz, chair of the Organising Committee of SCFE2020 and the ICM Seminars in Computer and Computational Science.

Registration for all weekly events is free. The ICM Seminars began with an inaugural lecture on April 1st by Scott Aaronson, David J. Bruton Centennial Professor of Computer Science at the University of Texas. Aaronson led the presentation titled "Quantum Computational Supremacy and Its Applications."

For more information, visit https://supercomputingfrontiers.eu/2020/seminars/

About the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw (UW)

Established by a resolution of the Senate of the University of Warsaw dated 29 June 1993, the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw, is one of the top HPC centres in Poland. ICM is engaged in serving the needs of a large community of computational researchers in Poland through provision of HPC and grid resources, storage, networking and expertise. It has always been an active research centre with high quality research contributions in computer and computational science, numerical weather prediction, visualisation, materials engineering, digital repositories, social network analysis and other areas.

Source: ICM UW

View original post here:
Virtual ICM Seminar: 'The Promises of the One Health Concept in the Age of Anthropocen' - HPCwire

Could this be Elon Musk’s biggest day yet? – Politico

With help from John Hendel and Mark Scott

Editor's Note: Morning Tech is a free version of POLITICO Pro Technology's morning newsletter, which is delivered to our subscribers each morning at 6 a.m. The POLITICO Pro platform combines the news you need with tools you can use to take action on the day's biggest stories. Act on the news with POLITICO Pro.

4:33 p.m.: NASA's launch today of Elon Musk's SpaceX rocket could catapult the astronauts, the Silicon Valley tech entrepreneur, and the country to fame (if it works, that is).

Shareholder talks, commence: Tech employees, civil rights activists and antitrust advocates are using Amazon's and Facebook's annual shareholder meetings today to pressure the giants on issues ranging from the environmental impact of their businesses to their acquisitions of rival companies.

Schumer's (rare) new tech bill: Senate Minority Leader Chuck Schumer plans to introduce bipartisan, bicameral legislation today to give the National Science Foundation an infusion of government cash and provide more money for research into AI, 5G and quantum computing.

GREETINGS, TECHLINGS: IT'S WEDNESDAY. WELCOME TO MORNING TECH! I'm your host, Alexandra Levine.

Calling all China watchers: The trajectory of the U.S.-China relationship will determine whether this century is judged a bright or a dismal one. POLITICO's David Wertime is launching a new China newsletter this week that will be worth the read. Sign up here.

Meanwhile, what's happening in Washington's tech circles? Drop me a line at [emailprotected] or @Ali_Lev. An event for our calendar? Send details to [emailprotected]. Anything else? Full team info below. And don't forget: Add @MorningTech and @PoliticoPro on Twitter.

ON WEDNESDAYS, WE LAUNCH ROCKETS - The week's main event is NASA's launch this afternoon of a 230-foot rocket, outfitted by SpaceX founder Elon Musk, from Cape Canaveral, a historic event that both President Donald Trump and Vice President Mike Pence are expected to attend. If successful, Musk's SpaceX will go down in history as the first private company to carry humans into orbit. Tune in at 4:33 p.m.

NASA's fortunes are tied to Musk's, who has made headlines recently for antics like vowing to sell all his houses, denouncing coronavirus lockdowns as fascist and reopening Tesla's electric-car factory in defiance of California health authorities, POLITICO's Jacqueline Feldscher reports. SpaceX's role is a major departure from the traditional way NASA has sent its astronauts into space during the decades when it funded, owned and operated its own rockets and shuttles. And it comes as other private businesses aim to take humans to the final frontier, including Amazon CEO Jeff Bezos' rocket company, Blue Origin, and Richard Branson's Virgin Galactic.

EYEBALLS WATCHING EMOJI: SILICON VALLEY'S SHAREHOLDER MEETINGS - Facebook's and Amazon's annual shareholder meetings today are already being met with pushback.

For Facebook, as MT scooped Tuesday, that has taken the form of demands that the company be broken up and stop profiting off the pandemic. The Change the Terms coalition, a group of civil and digital rights activists that presses tech companies to crack down on hateful activity online, is meanwhile asking the company to ban white supremacists.

For Amazon, the pushback has taken the form of grass-roots groups like Amazon Employees for Climate Justice calling on the board to respond to their environmental concerns, including over warehouse and delivery fleet emissions that workers say are disproportionately hurting communities of color.

Amazon's logistics network of trucks "spew climate-change-causing greenhouse gases and toxic particles as they drive to and from warehouses that are concentrated near Black, Latinx, and Indigenous communities," the climate group wrote in a blog post mapping out the racial makeup of neighborhoods occupied by Amazon facilities. They claim the giant's infrastructure overwhelmingly pollutes immigrant areas and communities of color, particularly around San Bernardino, Calif., home to some two dozen warehouses, and demand that Amazon enter a so-called Community Benefits Agreement that would require the company to provide "permanent, living wage jobs and health benefits for local residents" and "zero emissions electric delivery trucks to promote clean air," among other asks.

The demands come as Amazon has seen a wave of fresh scrutiny in Washington during the pandemic and after the coronavirus spread to at least 50 warehouses and took the lives of at least eight Amazon warehouse workers.

OMG: If you were wondering how Amazon planned to respond to the discontent ahead of today's meeting, this might really make your jaw drop.

SAY HELLO TO A RARE SCHUMER TECH BILL - A bipartisan, bicameral bill led by Schumer is expected to be introduced today. The Endless Frontiers Act, an uncommon piece of tech legislation from the New York Democrat, proposes a major, renewed federal investment in tech and science research through public-private partnerships and U.S. government funding, investments intended to help in the race ahead with Covid-19 research in the short term, and to help brace for future threats of this magnitude in the long term.

The numbers: The bill would put $100 billion over five years toward the National Science Foundation (which currently has an annual budget of just $8.1 billion) and toward research and innovation across AI, 5G, quantum computing and other areas. It would also notably give the Commerce Department the ability to allocate billions more in funding to 10 to 15 tech hubs around the country, amplifying similar calls to create regional tech hubs by Facebook CEO Mark Zuckerberg and Rep. Ro Khanna (D-Calif.), who is among the co-sponsors of the bill.

Schumer first announced the bill in a recent USA Today op-ed with co-sponsors Khanna, Sen. Todd Young (R-Ind.) and Rep. Mike Gallagher (R-Wis.), highlighting the dangers of our decades-long underinvestment in the infrastructure that would help prevent, respond to and recover from an emergency of this scale, namely scientific and technological discovery. They also stress the need to keep up as China gains ground, outpacing the United States by investing in technological innovations essential to Americans' future safety and prosperity. The group is looking to package the proposal into the upcoming NDAA, according to a senior Senate aide familiar with the group's efforts.

THE NEXT 5G AIRWAVES FRONTIER? A mix of wireless industry trade groups and think tanks is nudging the FCC to issue an item ASAP to make the 12 GHz band airwaves more available for 5G use. "The current technical rules for 12.2-12.7 GHz are obsolete and burdensome, preventing use of this spectrum for 5G wireless services," wrote the Competitive Carriers Association, Incompas, Open Technology Institute, Computer & Communications Industry Association and Public Knowledge.

One likely (and unmentioned) beneficiary: DISH Network, a satellite TV company affiliated with some of the signatories and currently on the hook for building out a 5G wireless network as part of the federal government's T-Mobile-Sprint merger approval. DISH holds much of this spectrum and, despite some industry pushback from players such as OneWeb, is adamant that the commission should act.

THE CASE AGAINST EUROPE'S DIGITAL SERVICES TAX - The business-friendly Tax Foundation crunched the numbers to see whether digital taxes affecting major Silicon Valley companies operating in Europe are legal under international tax, trade and European law (mostly because current DSTs come from EU governments). The answer? Probably not.

In its analysis, the group looks at how current levies from the likes of France or Italy represent potential discrimination under existing trade law (like the World Trade Organization's General Agreement on Trade in Services), as well as under international tax rules if the digital taxes breach existing bilateral agreements between countries (say France and Ireland, where several of Silicon Valley's biggest companies, including Apple and Facebook, have a major presence outside the U.S.).

As for existing EU rules? Countries' digital taxes may run afoul of the 27-country bloc's fundamental freedoms, though such a fight would likely wind its way to Europe's highest court and take years to conclude.

Alison Watkins, a privacy litigator who has counseled clients on compliance with the California Consumer Privacy Act and Europe's GDPR, has joined Perkins Coie as a partner in the firm's litigation and privacy and security practices in the Palo Alto office. ... Jack Westerlund, a director of sales at Microsoft, is now director of sales at Microsoft partner RapidDeploy, an Austin-based software company working to reduce response time for first responders.

(More) gig grumblings: Uber and Lyft drivers in New York, where two rulings have deemed gig workers employees eligible for the state's unemployment insurance, are now suing over allegations that they have not been paid unemployment benefits in a timely manner, NYT reports.

Turning the other cheek: As Facebook did some soul-searching to study how the platform shapes user behavior, executives were warned that "our algorithms exploit the human brain's attraction to divisiveness," WSJ reports, but ultimately, "Mr. Zuckerberg and other senior executives largely shelved the basic research ... and weakened or blocked efforts to apply its conclusions to Facebook products."

Land of layoffs: Many leading Silicon Valley firms are feeling the layoff pains most outside the Bay Area, The Information reports.

It's good to be Google: In Sundar Pichai's latest update on working from home, the Google CEO said that his employees, who will be largely working from home for the rest of this year, would receive a $1,000 allowance to go toward work equipment and office furniture.

ICYMI: "Twitter took a small stand against a pair of unsubstantiated President Donald Trump tweets about voting fraud on Tuesday by adding fact-check warnings," Cristiano reports, "but the move was unlikely to stem the onslaught of criticism the company is facing about tweets it hasn't acted on, including those peddling conspiracy theories about a deceased congressional staffer."

Podcast OTD: The latest episode of FCC Commissioner Jessica Rosenworcel's Broadband Conversations podcast features Julie Samuels, executive director of Tech:NYC. Listen through Google Podcasts, GooglePlay, iTunes or the FCC.

Opinion: Samuels spells out in the Daily News how tech jobs and investment will be a key component of New York's post-pandemic economic recovery across all five boroughs.

Tips, comments, suggestions? Send them along via email to our team: Bob King ([emailprotected], @bkingdc), Heidi Vogt ([emailprotected], @HeidiVogt), Nancy Scola ([emailprotected], @nancyscola), Steven Overly ([emailprotected], @stevenoverly), John Hendel ([emailprotected], @JohnHendel), Cristiano Lima ([emailprotected], @viaCristiano), Alexandra S. Levine ([emailprotected], @Ali_Lev), and Leah Nylen ([emailprotected], @leah_nylen).

TTYL and go wash your hands.

Go here to read the rest:
Could this be Elon Musk's biggest day yet? - Politico

IIT Mumbai alumnus Rajiv Joshi, an IBM scientist, bags inventor of the year award – Livemint

Indian-American inventor Rajiv Joshi has bagged the prestigious Inventor of the Year award in recognition of his pioneering work in advancing the electronic industry and improving artificial intelligence capabilities.

Dr Joshi has more than 250 patented inventions in the US and works at the IBM Thomas J. Watson Research Center in New York.

He was presented with the prestigious annual award by the New York Intellectual Property Law Association early this month during a virtual awards ceremony.

An IIT Mumbai alumnus, Joshi has an MS degree from the Massachusetts Institute of Technology (MIT) and a PhD in mechanical/electrical engineering from Columbia University, New York.

His inventions span novel interconnect structures and processes for further scaling; machine learning techniques for predictive failure analytics; and high-bandwidth, high-performance, low-power integrated circuits and memories, along with their use in hardware accelerators for artificial intelligence applications.

Many of these structures exist in processors, supercomputers, laptops, smartphones, handheld and wearable gadgets and many other electronic items. His innovations have helped advance day-to-day life, global communication, health sciences and medical fields.

"Necessity and curiosity inspire me," Dr Joshi told PTI in a recent interview, adding that identifying a problem and providing an out-of-the-box solution, as well as observing and thinking, help him immensely to generate ideas.

Joshi said that stories about great, renowned inventors like Guglielmo Marconi, Marie Curie, the Wright Brothers, James Watt, Alexander Graham Bell and Thomas Edison inspired him.

In his acceptance speech, Dr Joshi said that cloud, artificial intelligence and quantum computing not only remain the buzzwords, but their utility and widespread usage are advancing by leaps and bounds.

"All these areas are very exciting and I have been dabbling further in Artificial Intelligence (AI) and quantum computing," he said.

Quantum computing, which has offered tremendous opportunities, also faces challenges, he noted, adding that he is involved in advancing technology, improving memory structures and solutions and their usage in AI and contributing to quantum computing to advance the science. (With Agency Inputs)

Continue reading here:
IIT Mumbai alumnus Rajiv Joshi, an IBM scientist, bags inventor of the year award - Livemint

Global Quantum Computing Technologies Market Size and Forecast to 2026: Industry Analysis by Types, Top Vendors, Regions, Demand & Outlook 2020 -…

A research report on the Global Quantum Computing Technologies Market delivers a complete analysis of the market's size, trends, share, and growth prospects, along with market volume estimates. The report assesses the market growth rate and industry value based on driving factors, market dynamics, and other associated data, and the information provided is integrated from current trends, the latest industry news, and emerging opportunities. It also compiles competitor data for this market, covering the regions where the global Quantum Computing Technologies industry has gained a foothold. The report delivers a broad assessment of the Quantum Computing Technologies market and is prepared with detailed, verifiable projections and historical data about the market's size.

Request a sample of this report @ https://www.orbisresearch.com/contacts/request-sample/4571360

Moreover, the report also includes a full market analysis and supplier landscape with the help of PESTEL and SWOT analysis of the leading service providers. In addition, the projections offered in this report have been derived with the help of proven research assumptions as well as methodologies. By doing so, the Quantum Computing Technologies research study offers collection of information and analysis for each facet of the Quantum Computing Technologies industry such as technology, regional markets, applications, and types. The report has been made through the primary research interviews, complete surveys, as well as observations, and secondary research. Likewise, the Quantum Computing Technologies market report delivers major illustrations and presentations about the market which integrates graphs, pie charts, and charts and offers the precise percentage of the different strategies implemented by the major providers in the global Quantum Computing Technologies market. This report delivers a separate analysis of the foremost trends in the accessible market, regulations and mandates, micro & macroeconomic indicators are also included in this report.

Top Players:

Airbus Group, Cambridge Quantum Computing, IBM, Google Quantum AI Lab, Microsoft Quantum Architectures, Nokia Bell Labs, Alibaba Group Holding Limited, Intel Corporation, Toshiba

Browse the complete report @ https://www.orbisresearch.com/reports/index/global-quantum-computing-technologies-market-size-status-and-forecast-2020-2026

By doing so, the study forecasts the attractiveness of each major segment over the prediction period. The global Quantum Computing Technologies market study extensively features a complete quantitative and qualitative evaluation by studying data collected from various market experts and industry participants in the market value chain. The report also integrates the various market conditions around the globe such as pricing structure, product profit, demand, supply, production, capacity, as well as market growth structure. In addition, this study provides important data about investment returns, SWOT analysis, and investment feasibility analysis.

Types:

Software, Hardware

Applications:

Government, Business, High-Tech, Banking & Securities, Manufacturing & Logistics, Insurance, Other

In addition, a number of business tactics help Quantum Computing Technologies market players compete with other players in the market while recognizing significant growth prospects. The report includes information on market segmentation derived through primary and secondary research techniques, and offers an analysis of the current trends that are expected to become some of the strongest market forces in the coming years. It also provides an extensive analysis of the restraints hampering market growth, along with a comprehensive description of each aspect and its influence on the Quantum Computing Technologies market.

Make an enquiry before buying this report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/4571360

About Us :

Orbis Research (orbisresearch.com) is a single point aid for all your market research requirements. We have a vast database of reports from the leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients to map their needs and we produce the perfect required market research study for our clients.

Contact Us :

Visit link:
Global Quantum Computing Technologies Market Size and Forecast to 2026: Industry Analysis by Types, Top Vendors, Regions, Demand & Outlook 2020 -...

Machine Learning: What Is It Really Good For? – Forbes

Machine learning is definitely a confusing term. Is it AI or something different?

Well, it's actually a subset of AI (which, by the way, is a massive category). "Machine learning is a method of analyzing data using an analytical model that is built automatically, or learned, from training data," said Rick Negrin, VP of Product Management at MemSQL. "The idea is that the model gets better as you feed it more data points."

There are two key steps with machine learning. First, you need to collect the data and train the model, which can be a long and tough process. Then, you operationalize the machine learning, such as by using it to help provide insights or as part of a product. There are a myriad of tools to help with the process, such as open source platforms like TensorFlow and commercial systems such as DataRobot.
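As a rough illustration of those two steps, here is a minimal sketch using TensorFlow's Keras API, one of the open source tools named above; the data is synthetic and the tiny model is purely illustrative, not a recommended architecture.

```python
import numpy as np
import tensorflow as tf

# Step 1: collect data and train. Here the "data" is 1,000 synthetic rows with 10 features.
X = np.random.rand(1000, 10)
y = (X.sum(axis=1) > 5).astype(int)            # an invented binary label

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)           # the model improves as it sees more data

# Step 2: operationalize. In production this usually means serving the model behind an
# API or embedding it in a product; here we simply score one new data point.
print(model.predict(np.random.rand(1, 10)))
```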

"Successful machine learning is only as good as the data available, which is why it needs new, updated data to provide the most accurate outputs or predictions for any given need," said Panagiotis Angelopoulos, Chief Data Officer at Persado. "And unlike what any one person can analyze, machine learning can take vast amounts of data over time and make predictions to improve the customer experience and provide real value to the end-user."

Sometimes the models are so intricate that they are nearly impossible to understand. The lack of transparency can make it so that certain industries, like healthcare and banking, may not be able to use machine learning models. Because of this, more research is being focused on the explainability of models.

Another challenge with machine learning is the need to form an experienced team. "To build this team in-house, you will have to hire more than just data scientists," said Ji Li, director of data science at CLARA analytics. "Full deployment of a new solution requires product managers, software engineers, data engineers, operational experts to develop process and operational workflows, staff to integrate data models into operations, people to manage onboarding and training of the employees who will ultimately use the solution, and staff who can quantify value generation."

In other words, for many organizations, the best option with machine learning may be to buy an off-the-shelf solution. The good news is that there are many on the market, and they are generally affordable.

But regardless of what path you take, there needs to be a clear-cut business case for machine learning. It should not be used just because it is trendy. There also needs to be sufficient change management within the organization. "One of the greatest challenges in implementing machine learning and other data science initiatives is navigating institutional change: getting buy-in, dealing with new processes, the changing job duties, and more," said Ingo Mierswa, founder and CTO of RapidMiner.

Then what are the use cases for machine learning? According to Alyssa Simpson Rochwerger, VP of AI and Data Evangelist at Appen: "Machine learning can solve lots of different types of problems. But it's particularly well suited to decisions that require very simple and repetitive tasks at large scale. For example, the US Postal Service has been successfully using machine learning systems to sort the mail for decades. The task was simple: read the address on the mail (sense), then understand the zip code (perceive), and then sort into different buckets (decide). The US Postal Service processes almost two hundred million pieces of mail per day, so sorting this by hand wouldn't work."
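To make that sense/perceive/decide loop concrete, here is a hedged sketch that trains a classifier on scikit-learn's bundled handwritten-digit dataset; it is only an analogy to the mail-sorting task, not the Postal Service's actual system.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                          # 8x8 images of handwritten digits, labeled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

clf = LogisticRegression(max_iter=5000)         # "sense" the pixels, "decide" the digit
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```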

In fact, the examples are seemingly endless for machine learning. Here are just a few:

"Machine learning is a tool and like most tools, it works best when used properly," said Matei Zaharia, chief technologist and co-founder of Databricks. "Machine learning can take something as simple as some images and some annotations or just drawings on those images and create a solution that can be automated efficiently and at scale. However, we are not in a technological state where a machine learning model can just work on anything that is thrown at it, that is, not without some kind of external guidance. A machine learns, a human teaches."

Tom (@ttaulli) is an advisor to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems. He also has developed various online courses, such as for the Python programming language.

Read the original here:
Machine Learning: What Is It Really Good For? - Forbes

What is machine learning? | MIT Technology Review

Machine-learning algorithms are responsible for the vast majority of the artificial intelligence advancements and applications you hear about. (For more background, check out our first flowchart on "What is AI?" here.)

Machine-learning algorithms use statistics to find patterns in massive* amounts of data. And data, here, encompasses a lot of things: numbers, words, images, clicks, what have you. If it can be digitally stored, it can be fed into a machine-learning algorithm.

Machine learning is the process that powers many of the services we use today: recommendation systems like those on Netflix, YouTube, and Spotify; search engines like Google and Baidu; social-media feeds like Facebook and Twitter; voice assistants like Siri and Alexa. The list goes on.

In all of these instances, each platform is collecting as much data about you as possible (what genres you like watching, what links you are clicking, which statuses you are reacting to) and using machine learning to make a highly educated guess about what you might want next. Or, in the case of a voice assistant, about which words match best with the funny sounds coming out of your mouth.

Frankly, this process is quite basic: find the pattern, apply the pattern. But it pretty much runs the world. That's in big part thanks to an invention in 1986, courtesy of Geoffrey Hinton, today known as the father of deep learning.

Deep learning is machine learning on steroids: it uses a technique that gives machines an enhanced ability to find, and amplify, even the smallest patterns. This technique is called a deep neural network: deep because it has many, many layers of simple computational nodes that work together to munch through data and deliver a final result in the form of a prediction.

Neural networks were vaguely inspired by the inner workings of the human brain. The nodes are sort of like neurons, and the network is sort of like the brain itself. (For the researchers among you who are cringing at this comparison: Stop pooh-poohing the analogy. It's a good analogy.) But Hinton published his breakthrough paper at a time when neural nets had fallen out of fashion. No one really knew how to train them, so they weren't producing good results. It took nearly 30 years for the technique to make a comeback. And boy, did it make a comeback.

One last thing you need to know: machine (and deep) learning comes in three flavors: supervised, unsupervised, and reinforcement. In supervised learning, the most prevalent, the data is labeled to tell the machine exactly what patterns it should look for. Think of it as something like a sniffer dog that will hunt down targets once it knows the scent it's after. That's what you're doing when you press play on a Netflix show: you're telling the algorithm to find similar shows.

In unsupervised learning, the data has no labels. The machine just looks for whatever patterns it can find. This is like letting a dog smell tons of different objects and sorting them into groups with similar smells. Unsupervised techniques aren't as popular because they have less obvious applications. Interestingly, they have gained traction in cybersecurity.
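A minimal sketch of the unsupervised idea, assuming scikit-learn is available: k-means is handed unlabeled points and left to discover the groups on its own.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two blobs of unlabeled 2-D points; the algorithm is never told which blob is which.
points = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)            # each point assigned to one of the two discovered groups
```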

Lastly, we have reinforcement learning, the latest frontier of machine learning. A reinforcement algorithm learns by trial and error to achieve a clear objective. It tries out lots of different things and is rewarded or penalized depending on whether its behaviors help or hinder it from reaching its objective. This is like giving and withholding treats when teaching a dog a new trick. Reinforcement learning is the basis of Google's AlphaGo, the program that famously beat the best human players in the complex game of Go.
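The trial-and-error loop can be sketched in a few lines of plain Python and NumPy; the toy one-dimensional world below is an invented example (not the approach behind AlphaGo), where the agent learns to walk right toward a "treat" in the last cell.

```python
import numpy as np

n_states, n_actions = 5, 2                 # a 1-D world of 5 cells; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration rate

for episode in range(200):
    s = 0                                  # start at the left end
    while s != 4:                          # the "treat" sits in cell 4
        explore = np.random.rand() < epsilon or not Q[s].any()
        a = np.random.randint(n_actions) if explore else int(Q[s].argmax())
        s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else 0.0    # reward only when the goal is reached
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))                    # learned policy: action 1 ("go right") in every non-terminal cell
```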

That's it. That's machine learning. Now check out the flowchart above for a final recap.

*Note: Okay, there are technically ways to perform machine learning on smallish amounts of data, but you typically need huge piles of it to achieve good results.

___

This originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your inbox, subscribe here for free.

More here:
What is machine learning? | MIT Technology Review

Machine learning techniques applied to crack CAPTCHAs – The Daily Swig

A newly released tool makes light work of solving human verification challenges

F-Secure says it's achieved 90% accuracy in cracking Microsoft Outlook's text-based CAPTCHAs using its AI-based CAPTCHA-cracking server, CAPTCHA22.

For the last two years, the security firm has been using machine learning techniques to train unique models that solve a particular CAPTCHA, rather than trying to build a one-size-fits-all model.

And, recently, it decided to try the system out on a CAPTCHA used by an Outlook Web App (OWA) portal.

The initial attempt, according to F-Secure, was comparatively unsuccessful, with the team finding that after manually labelling around 200 CAPTCHAs, it could only identify the characters with an accuracy of 22%.

The first issue to emerge was noise, with the team determining that the greyscale value of noise and text was always within two distinct and constant ranges. Tweaks to the tool helped filter out the noise.
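A hedged sketch of that greyscale-threshold idea follows, using Pillow and NumPy rather than F-Secure's actual tooling; the cutoff values are invented placeholders that would have to be measured from real CAPTCHA samples.

```python
import numpy as np
from PIL import Image

img = np.array(Image.open("captcha.png").convert("L"))    # greyscale pixels, values 0-255

TEXT_RANGE = (0, 90)     # assumed band of greyscale values occupied by the characters;
                         # the real bands would be measured from sample CAPTCHAs

# Keep pixels in the text band, blank out everything else (noise and background).
cleaned = np.where((img >= TEXT_RANGE[0]) & (img <= TEXT_RANGE[1]), 0, 255)
Image.fromarray(cleaned.astype(np.uint8)).save("captcha_clean.png")
```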

The team also realized that some of the test CAPTCHAs had been labelled incorrectly, with confusion between, for example, l and I (lower case L and upper case i). Fixing this shortcoming brought the accuracy up to 47%.

More challenging, though, was handling the CAPTCHA submission to Outlook's web portal.

There was no CAPTCHA POST request, with the CAPTCHA instead sent as a value appended to a cookie. JavaScript was used to keylog the user as the answer to the CAPTCHA was typed.

"Instead of trying to replicate what occurred in JS, we decided to use Pyppeteer, a browsing simulation Python package, to simulate a user entering the CAPTCHA," said Tinus Green, a senior information security consultant at F-Secure.

"Doing this, the JS would automatically take care of the submission for us."

Green added: "We could use this simulation software to solve the CAPTCHA whenever it blocked entries and once solved, we could continue with our conventional attack, hence automating the process once again."

"We have now also refactored CAPTCHA22 for a public release."
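A minimal sketch of the Pyppeteer approach described above: the browser is driven so that the page's own JavaScript handles the submission. The URL, CSS selectors and the solved CAPTCHA string are hypothetical placeholders, not the OWA portal's real markup or CAPTCHA22 output.

```python
import asyncio
from pyppeteer import launch

async def submit_captcha(url, answer):
    browser = await launch(headless=True)
    page = await browser.newPage()
    await page.goto(url)
    # Typing character by character lets the page's keylogging JS fire as it would for a human.
    await page.type("#captcha-input", answer, {"delay": 50})
    await page.click("#submit-button")
    await page.waitForNavigation()
    await browser.close()

asyncio.get_event_loop().run_until_complete(
    submit_captcha("https://owa.example.com/login", "AB3KX9"))
```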

CAPTCHAs are challenge-response tests used by many websites in an attempt to distinguish between genuine requests by a human user to sign up for or access web services and automated requests by bots.

Spammers, for example, attempt to circumvent CAPTCHAs in order to create accounts they can later abuse to distribute junk mail.

CAPTCHAs are something of a magnet for cybercriminals and security researchers, with web admins struggling to stay one step ahead.

Late last year, for example, PortSwigger Web Security uncovered a security weakness in Google's reCAPTCHA that allowed it to be partially bypassed by using Turbo Intruder, a research-focused Burp Suite extension, to trigger a race condition.

Soon after, a team of academics from the University of Maryland was able to circumvent Google's reCAPTCHA v2 anti-bot mechanism using a Python-based program called UnCaptcha, which could solve its audio challenges.

Green said: "There is a catch-22 between creating a CAPTCHA that is user friendly ('grandma safe' as we call it) and sufficiently complex to prevent solving through computers. At this point it seems as if the balance does not exist."

Web admins shouldn't, he says, give away half the required information through username enumeration, and users should be required to set strong passphrases conforming to NIST standards.

And, he adds: "Accept that accounts can be breached, and therefore implement MFA [multi-factor authentication] as an additional barrier."

RELATED New tool highlights shortcomings in reCAPTCHA's anti-bot engine

Read the original post:
Machine learning techniques applied to crack CAPTCHAs - The Daily Swig

cnvrg.io Releases New Streaming Endpoints With One-click Deployment for Real-time Machine Learning Applications – PRNewswire

TEL AVIV, Israel, May 26, 2020 /PRNewswire/ -- cnvrg.io, the data science platform simplifying model management and introducing advanced MLOps to the industry, today announced its streaming endpoints solution, a new capability for deploying ML models to production with Apache Kafka in one click. cnvrg.io is the first ML platform to enable one-click streaming endpoint deployment for large-scale and real-time predictions with high throughput and low latency.

85% of machine learning models don't get to production due to the technical complexity of deploying the model in the right environment and architecture. Models can be deployed in a variety of ways: batch deployment for offline inference, and web services for more real-time scenarios. These two approaches cover most ML use cases, but they both fall short in an enterprise setting when you need to scale and stream millions of predictions in real time. Enterprises require fast, scalable predictions to execute critical and time-sensitive business decisions.

cnvrg.io is thrilled to announce its new capability of deploying ML models to production with a streaming architecture of producer/consumer interfaces, with native integration to Apache Kafka and AWS Kinesis. In just one click, data scientists and engineers can deploy any kind of model as an endpoint that can receive data as a stream and output predictions as a stream.

Deployed models will be tracked with advanced model management and model monitoring solutions including alerts, retraining, A/B testing and canary rollout, autoscaling and more.

This new capability allows engineers to support and predict millions of samples in a real-time environment. This architecture is ideal for time sensitive or event-based predictions, recommender systems, and large-scale applications that require high throughput, low latency and fault tolerant environments.
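As an illustration of the producer/consumer pattern described here (not cnvrg.io's actual API), the sketch below uses the kafka-python client to read feature records from one topic, score them with a pre-trained model, and stream the predictions to another topic; the topic names, broker address and model path are assumptions.

```python
import json
import joblib
from kafka import KafkaConsumer, KafkaProducer

model = joblib.load("model.pkl")                   # any pre-trained estimator; path is illustrative

consumer = KafkaConsumer(
    "features-in",                                 # assumed input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")))
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda d: json.dumps(d).encode("utf-8"))

for message in consumer:                           # an endless stream of incoming samples
    record = message.value
    prediction = model.predict([record["features"]])[0]
    producer.send("predictions-out",
                  {"id": record["id"], "prediction": float(prediction)})
```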

"Playtika has 10 million daily active users (DAU), 10 billion daily events and over 9TB of daily processed data for our online games. To provide our players with a personalized experience, we need to ensure our models run at peak performance at all times," says Avi Gabay, Director of Architecture at Playtika. "With cnvrg.io we were able to increase our model throughput by up to 50% and on average by 30% when comparing to RESTful APIs. cnvrg.io also allows us to monitor our models in production, set alerts and retrain with high-level automation ML pipelines."

The new cnvrg.io release extends the market footprint and enhances the prior announcements of NVIDIA DGX-Ready partnership and Red Hat unified control plane.

About cnvrg.io

cnvrg.io is an AI OS, transforming the way enterprises manage, scale and accelerate AI and data science development from research to production. The code-first platform is built by data scientists, for data scientists and offers unrivaled flexibility to run on-premise or cloud.

Logo - https://mma.prnewswire.com/media/1160338/cnvrg_io_Logo.jpg

SOURCE cnvrg.io

Full Stack Machine Learning Operating System

Go here to see the original:
cnvrg.io Releases New Streaming Endpoints With One-click Deployment for Real-time Machine Learning Applications - PRNewswire

Machine learning helps Invisalign patients find their perfect smile – CIO

The mobile computing trend requires enterprises to meet consumers' expectations for accessing information and completing tasks from a smartphone. But there's a converse to that arrangement: Mobile has also become the go-to digital platform companies use to market their goods and services.

Align Technology, which offers the Invisalign orthodontic device to straighten teeth, is embracing the trend with a mobile platform that both helps patients coordinate care with their doctors and entices new customers. The My Invisalign app includes detailed content on how the Invisalign system works, as well as machine learning (ML) technology to simulate what wearers' smiles will look like after using the medical device.

"It's a natural extension to help doctors and patients stay in touch," says Align Technology Chief Digital Officer Sreelakshmi Kolli, who joined the company as a software engineer in 2003 and has spent the past few years digitizing the customer experience and business operations. The development of My Invisalign also served as a pivot point for Kolli to migrate the company to agile and DevSecOps practices.

My Invisalign is a digital on-ramp for a company that has relied on pitches from enthusiastic dentists and pleased patients to help Invisalign find a home in the mouths of more than 8 million customers. An alternative to clunky metal braces, Invisalign comprises sheer plastic aligners that straighten patients' teeth gradually over several months. Invisalign patients swear by the device, but many consumers remain on the fence about a device with a $3,000 to $5,000 price range that is rarely covered completely by insurance.

Visit link:
Machine learning helps Invisalign patients find their perfect smile - CIO

Growing Adoption of AI and Machine Learning and Increased Use of Drones is Driving Growth in the Global Mining Ventilation Systems Market -…

DUBLIN--(BUSINESS WIRE)--The "Global Mining Ventilation Systems Market 2020-2024" report has been added to ResearchAndMarkets.com's offering.

The mining ventilation systems market is poised to grow by $81.73 million during 2020-2024, progressing at a CAGR of 4% during the forecast period. This report on the mining ventilation systems market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as key vendor analysis.

The market is driven by the growing demand for safety in underground mining and demand for minerals. In addition, increasing demand for precious metals is anticipated to boost the growth of the market as well. This study identifies technological advances as one of the prime reasons driving the mining ventilation systems market growth during the next few years. Also, the growing adoption of AI and machine learning and increasing use of drones will lead to sizable demand in the market.

The mining ventilation systems market analysis includes product segment and geographic landscapes

The mining ventilation systems market covers the following areas:

This robust vendor analysis is designed to help clients improve their market position, and in line with this, this report provides a detailed analysis of several leading mining ventilation systems market vendors that include ABB Ltd., ABC Canada Technology Group Ltd., ABC Industries Inc., Epiroc AB, Howden Group Ltd., New York Blower Co., Sibenergomash-BKZ LLC, Stantec Inc., TLT-Turbo GmbH, and Zitron SA. Also, the mining ventilation systems market analysis report includes information on upcoming trends and challenges that will influence market growth. This is to help companies strategize and leverage on all forthcoming growth opportunities.

The study was conducted using an objective combination of primary and secondary information including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.

This study presents a detailed picture of the market by the way of study, synthesis, and summation of data from multiple sources by an analysis of key parameters such as profit, pricing, competition, and promotions. It presents various market facets by identifying the key industry influencers. The data presented is comprehensive, reliable, and a result of extensive research - both primary and secondary.

The market research report provides a complete competitive landscape and an in-depth vendor selection methodology and analysis, using qualitative and quantitative research to forecast accurate market growth.

Key Topics Covered:

Executive Summary

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation by Product

Customer Landscape

Geographic Landscape

Vendor Landscape

Vendor Analysis

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/yl3xpg

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Read more from the original source:
Growing Adoption of AI and Machine Learning and Increased Use of Drones is Driving Growth in the Global Mining Ventilation Systems Market -...

Advanced Analytics and Machine Learning Boost Bee Populations – Transmission & Distribution World

As part of its commitment to using data and analytics to solve the world's most pressing problems, SAS' recent work includes helping to save the world's No. 1 food crop pollinator: the honey bee. With the number of bee colonies drastically declining around the world, SAS is using technology such as the Internet of Things (IoT), machine learning and visual analytics to help maintain and support healthy bee populations.

In honor of World Bee Day, SAS is highlighting three separate projects where technology is monitoring, tracking and improving pollinator populations around the globe. First, researchers at SAS have developed a noninvasive way to monitor real-time conditions of beehives through auditory data and machine learning algorithms. SAS is also working with Appalachian State University on the World Bee Count to visualize world bee population data and understand the best ways to save them. Lastly, recent SAS Viya Hackathon winners decoded bee communication through machine learning in order to maximize bees' food access and boost human food supplies.

"SAS has always looked for ways to use technology for a better world," said Oliver Schabenberger, COO and CTO of SAS. "By applying advanced analytics and artificial intelligence to beehive health, we have a better shot as a society to secure this critically important part of our ecosystem and, ultimately, our food supply."

Noninvasively Monitoring Beehive Health

Researchers from the SAS IoT Division are developing a bioacoustic monitoring system to noninvasively track real-time conditions of beehives using digital signal processing tools and machine learning algorithms available in SAS Event Stream Processing and SAS Viya software. This system helps beekeepers better understand and predict hive problems that could lead to colony failure, including the emergence of new queens, something they would not ordinarily be able to detect.

Annual loss rates of U.S. beehives exceed 40%, and between 25% and 40% of these losses are due to queen failure. Acoustic analysis can alert beekeepers to queen disappearances immediately, which is vitally important to significantly reducing colony loss rates. With this system, beekeepers will have a deeper understanding of their hives without having to conduct time-consuming and disruptive manual inspections.

"As a beekeeper myself, I know the magnitude of bees' impact on our ecosystem, and I'm inspired to find innovative ways to raise healthier bees to benefit us all," saidAnya McGuirk, Distinguished Research Statistician Developer in the IoT division at SAS. "And as a SAS employee, I'm proud to have conducted this experiment with SAS software at our very own campus beehives, demonstrating both the power of our analytical capabilities and our commitment to innovation and sustainability."

By connecting sensors to SAS' four Bee Downtown hives at its headquarters in Cary, NC, the team started streaming hive data directly to the cloud to continuously measure data points in and around the hive, including weight, temperature, humidity, flight activity and acoustics. In-stream machine learning models were used to "listen" to the hive sounds, which can indicate health, stress levels, swarming activities and the status of the queen bee. To ensure only the hum of the hive was being used to determine bees' health and happiness, researchers used robust principal component analysis (RPCA), a machine learning technique, to separate extraneous or irrelevant noises from the inventory of sounds collected by hive microphones.

The researchers found that with RPCA capabilities, they could detect worker bees "piping" at the same frequency range at which a virgin queen pipes after a swarm, likely to assess whether a queen was present. The researchers then designed an automated pipeline to detect either queen piping following a swarm or worker piping that occurs when the colony is queenless. This is greatly beneficial to beekeepers, warning them that a new queen may be emerging and giving them the opportunity to intervene before significant loss occurs.
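
SAS has not published its implementation, but the decomposition RPCA performs can be sketched generically in Python: a spectrogram matrix of hive audio is split into a low-rank part (the steady hive hum) plus a sparse part (transient sounds such as piping). The following minimal principal component pursuit sketch, with illustrative default parameters, shows the idea; it is not the SAS pipeline.

import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Split a (frequency x time) spectrogram M into low-rank L + sparse S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    norm_M = np.linalg.norm(M)
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: elementwise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual update and convergence check
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) / norm_M < tol:
            break
    return L, S

# L keeps the steady hum; S isolates transient events (e.g. piping) that a
# downstream detector can flag for the beekeeper.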

The researchers plan to implement the acoustic streaming system very soon and are continuing to look for ways to broaden the usage of technology to help honey bees and ultimately humankind.

Visualizing the World's Pollinator Populations

On World Bee Day, SAS is launching a data visualization that maps out bees "counted" around the globe for the World Bee Count, an initiative co-founded by the Center for Analytics Research and Education (CARE) at Appalachian State University. The goal of the World Bee Count is to engage citizens across the world to take pictures of bees as a first step toward understanding the reasons for their alarming decline.

"The World Bee Count allows us to crowdsource bee data to both visualize our planet's bee population and create one of the largest, most informative data sets about bees to date," saidJoseph Cazier, Professor and Executive Director atAppalachian State University'sCARE. "SAS' data visualization will show the crowdsourced location of bees and other pollinators. In a later phase of the project, researchers can overlay key data points like crop yield, precipitation and other contributing factors of bee health, gathering a more comprehensive understanding of our world's pollinators." Bayer has agreed to help sponsor CARE to allow its students and faculty to perform research on the World Bee Count data and other digital pollinator data sources.

In early May, the World Bee Count app was launched for users, both beekeepers and the general public (aka "citizen data scientists"), to add data points to the Global Pollinator Map. Within the app, beekeepers can enter the number of hives they have, and any user can submit pictures of pollinators from their camera roll or through the in-app camera. Through SAS Visual Analytics, SAS has created a visualization map to display the images users submit via the app. In addition to showing the results of the project, the visualizations can potentially provide insights about the conditions that lead to the healthiest bee populations.

In future stages of this project, the robust data set created from the app could help groups like universities and research institutes better strategize ways to save these vital creatures.

Using Machine Learning to Maximize Bees' Access to Food

Representing the Nordic region, a team from Amesto NextBridge won the 2020 SAS EMEA Hackathon, which challenged participants to improve sustainability using SAS Viya. Their winning project used machine learning to maximize bees' access to food, which would in turn benefit mankind's food supply. In partnership with Beefutures, the team accomplished this by developing a system capable of automatically detecting, decoding and mapping bee "waggle" dances using Beefutures' observation hives and SAS Viya.

Bees are responsible for pollinating nearly 75% of all plant species directly used for human food, but the number of bee colonies is declining, which could lead to a devastating loss for the human food supply. A main reason for the decline of bee populations is a lack of access to food due to an increase in monoculture farming. When bees do find a good food source, they come back to the hive to communicate its exact location through a "waggle dance." By observing these dances, beekeepers can better understand where their bees are getting food and then consider establishing new hives in those locations to help maintain strong colonies.

"Observing all of these dances manually is virtually impossible, but by using video footage from inside the hives and training machine learning algorithms to decode the dance, we will be able to better understand where bees are finding food," said Kjetil Kalager, lead of the Amesto NextBridge and Beefutures team. "We implemented this information, along with hive coordinates, sun angle, time of day and agriculture around the hives into an interactive map in SAS Viya and then beekeepers can easily decode this hive information and relocate to better suited environments if necessary."

This systematic real-time monitoring of waggle dances allows bees to act as sensors for their ecosystems. Further research using this technology may uncover other information bees communicate through dance that could help us save and protect their population, which ultimately benefits us all.

See this waggle dance project in action and learn about how SAS is committed to corporate social responsibility.

Read more from the original source:
Advanced Analytics and Machine Learning Boost Bee Populations - Transmission & Distribution World

How Machine Learning in Search Works: Everything You Need to Know – Search Engine Journal

In the world of SEO, it's important to understand the system you're optimizing for.

You need to understand how:

Another crucial area to understand is machine learning.

Now, the term machine learning gets thrown around a lot these days.

But how does machine learning actually impact search and SEO?

This chapter will explore everything you need to know about how search engines use machine learning.

It would be difficult to understand how search engines use machine learning without knowing what machine learning actually is.

Let's start with the definition (provided by Stanford University in their course description for Coursera) before we move on to a practical explanation:

Machine learning is the science of getting computers to act without being explicitly programmed.

Machine learning isn't the same as Artificial Intelligence (AI), but the line is starting to get a bit blurry with the applications.

As noted above, machine learning is the science of getting computers to come to conclusions based on information but without being specifically programmed in how to accomplish said task.

AI, on the other hand, is the science behind creating systems that either have, or appear to possess, human-like intelligence and process information in a similar manner.

Think of the difference this way:

Machine learning is a system designed to solve a problem. It works mathematically to produce the solution.

The solution could have been programmed explicitly or worked out by humans manually, but without that need, the solutions come much faster.

A good example would be setting a machine off to pore through oodles of data outlining tumor size and location, without programming in what it's looking for. The machine would be given a list of known benign and malignant conclusions.

With this, we would then ask the system to produce a predictive model for future encounters with tumors, generating odds in advance as to whether each is benign or malignant based on the data analyzed.

This is purely mathematical.

A few hundred mathematicians could do this but it would take them many years (assuming a very large database) and hopefully, none of them would make any errors.

Or, this same task could be accomplished with machine learning in far less time.
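
As a minimal illustration of that supervised idea (not the system described above), scikit-learn's bundled breast cancer dataset can stand in for the tumor measurements; the model is handed only labeled examples and then produces odds for cases it has never seen.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# tumor measurements with known benign/malignant labels
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# the model is never handed rules; it fits them from the labeled examples
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# odds, in advance, for tumors the model has never seen
odds = model.predict_proba(X_test)[:, 1]
print("held-out accuracy:", model.score(X_test, y_test))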

When I'm thinking of Artificial Intelligence, on the other hand, that's when I start to think of a system that touches on the creative and thus becomes less predictable.

An artificial intelligence set on the same task may simply reference documents on the subject and pull conclusions from previous studies.

Or it may add new data into the mix.

Or it may start working on a new kind of electrical engine, forgoing the initial task.

It probably won't get distracted on Facebook, but you get where I'm going.

The key word is intelligence.

While artificial, to meet the criteria it would have to be real, thus producing variables and unknowns akin to those we encounter when we interact with others around us.

Right now what the search engines (and most scientists) are pushing to evolve is machine learning.

Google has a free course on it, has made its machine learning framework TensorFlow open source, and is making big investments in hardware to run it.

Basically, this is the future, so it's best to understand it.

While we can't possibly list (or even know) every application of machine learning going on over at the Googleplex, let's look at a couple of known examples:

What article on machine learning at Google would be complete without mentioning their first and still highly-relevant implementation of a machine learning algorithm into search?

That's right: we're talking RankBrain.

Essentially the system was armed only with an understanding of entities (a thing or concept that is singular, unique, well-defined, and distinguishable) and tasked with producing an understanding of how those entities connect in a query to assist in better understanding the query and a set of known good answers.

These are brutally simplified explanations of both entities and RankBrain, but they serve our purposes here.

So, Google gave the system some data (queries) and likely a set of known entities.

I'm going to guess at the next process, but logically the system would then be tasked with training itself, based on the seed set of entities, to recognize unknown entities it encounters.

The system would be pretty useless if it wasn't able to understand a new movie name, date, etc.

Once the system had that process down and was producing satisfactory results, it would then have been tasked with teaching itself how to understand the relationships between entities, determine what data is being implied or directly requested, and seek out appropriate results in the index.

This system solves many problems that plagued Google.

The requirement to include keywords like "How do I replace my S7 screen" on a page about replacing one should not be necessary.

You also shouldn't have to include "fix" if you've included "replace" since, in this context, they generally imply the same thing.
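
A toy way to picture this, and emphatically not Google's implementation: if whole queries are mapped into a shared vector space, the "fix" and "replace" phrasings land close together while unrelated queries land far apart. The vectors below are made up purely for illustration.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical query embeddings; in a real system these come from a learned model
queries = {
    "how do i fix my s7 screen":     np.array([0.81, 0.10, 0.55, 0.02]),
    "how do i replace my s7 screen": np.array([0.79, 0.12, 0.58, 0.03]),
    "how do i replace my car":       np.array([0.05, 0.90, 0.10, 0.40]),
}

base = queries["how do i fix my s7 screen"]
for text, vec in queries.items():
    print(f"{text!r}: similarity {cosine(base, vec):.2f}")
# the two screen-repair phrasings score close to 1.0, so they can be treated as
# near-equivalent; the car query scores much lower and gets different results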

RankBrain uses machine learning to:

In its first iteration, RankBrain was tested on queries Google had not encountered before. This makes perfect sense and is a great test.

If RankBrain can improve results for queries that likely weren't optimized for, that involve a mix of old and new entities, and that serve a group of users who were likely getting lackluster results to begin with, then it should be deployed globally.

And it was, in 2016.

Let's take a look at the two results I referenced above (and it's worth noting that I was writing the piece and the example before I thought to get the screen capture; this is simply how it works, and if you try it yourself it works in almost all cases where different wording implies the same thing):

Some very subtle differences in rankings, with the #1 and #2 sites switching places, but at its core it's the same result.

Now let's look at my automotive example:

Machine learning helps Google to not just understand where there are similarities in queries, but we can also see it determining that if I need my car fixed I may need a mechanic (good call Google), whereas for replacing it I may be referring to parts or in need of governmental documentation to replace the entire thing.

We can also see here where machine learning hasn't quite figured it all out.

When I ask it how to replace my car, I likely mean the whole thing, or I'd have listed the part I wanted.

But it'll learn; it's still in its infancy.

Also, I'm Canadian, so the DMV doesn't really apply.

So here we've seen an example of machine learning at play in determining query meaning, SERP layout, and possible necessary courses of action to fulfill my intent.

Not all of that is RankBrain, but it's all machine learning.

If you use Gmail, or pretty much any other email system, you also are seeing machine learning at work.

According to Google, they are now blocking 99.9% of all spam and phishing emails with a false-positive rate of only 0.05%.

They're doing this using the same core technique: give the machine learning system some data and let it go.

If one were to manually program in all the permutations that would yield a 99.9% success rate in spam filtering, and adjust on the fly for new techniques, it would be an onerous task, if possible at all.

When they did things this way, they sat at a 97% success rate with a 1% false-positive rate (meaning 1% of your real messages were sent to the spam folder, which is unacceptable if the message was important).

Enter machine learning: set it up with all the spam messages you can positively confirm, let it build a model around the similarities they share, feed in some new messages, and give it a reward for successfully selecting spam messages on its own. Over time (and not a lot of it), it will learn far more signals and react far faster than a human ever could.

Set it to watch for user interactions with new email structures, and when it learns that a new spam technique is being used, it can add that to the mix and route to the spam folder not just those emails but emails using similar techniques.
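
A stripped-down sketch of that workflow, nowhere near Gmail's scale but the same shape: confirmed spam and legitimate messages train a model, new mail is scored rather than matched against hand-written rules, and mistakes flow back in as fresh labeled examples.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# tiny confirmed training set: 1 = spam, 0 = legitimate
messages = [
    ("You have won a free prize, click now", 1),
    ("Cheap meds, limited time offer", 1),
    ("Meeting moved to 3pm tomorrow", 0),
    ("Here are the notes from today's class", 0),
]
texts, labels = zip(*messages)

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(texts, labels)

# new mail is scored by the learned model rather than hand-written rules;
# anything it gets wrong goes back in as a new labeled example
print(spam_filter.predict(["Click now to claim your free offer"]))  # [1] -> spam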

This article promised to be an explanation of machine learning, not just a list of examples.

The examples, however, were necessary to illustrate a fairly easy-to-explain model.

Let's not confuse this with easy to build; it's just simple in terms of what we need to know.

A common machine learning model follows this sequence:

This model is referred to as supervised learning, and if my guess is right, it's the model used in the majority of Google's algorithm implementations.

Another model of machine learning is the Unsupervised Model.

To draw from the example used in a great course over on Coursera on machine learning, this is the model used to group similar stories in Google News, and one can infer that it's used in other places, like the identification and grouping of images containing the same or similar people in Google Images.

In this model, the system is not told what it's looking for but rather simply instructed to group entities (an image, article, etc.) into groups by similar traits (the entities they contain, keywords, relationships, authors, etc.).
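
A minimal sketch of that unsupervised grouping, not Google News' actual pipeline: the model is never told which stories belong together, it simply clusters headlines by the similarity of the words they contain.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

headlines = [
    "Quantum computer breakthrough wins international award",
    "Researchers claim advance in quantum computing hardware",
    "Honey bee populations tracked with machine learning",
    "Machine learning helps beekeepers monitor hive health",
]

# no labels anywhere: the grouping emerges from shared vocabulary alone
X = TfidfVectorizer(stop_words="english").fit_transform(headlines)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for headline, label in zip(headlines, labels):
    print(label, headline)
# the two quantum stories and the two bee stories fall into separate clusters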

Understanding what machine learning is will be crucial if you seek to understand why and how SERPs are laid out and why pages rank where they do.

It's one thing to understand an algorithmic factor (which is important, to be sure), but understanding the system in which those factors are weighted is of equal, if not greater, importance.

For example, if I were working for a company that sold cars, I would pay specific attention to the lack of usable, relevant information in the SERP results for the query illustrated above.

The result is clearly not a success. Discover what content would be a success and generate it.

Pay attention to the types of content that Google feels may fulfill a users intent (post, image, news, video, shopping, featured snippet, etc.) and work to provide it.

I like to think of machine learning and its evolution equivalent to having a Google engineer sitting behind every searcher, adjusting what they see and how they see it before it is sent to their device.

But better: that engineer is connected, like the Borg, to every other engineer, learning from global rules.

But we'll get more into that in our next piece on user intent.

View post:
How Machine Learning in Search Works: Everything You Need to Know - Search Engine Journal

The ML Expert Who Hated Mathematics: Interview With Dipanjan Sarkar – Analytics India Magazine

Every week, Analytics India Magazine reaches out to developers, practitioners and experts from the machine learning community to gain insights into their journey in data science, and the tools and skills essential for their day-to-day operations.

For this week's column, Analytics India Magazine got in touch with Dipanjan Sarkar, a very well known face in the machine learning community. In this story, we take you through the journey of Dipanjan and how he became an ML expert.

Dipanjan currently works as a Data Science Lead at Applied Materials where he leads a team of data scientists to solve various problems in the manufacturing and semiconductor domain by leveraging machine learning, deep learning, computer vision and natural language processing. He provides the much needed technical expertise, AI strategy, solutioning, and architecture, and works with stakeholders globally.

He has a bachelor's degree in computer science & engineering and a master's in data science from IIIT Bangalore. Currently, he is pursuing a PG Diploma in ML and AI from Columbia University and an executive education certification course in AI Strategy from Northwestern University Kellogg School of Management.

Apart from academia, Dipanjan is a big fan of MOOCs. He also beta-tests new courses for Coursera before they are made public.

Dipanjan is also a Google Developer Expert in Machine Learning and has worked with several Fortune 500 companies. For an expert in ML, mathematics is a prerequisite, so we were surprised to learn that Dipanjan actually hated mathematics at school, and this continued until ninth grade, when he picked up statistics, linear algebra and calculus, the three pillars of machine learning.

I always loved the way you could program a computer to do specific tasks and make a machine actually learn with data!

Dipanjan's renewed interest in mathematics was followed by a fascination with computer programming. With his fascination growing from mathematics to statistics and traditional computer programming, his career choice became almost obvious.

Reminiscing about his initial days, when the term data science wasn't worshipped yet, Dipanjan spoke about how the field was more conceptual and theoretical. Back then, there weren't any active ecosystems of tools, languages and frameworks dedicated to data science. Hence, it took more time to learn theoretical concepts since it took more effort to actually implement them or see them in practice.

With the advent of Python, R and a whole suite of tools and libraries, he believes that it has become easier to tame the learning curve of data science. However, he also warns that this can be a double-edged sword if one focuses on hands-on work without deep-diving into the math and concepts behind algorithms and techniques to understand how they work or why they are used.

I have always been a strong advocate of self-learning, and I believe that is where you get maximum value

Due to the lack of mentors or proper guides (which are plentiful nowadays on LinkedIn and other forums), Dipanjan had no option but to self-learn with the help of the web and books.

For aspirants, he recommends the following books:

To dive deep into the concepts and to get hands-on, he recommends Deep Learning with Keras, Python Machine Learning and Hands-On Machine Learning as practical books with examples. Dipanjan has also written a handful of books on practical machine learning.

When it comes to practice and deploying ML models, Dipanjan extensively uses the CRISP-DM (Cross-Industry Standard Process for Data Mining) framework, which he considers to be one of the best frameworks to tackle any data science problem.

Also, before diving into models or data, he insists on the importance of identifying and articulating the business problem in the right manner. For conceptualising an AI use-case, Dipanjan recommends something called AI Canvas, which he has learnt from the Kellogg School of Management:

Use the right tools for the job without waging wars of Python vs R or PyTorch vs TensorFlow

When asked about his favourite tools, Dipanjan explained the importance of not paying attention to Python vs R or PyTorch vs TensorFlow debates and instead using the right tools that get the job done.

For instance, he and his team use the ecosystem of tools and libraries centered around Python very frequently. This includes the regular run-of-the-mill pandas, matplotlib, seaborn, plotly for data wrangling and exploratory data analysis. For statistical modelling he prefers libraries like scikit-learn, statsmodels and pyod.

Dipanjan's toolkit looks as follows:

Along with picking the right tools, he recommends that practitioners always go with the simplest solution unless complexity adds substantial value, and, last but not least, he urges people not to ignore documentation.
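
As a rough illustration of how such a stack fits together (this is not drawn from the interview, and the file and column names are invented): pandas and seaborn for exploration, pyod for outlier screening, scikit-learn for a simple baseline model.

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from pyod.models.iforest import IForest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_readings.csv")        # invented dataset
sns.histplot(df["temperature"])                # quick exploratory look
plt.show()

# screen out anomalous rows before modelling (labels_: 0 = inlier, 1 = outlier)
outliers = IForest().fit(df[["temperature", "pressure"]]).labels_
clean = df[outliers == 0]

X_train, X_test, y_train, y_test = train_test_split(
    clean[["temperature", "pressure"]], clean["failure"], random_state=0)
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("baseline accuracy:", baseline.score(X_test, y_test))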

To those looking to break into the world of data science, Dipanjan suggests following a hybrid approach, i.e. learn concepts, code, and apply them on real-world datasets.

First, learn all the math and concepts and then try to actually apply the methods you have learnt

In the long, tedious process of learning, Dipanjan warns, people might lose focus and get sidetracked into wondering why they are even learning a certain method. To remedy this, he insists on learning and applying in tandem, without deviating from the goal, if one aims to become a good data scientist.

Addressing the overwhelming hype around AI and ML, Dipanjan says that he is already witnessing the dust settling down and how companies are now actually starting to realise both the limitations and value of AI. Deep learning and deep transfer learning are actually starting to provide value for companies working on complex problems involving unstructured data like images, audio, video and text and things are only going to get bigger and better with advanced tools and hardware in future. However, he admits that there is definitely still a fair bit of hype out there.

Traditional machine learning models like linear and logistic regression will never go out of fashion

No matter how advanced the field gets, he believes that traditional machine learning models like linear and logistic regression will never go out of fashion, since they are the bread and butter of various organisations and use-cases out there. And models that are easy to explain, including linear models and decision trees, will continue to be used extensively.

Going forward, he is optimistic that use-cases and applications such as optimising manufacturing, predicting demand and sales, inventory planning, logistics and routing, infrastructure management optimisation, and enhancing customer support and experience will continue to be the key drivers for almost all major organisations over the next decade.

When it comes to breakthroughs, Dipanjan expects something big to happen in newer domains like self-learning, continuous-learning, meta-learning and reinforcement learning.

Always remember to challenge others' opinions with a healthy mindset, because a good data scientist doesn't just follow instructions blindly.

Talking about his tireless efforts to guide youngsters, he recollects how not having a mentor had been a major hindrance and how he had to unlearn and relearn over time to correct his misconceptions. To help aspirants avoid the same mistakes, he mentors them whenever possible.

On a concluding note, Dipanjan said that he is mightily impressed by the relentless efforts of the data science community to share ideas through blogs, vlogs and online forums. Confessing his love for Analytics India Magazine, Dipanjan spoke about how AIM has been fostering a rich analytics ecosystem in India by reaching out to the global community.

Dipanjan will be speaking at Analytics India Magazine's inaugural virtual conference, Plugin, on 28 May 2020. For more information, check our portal here.

Original post:
The ML Expert Who Hated Mathematics: Interview With Dipanjan Sarkar - Analytics India Magazine

Impetus StreamAnalytix Launches a Cloud-based Version for Self-service ETL and Machine Learning – EnterpriseTalk

Impetus Technologies Inc., a leading software products and services company, announced the launch of its new cloud-based version of StreamAnalytix on AWS Marketplace. StreamAnalytix Cloud will also be available on other leading cloud marketplaces like Azure and Google Cloud very soon.

Leveraging an interactive data-first approach, the tool provides an intuitive drag-and-drop interface to build ETL flows on the cloud, effortlessly. Users can ingest data from multiple on-premise and cloud sources, enrich this data, and swiftly build applications for a wide range of analytics use cases.

StreamAnalytix Cloud offers a host of power-packed features, including:

"We are focused on helping enterprises harness the limitless power of the cloud to build, test, and run ETL and machine learning applications faster across industries and use cases," said Punit Shah, Director for StreamAnalytix at Impetus Technologies. "As a next-generation ETL tool in the cloud, StreamAnalytix Cloud accelerates Spark application development, and empowers users with unmatched scalability and extensibility to meet their strategic business needs."

Originally posted here:
Impetus StreamAnalytix Launches a Cloud-based Version for Self-service ETL and Machine Learning - EnterpriseTalk

AI threat intelligence is the future, and the future is now – TechTarget

The next progression in organizations using threat intelligence is adding AI threat intelligence capabilities, in the form of machine learning technologies, to improve attack detection. Machine learning is a form of AI that enables computers to analyze data and learn its significance. The rationale for using machine learning with threat intelligence is to enable computers to more rapidly detect attacks than humans can and stop those attacks before more damage occurs. In addition, because the volume of threat intelligence is often so large, traditional detection technologies inevitably generate too many false positives. Machine learning can analyze the threat intelligence and condense it into a smaller set of things to look for, thereby reducing the number of false positives.

This sounds fantastic, but there's a catch -- actually, a few catches. Expecting AI to magically improve security is unrealistic, and deploying machine learning without preparation and ongoing support may make things worse.

Here are three steps enterprises should take to use AI threat intelligence tools with machine learning capabilities to improve attack detection.

AI threat intelligence products that use machine learning work by taking inputs, analyzing them and producing outputs. For attack detection, machine learning's inputs include threat intelligence, and its outputs are either alerts indicating attacks or automated actions stopping attacks. If the threat intelligence has errors, it will give "bad" information to the attack detection tools, so the tools' machine learning algorithms may produce "bad" outputs.

Many organizations subscribe to multiple sources of threat intelligence. These include feeds, which contain machine-readable signs of attacks, like the IP addresses of computers issuing attacks and the file names used by malware. Other sources of threat intelligence are services, which generally provide human-readable prose describing the newest threats. Machine learning can use feeds but not services.

Organizations should use the highest quality threat intelligence feeds for machine learning. Characteristics to consider include the following:

It's hard to directly evaluate the quality of threat intelligence, but you can indirectly evaluate it based on the number of false positives that occur from using it. High-quality threat intelligence should lead to minimal false positives when it's used directly by detection tools -- without machine learning.

False positives are a real concern if you're using threat intelligence with machine learning to do things like automatically block attacks. Mistakes will disrupt benign activity and could negatively affect operations.

Ultimately, threat intelligence is just one part of assessing risk. Another part is understanding context -- like the role, importance and operational characteristics of each computer. Providing contextual information to machine learning can help it get more value from threat intelligence. Suppose threat intelligence indicates a particular external IP address is malicious. Detecting outgoing network traffic from an internal database server to that address might merit a different action than outgoing network traffic to the same address from a server that sends files to subscribers every day.

The toughest part of using machine learning is providing the actual learning. Machine learning needs to be told what's good and what's bad, as well as when it makes mistakes so it can learn from them. This requires frequent attention from skilled humans. A common way of teaching machine learning-enabled technologies is to put them into a monitor-only mode where they identify what's malicious but don't block anything. Humans review the machine learning tool's alerts and validate them, letting it know which were erroneous. Without feedback from humans, machine learning can't improve on its mistakes.
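
A hypothetical sketch of that monitor-only loop, with invented feed and flow formats rather than any specific vendor's API: a feed of known-bad indicators contributes features, a model scores new flows and raises alerts instead of blocking, and analyst-corrected labels drive periodic retraining.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def load_threat_feed(path):
    """Invented CSV feed of known-bad indicators with an 'indicator' column."""
    return set(pd.read_csv(path)["indicator"])

def featurize(flows, bad_indicators):
    """Turn network flow records into simple numeric features."""
    df = flows.copy()
    df["dst_in_feed"] = df["dst_ip"].isin(bad_indicators).astype(int)
    return df[["bytes_out", "bytes_in", "duration", "dst_in_feed"]]

feed = load_threat_feed("threat_feed.csv")            # invented file
history = pd.read_csv("labeled_flows.csv")            # invented file; label 1 = malicious
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(featurize(history, feed), history["label"])

# monitor-only mode: score today's flows and raise alerts, never block
new_flows = pd.read_csv("todays_flows.csv")           # invented file
scores = model.predict_proba(featurize(new_flows, feed))[:, 1]
alerts = new_flows[scores > 0.9]

# analysts review the alerts and return corrected labels; append and retrain
# reviewed = analyst_review(alerts)                   # human-in-the-loop step
# history = pd.concat([history, reviewed])
# model.fit(featurize(history, feed), history["label"])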

Conventional wisdom is to avoid relying on AI threat intelligence that uses machine learning to detect attacks because of concern over false positives. That makes sense in some environments, but not in others. Older detection techniques are more likely to miss the latest attacks, which may not follow the patterns those techniques typically look for. Machine learning can help security teams find the latest attacks, but with potentially higher false positive rates. If missing attacks is a greater concern than the resources needed to investigate additional false positives, then more reliance on automation utilizing machine learning may make sense to protect those assets.

Many organizations will find it best to use threat intelligence without machine learning for some purposes, and to get machine learning-generated insights for other purposes. For example, threat hunters might use machine learning to get suggestions of things to investigate that would have been impossible for them to find in large threat intelligence data sets. Also, don't forget about threat intelligence services -- their reports can provide invaluable insights for threat hunters on the newest threats. These insights often include things that can't easily be automated into something machine learning can process.

Originally posted here:
AI threat intelligence is the future, and the future is now - TechTarget

Teaching machine learning to check senses may avoid sophisticated attacks – University of Wisconsin-Madison

Complex machines that steer autonomous vehicles, set the temperature in our homes and buy and sell stocks with little human control are built to learn from their environments and act on what they see or hear. They can be tricked into grave errors by relatively simple attacks or innocent misunderstandings, but they may be able to help themselves by mixing their senses.

In 2018, a group of security researchers managed to befuddle object-detecting software with tactics that appear so innocuous it's hard to think of them as attacks. By adding a few carefully designed stickers to stop signs, the researchers fooled the sort of object-recognizing computer that helps guide driverless cars. The computers saw an umbrella, bottle or banana but no stop sign.

Two multi-colored stickers attached to a stop sign were enough to disguise it to the eyes of an image-recognition algorithm as a bottle, banana and umbrella. UW-Madison

"They did this attack physically (added some clever graffiti to a stop sign, so it looks like some person just wrote on it or something) and then the object detectors would start seeing it as a speed limit sign," says Somesh Jha, a University of Wisconsin-Madison computer sciences professor and computer security expert. "You can imagine that if this kind of thing happened in the wild, to an auto-driving vehicle, that could be really catastrophic."

The Defense Advanced Research Projects Agency has awarded a team of researchers led by Jha a $2.7 million grant to design algorithms that can protect themselves against potentially dangerous deception. Joining Jha as co-investigators are UW-Madison Electrical and Computer Engineering Professor Kassem Fawaz, University of Toronto Computer Sciences Professor Nicolas Papernot, and Atul Prakash, a University of Michigan professor of Electrical Engineering and Computer Science and an author of the 2018 study.

One of Prakash's stop signs, now an exhibit at the Science Museum of London, is adorned with just two narrow bands of disorganized-looking blobs of color. Subtle changes can make a big difference to object- or audio-recognition algorithms that fly drones or make smart speakers work, because they are looking for subtle cues in the first place, Jha says.

The systems are often self-taught through a process called machine learning. Instead of being programmed into rigid recognition of a stop sign as a red octagon with specific, blocky white lettering, machine learning algorithms build their own rules by picking distinctive similarities from images that the system may know only to contain or not contain stop signs.

"The more examples it learns from, the more angles and conditions it is exposed to, the more flexible it can be in making identifications," Jha says. "The better it should be at operating in the real world."

But a clever person with a good idea of how the algorithm digests its inputs might be able to exploit those rules to confuse the system.

"DARPA likes to stay a couple steps ahead," says Jha. "These sorts of attacks are largely theoretical now, based on security research, and we'd like them to stay that way."

A military adversary, however (or some other organization that sees advantage in it), could devise these attacks to waylay sensor-dependent drones, or even trick largely automated commodity-trading computers into bad buying and selling patterns.

"What you can do to defend against this is something more fundamental during the training of the machine learning algorithms, to make them more robust against lots of different types of attacks," says Jha.

One approach is to make the algorithms multi-modal. Instead of a self-driving car relying solely on object-recognition to identify a stop sign, it can use other sensors to cross-check results. Self-driving cars or automated drones have cameras, but often also GPS devices for location and laser-scanning LIDAR to map changing terrain.

"So, while the camera may be saying, 'Hey, this is a 45-mile-per-hour speed limit sign,' the LIDAR says, 'But wait, it's an octagon. That's not the shape of a speed limit sign,'" Jha says. "The GPS might say, 'But we're at the intersection of two major roads here; that would be a better place for a stop sign than a speed limit sign.'"
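
A toy sketch of that cross-check, purely illustrative rather than the funded project's design: each modality votes independently, and any disagreement flags the detection for conservative handling instead of trusting the camera alone.

from collections import Counter

def fuse_sign_estimates(camera_label, lidar_shape, gps_context):
    """Return a fused label and a flag when the modalities disagree."""
    votes = [camera_label]                       # camera's object-recognition guess
    if lidar_shape == "octagon":
        votes.append("stop_sign")                # shape evidence from LIDAR
    elif lidar_shape == "rectangle":
        votes.append("speed_limit_sign")
    if gps_context == "major_intersection":
        votes.append("stop_sign")                # location prior from GPS/maps
    label, count = Counter(votes).most_common(1)[0]
    return label, count < len(votes)             # True if any modality dissented

label, disputed = fuse_sign_estimates(
    camera_label="speed_limit_sign",             # possibly fooled by stickers
    lidar_shape="octagon",
    gps_context="major_intersection",
)
print(label, disputed)                           # stop_sign True -> slow down, flag for review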

The trick is not to over-train, constraining the algorithm too much.

The important consideration is how you balance accuracy against robustness against attacks, says Jha. I can have a very robust algorithm that says every object is a cat. It would be hard to attack. But it would also be hard to find a use for that.

See the rest here:
Teaching machine learning to check senses may avoid sophisticated attacks - University of Wisconsin-Madison