David Graves to Head New Research at PPPL for Plasma Applications in Industry and Quantum Information Science – Quantaneo, the Quantum Computing…

Graves, a professor at the University of California, Berkeley, since 1986, is an expert in plasma applications in semiconductor manufacturing. He will become the Princeton Plasma Physics Laboratory's (PPPL) first associate laboratory director for Low-Temperature Plasma Surface Interactions, effective June 1. He will likely begin his new position from his home in Lafayette, California, in the East Bay region of San Francisco.

He will lead a collaborative research effort to not only understand and measure how plasma is used in the manufacture of computer chips, but also to explore how plasma could be used to help fabricate powerful quantum computing devices over the next decade.

"This is the apex of our thrust into becoming a multipurpose lab," said Steve Cowley, PPPL director, who recruited Graves. "Working with Princeton University, and with industry and the U.S. Department of Energy (DOE), we are going to make a big push to do research that will help us understand how you can manufacture at the scale of a nanometer." A nanometer, one-billionth of a meter, is about one ten-thousandth the width of a human hair.

The new initiative will draw on PPPL's expertise in low-temperature plasmas, diagnostics, and modeling. At the same time, it will work closely with plasma semiconductor equipment industries and will collaborate with Princeton University experts in various departments, including chemical and biological engineering, electrical engineering, materials science, and physics. "In particular, collaborations with PRISM (the Princeton Institute for the Science and Technology of Materials) are planned," Cowley said. "I want to see us more tightly bound to the University in some areas because that way we get cross-fertilization," he said.

Graves will also have an appointment as professor in the Princeton University Department of Chemical and Biological Engineering, starting July 1. He is retiring from his position at Berkeley at the end of this semester. He is currently writing a book (Plasma Biology) on plasma applications in biology and medicine. He said he changed his retirement plans to take the position at PPPL and Princeton University. "This seemed like a great opportunity," Graves said. "There's a lot we can do at a national laboratory where there's bigger scale, world-class colleagues, powerful computers and other world-class facilities."

Exciting new direction for the Lab

Graves is already working with Jon Menard, PPPL deputy director for research, on the strategic plan for the new research initiative over the next five years. "It's a really exciting new direction for the Lab that will build upon our unique expertise in diagnosing and simulating low-temperature plasmas," Menard said. "It also brings us much closer to the university and industry, which is great for everyone."

The staff will grow over the next five years, and PPPL is recruiting for an expert in nano-fabrication and quantum devices. The first planned research would use converted PPPL laboratory space fitted with equipment provided by industry. Subsequent work would use laboratory space at PRISM on Princeton University's campus. In the longer term, researchers in the growing group would have brand-new laboratory and office space as a central part of the Princeton Plasma Innovation Center (PPIC), a new building planned at PPPL.

Physicists Yevgeny Raitses, principal investigator for the Princeton Collaborative Low Temperature Plasma Research Facility (PCRF) and head of the Laboratory for Plasma Nanosynthesis, and Igor Kaganovich, co-principal investigator of PCRF, are both internationally known experts in low-temperature plasmas who have forged recent partnerships between PPPL and various industry partners. The new initiative builds on their work, Cowley said.

A priority research area

Research aimed at developing quantum information science (QIS) is a priority for the DOE. Quantum computers could be very powerful in solving complex scientific problems, including simulating quantum behavior in material or chemical systems. QIS could also have applications in quantum communication, especially in encryption, and quantum sensing. It could potentially have an impact in areas such as national security. A key question is whether plasma-based fabrication tools commonly used today will play a role in fabricating quantum devices in the future, Menard said. "There are huge implications in that area," he said. "We want to be part of that."

Graves is an expert on applying molecular dynamics simulations to low-temperature plasma-surface interactions. These simulations are used to understand how plasma-generated ions, atoms and molecules interact with various surfaces. He has extensive research experience in academia and industry in plasma-related semiconductor manufacturing. That expertise will be useful for understanding how to make very fine structures and circuits at the nanometer, sub-nanometer and even atom-by-atom level, Menard said. "David's going to bring a lot of modeling and fundamental understanding to that process. That, paired with our expertise and measurement capabilities, should make us unique in the U.S. in terms of what we can do in this area."

Graves was born in Daytona Beach, Florida, and moved a lot as a child because his father was in the U.S. Air Force. He lived in Homestead, Florida; near Kansas City, Missouri; and in North Bay, Ontario; and finished high school near Phoenix, Arizona.

Graves received bachelor's and master's degrees in chemical engineering from the University of Arizona and went on to pursue a doctoral degree in the subject, graduating with a Ph.D. from the University of Minnesota in 1986. He is a fellow of the Institute of Physics and the American Vacuum Society. He is the author or co-author of more than 280 peer-reviewed publications. During his long career at Berkeley, he has supervised 30 Ph.D. students and 26 post-doctoral students, many of whom are now in leadership positions in industry and academia.

A leader since the 1990s

Graves has been a leader in the use of plasma in the semiconductor industry since the 1990s. In 1996, he co-chaired a National Research Council (NRC) workshop and co-edited the NRC's report Database Needs for Modeling and Simulation of Plasma Processing. In 2008, he performed a similar role for a DOE workshop on low-temperature plasma applications, resulting in the report Low Temperature Plasma Science Challenges for the Next Decade.

Graves is an admitted Francophile who speaks (near) fluent French and has spent long stretches of time in France as a researcher. He was named Maître de Recherche (master of research) at the École Polytechnique in Palaiseau, France, in 2006. He was an invited researcher at the University of Perpignan in 2010 and received a chaire d'excellence from the Nanoscience Foundation in Grenoble, France, to study plasma-graphene interactions.

He has received numerous honors during his career. He was appointed the first Lam Research Distinguished Chair in Semiconductor Processing at Berkeley for 2011-2016. More recently, he received the Will Allis Prize for the Study of Ionized Gases from the American Physical Society in 2014 and the 2017 Nishizawa Award, associated with the Dry Process Symposium in Japan. In 2019, he was appointed foreign expert at Huazhong University of Science and Technology in Wuhan, China. He served as the first senior editor of IEEE Transactions on Radiation and Plasma Medical Sciences.

Graves has been married for 35 years to Sue Graves, who recently retired from the City of Lafayette, where she worked in the school bus program. The couple has three adult children. Graves enjoys bicycling and yoga and the couple loves to travel. They also enjoy hiking, visiting museums, listening to jazz music, and going to the theater.

Read the original post:
David Graves to Head New Research at PPPL for Plasma Applications in Industry and Quantum Information Science - Quantaneo, the Quantum Computing...

Registration Open for Inaugural IEEE International Conference on Quantum Computing and Engineering – HPCwire

LOS ALAMITOS, Calif., May 14, 2020 – Registration is now open for the inaugural IEEE International Conference on Quantum Computing and Engineering (QCE20), a multidisciplinary event focusing on quantum technology, research, development, and training. QCE20, also known as IEEE Quantum Week, will deliver a series of world-class keynotes, workforce-building tutorials, community-building workshops, and technical paper presentations and posters on October 12-16 in Denver, Colorado.

"We're thrilled to open registration for the inaugural IEEE Quantum Week, founded by the IEEE Future Directions Initiative and supported by multiple IEEE Societies and organizational units," said Hausi Müller, QCE20 general chair and co-chair of the IEEE Quantum Initiative. "Our initial goal is to address the current landscape of quantum technologies, identify challenges and opportunities, and engage the quantum community. With our current Quantum Week program, we're well on track to deliver a first-rate quantum computing and engineering event."

QCE20's keynote speakers include the following quantum groundbreakers and leaders:

The week-long QCE20 tutorials program features 15 tutorials by leading experts aimed squarely at workforce development and training considerations. The tutorials are ideally suited to develop quantum champions for industry, academia, and government and to build expertise for emerging quantum ecosystems.

Throughout the week, 19 QCE20 workshops provide forums for group discussions on topics in quantum research, practice, education, and applications. The exciting workshops provide unique opportunities to share and discuss quantum computing and engineering ideas, research agendas, roadmaps, and applications.

The deadline for submitting technical papers to the eight technical paper tracks is May 22. Papers accepted by QCE20 will be submitted to the IEEE Xplore Digital Library. The best papers will be invited to the journals IEEE Transactions on Quantum Engineering (TQE) and ACM Transactions on Quantum Computing (TQC).

QCE20 provides attendees a unique opportunity to discuss challenges and opportunities with quantum researchers, scientists, engineers, entrepreneurs, developers, students, practitioners, educators, programmers, and newcomers. QCE20 is co-sponsored by the IEEE Computer Society, IEEE Communications Society, IEEE Council on Superconductivity, IEEE Electronics Packaging Society (EPS), IEEE Future Directions Quantum Initiative, IEEE Photonics Society, and IEEE Technology and Engineering Management Society (TEMS).

Register to be a part of the highly anticipated inaugural IEEE Quantum Week 2020. Visit qce.quantum.ieee.org for event news and all program details, including sponsorship and exhibitor opportunities.

About the IEEE Computer Society

The IEEE Computer Society is the world's home for computer science, engineering, and technology. A global leader in providing access to computer science research, analysis, and information, the IEEE Computer Society offers a comprehensive array of unmatched products, services, and opportunities for individuals at all stages of their professional career. Known as the premier organization that empowers the people who drive technology, the IEEE Computer Society offers international conferences, peer-reviewed publications, a unique digital library, and training programs. Visit www.computer.org for more information.

About the IEEE Communications Society

The IEEE Communications Society promotes technological innovation and fosters creation and sharing of information among the global technical community. The Society provides services to members for their technical and professional advancement and forums for technical exchanges among professionals in academia, industry, and public institutions.

About the IEEE Council on Superconductivity

The IEEE Council on Superconductivity and its activities and programs cover the science and technology of superconductors and their applications, including materials and their applications for electronics, magnetics, and power systems, where the superconductor properties are central to the application.

About the IEEE Electronics Packaging Society

The IEEE Electronics Packaging Society is the leading international forum for scientists and engineers engaged in the research, design, and development of revolutionary advances in microsystems packaging and manufacturing.

About the IEEE Future Directions Quantum Initiative

IEEE Quantum is an IEEE Future Directions initiative launched in 2019 that serves as IEEE's leading community for all projects and activities on quantum technologies. IEEE Quantum is supported by leadership and representation across IEEE Societies and OUs. The initiative addresses the current landscape of quantum technologies, identifies challenges and opportunities, leverages and collaborates with existing initiatives, and engages the quantum community at large.

About the IEEE Photonics Society

The IEEE Photonics Society forms the hub of a vibrant technical community of more than 100,000 professionals dedicated to transforming breakthroughs in quantum physics into the devices, systems, and products to revolutionize our daily lives. From ubiquitous and inexpensive global communications via fiber optics, to lasers for medical and other applications, to flat-screen displays, to photovoltaic devices for solar energy, to LEDs for energy-efficient illumination, there are myriad examples of the Society's impact on the world around us.

About the IEEE Technology and Engineering Management Society

IEEE TEMS encompasses the management sciences and practices required for defining, implementing, and managing engineering and technology.

Source: IEEE Computer Society

Read more:
Registration Open for Inaugural IEEE International Conference on Quantum Computing and Engineering - HPCwire

Quantum Computing Market Growth Trends, Key Players, Analysis, Competitive Strategies and Forecasts to 2026 – News Distinct

1QB Information Technologies

Quantum Computing Market Competitive Analysis:

Consistent technological developments, surging industrialization, raw material affluence, increasing demand for quantum computing, rising disposable incomes, and soaring product awareness are adding considerable revenue to the market. According to the report, the Quantum Computing market is expected to report a healthy CAGR from 2020 to 2026. Factors such as product innovations, industrialization, and increasing urbanization in developing and developed countries are likely to boost market demand in the near future.

The report further sheds light on the current and forthcoming opportunities and challenges in the Quantum Computing market and provides succinct analysis that assists clients in improving their business gains. Potential market threats, risks, uncertainties, and obstacles are also highlighted in this report, helping market players to lower possible losses to their Quantum Computing business. The report also employs various analytical models, such as Porter's Five Forces and SWOT analysis, to evaluate several bargaining powers, threats, and opportunities in the market.

Quantum Computing Market Segments:

Moreover, the leading Quantum Computing manufacturers and companies are profiled in the report with extensive market intelligence. The report enfolds detailed and precise assessments of companies based on their financial operations, revenue, market size, share, annual growth rates, production cost, sales volume, gross margins, and CAGR. Their manufacturing details are also covered in the report, comprising analysis of their production processes, volume, product specifications, raw material sourcing, key vendors, clients, distribution networks, organizational structure, and global presence.

The report also underscores their strategic planning, including mergers, acquisitions, ventures, partnerships, product launches, and brand developments. Additionally, the report renders an exhaustive analysis of crucial market segments, including Quantum Computing types, applications, and regions. The segmentation sections cover analytical and forecast details of each segment based on their profitability, global demand, current revenue, and development prospects. The report further scrutinizes diverse regions, including North America, Asia Pacific, Europe, the Middle East and Africa, and South America. The report ultimately helps clients in driving their Quantum Computing business wisely and building superior strategies for their Quantum Computing businesses.

To get Incredible Discounts on this Premium Report, Click Here @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=24845&utm_source=NDN&utm_medium=003

Table of Content

1 Introduction of Quantum Computing Market

1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology

3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Quantum Computing Market Outlook

4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Force Model
4.4 Value Chain Analysis

5 Quantum Computing Market, By Deployment Model

5.1 Overview

6 Quantum Computing Market, By Solution

6.1 Overview

7 Quantum Computing Market, By Vertical

7.1 Overview

8 Quantum Computing Market, By Geography

8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 Quantum Computing Market Competitive Landscape

9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles

10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix

11.1 Related Research

Get Complete Report @ https://www.verifiedmarketresearch.com/product/Quantum-Computing-Market/?utm_source=NDN&utm_medium=003

About us:

Verified Market Research is a leading global research and consulting firm serving over 5,000 customers. Verified Market Research provides advanced analytical research solutions while offering information-enriched research studies. We offer insight into strategic and growth analyses, data necessary to achieve corporate goals, and critical revenue decisions.

Our 250 analysts and SMEs offer a high level of expertise in data collection and governance, using industrial techniques to collect and analyse data on more than 15,000 high-impact and niche markets. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise and years of collective experience to produce informative and accurate research.

We study 14+ categories, from Semiconductor & Electronics, Chemicals, Advanced Materials, Aerospace & Defence, Energy & Power, Healthcare, Pharmaceuticals, Automotive & Transportation, Information & Communication Technology, Software & Services, Information Security, Mining, Minerals & Metals, and Building & Construction to Agriculture and Medical Devices, across more than 100 countries.

Contact us:

Mr. Edwyne Fernandes

US: +1 (650)-781-4080UK: +44 (203)-411-9686APAC: +91 (902)-863-5784US Toll Free: +1 (800)-7821768

Email: [emailprotected]


Our Trending Reports

Rugged Display Market Size, Growth Analysis, Opportunities, Business Outlook and Forecast to 2026

Quantum Computing Market Size, Growth Analysis, Opportunities, Business Outlook and Forecast to 2026

Read this article:
Quantum Computing Market Growth Trends, Key Players, Analysis, Competitive Strategies and Forecasts to 2026 - News Distinct

Oracle Offers Machine Learning Workshop to Transform DBA Skills – Database Trends and Applications

AI and machine learning are turning a corner, marking this year with new and improved platforms and use cases. However, database administrators don't always have the tools and skills necessary to manage this new minefield of technology.

DBTA recently held a webinar featuring Charlie Berger, senior director of product management for machine learning, AI, and cognitive analytics at Oracle, who discussed how to gain an attainable, logical, evolutionary path for adding machine learning to users' Oracle data skills.

"Operational DBAs spend a lot of time on maintenance, security, and reliability," Berger said. "The Oracle Autonomous Database can help. It automates all database and infrastructure management, monitoring, and tuning; protects from both external attacks and malicious internal users; and protects from all downtime, including planned maintenance."

The Autonomous Database removes tactical drudgery, allowing more time for strategic contribution, according to Berger.

Machine learning allows algorithms to automatically sift through large amounts of data to discover hidden patterns, surface new insights, and make predictions, he explained.

Oracle Machine Learning extends Oracle Autonomous Database and enables users to build AI applications and analytics dashboards. OML delivers powerful in-database machine learning algorithms, automated ML functionality, and integration with open source Python and R.

From database developer to data scientist, Oracle can transform the data management platform into a combined, hybrid data management and machine learning platform.

There are six major steps to becoming a data scientist, which include:

An archived on-demand replay of this webinar is available here.

Read more:
Oracle Offers Machine Learning Workshop to Transform DBA Skills - Database Trends and Applications

How can machine learning benefit the healthcare sector? – Open Access Government

Machine learning is one aspect of the AI portfolio of capability that has been with us in various forms for decades, so it's hardly a product of science fiction. It's widely used as a means of processing high volumes of customer data to provide a better service and hence increase profits.

Yet things become more complex when the technology is brought into the public sector, where many decisions can greatly affect our lives. AI is often feared, particularly around removing the human touch, which could lead to unfair judgements or decisions that could cause injury, death or even the complete destruction of humanity. If we think about medical diagnoses or the unfair denial of welfare for a citizen, it's apparent where the first two fears arise. Hollywood can take credit for the final scenario.

Whatever the fear, we shouldn't throw the baby out with the bathwater. Local services in the UK face a £7.8 billion funding gap by 2025. With services already cut to the bone, central and local government organisations, along with the NHS, need new approaches and technologies to drive efficiency while also improving service quality. Often this means collaboration between service providers, but collaboration between man and machine can also play a part.

Machine learning systems can help transform the public sector, driving better decisions through more accurate insights and streamlining service delivery through automation. What's important, however, is that we don't try to replace human judgement and creativity with machine efficiency; we need to combine them.

There's a strong case to be made for greater adoption of machine learning across a diverse range of activities. The basic premise of machine learning is that a computer can derive a formula from looking at lots of historical data that enables the prediction of certain things the data describes. This formula is often termed an algorithm or a model. We use this algorithm with new data to make decisions for a specific task, or we use the additional insight that the algorithm provides to enrich our understanding and drive better decisions.
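
To make that concrete, here is a minimal sketch in Python of the premise just described: a model is fitted to historical records, then used to score a new case. All data, field names and the choice of scikit-learn are invented for illustration, not taken from the article.

```python
# A minimal sketch of "deriving a formula from historical data".
# The records and features below are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression

# Historical data: [age, prior_visits] per patient -> 1 = responded to therapy
X_history = [[54, 2], [61, 8], [38, 1], [70, 12], [45, 3], [66, 9]]
y_history = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()     # the "algorithm" or "model" the text describes
model.fit(X_history, y_history)  # derive the formula from past data

new_case = [[58, 7]]                  # a case the model has never seen
print(model.predict(new_case))        # decision for a specific task
print(model.predict_proba(new_case))  # added insight to enrich understanding
```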

For example, machine learning can analyse patients' interactions in the healthcare system and highlight which combinations of therapies in what sequence offer the highest success rates for patients, and perhaps how this regime differs for different age ranges. When combined with some decisioning logic that incorporates resources (availability, effectiveness, budget, etc.), it's possible to use the computers to model how scarce resources could be deployed with maximum efficiency to get the best-tailored regime for patients.

When we then automate some of this, machine learning can even identify areas for improvement in real time, far faster than humans, and it can do so without bias, ulterior motives or fatigue-driven error. So, rather than being a threat, it should perhaps be viewed as a reinforcement for human effort in creating fairer and more consistent service delivery.

Machine learning is also an iterative process: as the machine is exposed to new data and information, it adapts through a continuous feedback loop, which in turn provides continuous improvement. As a result, it produces more reliable results over time and ever more finely tuned and improved decision-making. Ultimately, it's a tool for driving better outcomes.
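
A hedged sketch of that feedback loop, again in Python with scikit-learn: an incremental learner is updated batch by batch rather than trained once. The streamed batches here are synthetic stand-ins for real arriving data.

```python
# Sketch of iterative learning: the model adapts as new labelled data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])  # all classes must be declared for incremental fitting

rng = np.random.default_rng(0)
for _ in range(10):  # each loop iteration = one new batch of observed outcomes
    X_batch = rng.normal(size=(32, 4))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)  # continuous refinement
```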

The opportunities for AI to enhance service delivery are many. Another example in healthcare is computer vision (another branch of AI), which is being used in cancer screening and diagnosis. We're already at the stage where AI, trained on huge libraries of images of cancerous growths, is better at detecting cancer than human radiologists. This application of AI has numerous examples, such as the work being done at Amsterdam UMC to increase the speed and accuracy of tumour evaluations.

But let's not get this picture wrong. Here, the true value is in giving the clinician more accurate insight or a second opinion that informs their diagnosis and, ultimately, the patient's final decision regarding treatment. A machine is there to do the legwork, but the decision to start a programme of cancer treatment remains with humans.

Acting with this enhanced insight enables doctors to become more efficient as well as effective. Combining the results of CT scans with advanced genomics using analytics, the technology can assess how patients will respond to certain treatments. This means clinicians avoid the stress, side effects and cost of putting patients through procedures with limited efficacy, while reducing waiting times for those patients whose condition would respond well. Yet, full-scale automation could run the risk of creating a lot more VOMIT.

Victims Of Modern Imaging Technology (VOMIT) is a new phenomenon in which a condition such as a malignant tumour is detected by imaging, and thus at first glance it would seem wise to remove it. However, medical procedures to remove it carry a morbidity risk which may be greater than the risk the tumour presents during the patient's likely lifespan. Here, ignorance could be bliss for the patient, and doctors would examine the patient holistically, including mental health, emotional state, family support and many other factors that remain well beyond the grasp of AI to assimilate into an ethical decision.

All decisions like these have a direct impact on people's health and wellbeing. With cancer, the faster and more accurate these decisions are, the better. However, whenever cost and effectiveness are combined, there is an imperative for ethical judgement rather than financial arithmetic.

Healthcare is a rich seam for AI, but its application is far wider. For instance, machine learning could also support policymakers in planning housebuilding and social housing allocation initiatives, where it could both reduce the time for the decision and make it more robust. Using AI in infrastructure departments could allow road surface inspections to be continuously updated via cheap sensors or cameras in all council vehicles (or crowd-sourced in some way). The AI could not only optimise repair work (human or robot) but also potentially identify causes and determine where strengthened roadways would cost less in whole-life costs versus regular repairs.

In the US, government researchers are already using machine learning to help officials make quick and informed policy decisions on housing. Using analytics, they analyse the impact of housing programmes on millions of lower-income citizens, drilling down into factors such as quality of life, education, health and employment. This instantly generates insightful, accessible reports for the government officials making the decisions. Now they can enact policy decisions as soon as possible for the benefit of residents.

While some of the fears about AI are fanciful, there is a genuine concern about the ethical deployment of such technology. In our healthcare example, allocation of resources based on gender, sexuality, race or income wouldn't be appropriate unless these specifically had an impact on the prescribed treatment or its potential side effects. This is self-evident to a human, but a machine would need this to be explicitly defined; otherwise, a machine would likely display bias towards those groups whose historical data gave better resultant outcomes, thus perpetuating any human equality gap present in the training data.

The recent review by the Committee on Standards in Public Life into AI and its ethical use by government and other public bodies concluded that there are serious deficiencies in regulation relating to the issue, although it stopped short of recommending the establishment of a new regulator.

SAS welcomed the review and contributed to it. We believe these concerns are best addressed proactively by organisations that use AI in a manner which is fair, accountable, transparent and explainable.

The review was chaired by crossbench peer Lord Jonathan Evans, who commented:

"Explaining AI decisions will be the key to accountability, but many have warned of the prevalence of 'Black Box AI'. However, our review found that explainable AI is a realistic and attainable goal for the public sector, so long as government and private companies prioritise public standards when designing and building AI systems."

Today's increased presence of machine learning should be viewed as complementary to human decision-making within the public sector. It's an assistive tool that turns growing data volumes into positive outcomes for people, quickly and fairly. As the cost of computational power continues to fall, ever-increasing opportunities will emerge for machine learning to enhance public services and help transform lives.


Continued here:
How can machine learning benefit the healthcare sector? - Open Access Government

Canaan’s Kendryte K210 and the Future of Machine Learning – CapitalWatch

Author: CapitalWatch Staff

Canaan Inc. (Nasdaq: CAN) became publicly traded in New York in late November. It raised $90 million in its IPO, which Canaan's founder, chairman, and chief executive officer, Nangeng Zhang, modestly called "a good start." Since that time, the company has met significant milestones in its mission to disrupt the supercomputing industry.

Operating since 2013, Hangzhou-based Canaan delivers supercomputing solutions tailored to client needs. The company focuses on the research and development of artificial intelligence (AI) technology, specifically AI chips, AI algorithms, AI architectures, system on a chip (SoC) integration, and chip integration. Canaan is also known as a top manufacturer of mining hardware in China, the global leader in digital currency mining.

Since the IPO, Canaan has made strides in accomplishing new projects, despite the cross-industry crisis Covid-19 has caused worldwide. In a recent announcement, Canaan said it has developed a SaaS product that its partners can use to operate a cloud mining platform. Cloud mining allows users to mine digital currency without having to buy and maintain mining hardware or spend on electricity, a trend that has been gaining popularity.

A Chip of the Future

Earlier this year, Canaan participated in the 2020 International Consumer Electronics Show in Las Vegas, the world's largest tech show, which attracts innovators from across the globe. Canaan impressed, showcasing its Kendryte K210, the world's first RISC-V-based edge AI chip. The chip was released in September 2018 and has been in mass production ever since.

K210 is Canaan's first chip, and it is designed to carry out machine learning. The primary functions of the K210 are machine vision and semantic processing: it includes a KPU for computing convolutional neural networks and an APU for processing microphone-array inputs. The KPU is a general-purpose neural network processor with built-in convolution, batch normalization, activation, and pooling operations. The chip can detect faces and objects in real time. Despite this computing power, the K210 consumes only 0.3W while other typical devices consume 1W.
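
As an illustration of how the KPU is typically driven, the sketch below condenses the publicly documented MaixPy (MicroPython) face-detection example for K210 boards. The model flash address and YOLO anchor values follow that stock demo and vary per board and firmware, so treat every constant here as an assumption rather than a universal setting.

```python
# Condensed from the stock MaixPy face-detection demo for K210 boards.
# Model address (0x300000) and anchors are that demo's defaults, not universal.
import sensor, lcd, KPU as kpu

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

task = kpu.load(0x300000)  # face-detection model flashed at this address
anchors = (1.889, 2.5245, 2.9465, 3.94056, 3.99987,
           5.3658, 5.155437, 6.92275, 6.718375, 9.01025)
kpu.init_yolo2(task, 0.5, 0.3, 5, anchors)  # threshold, NMS, anchor count, anchors

while True:
    img = sensor.snapshot()
    faces = kpu.run_yolo2(task, img)  # CNN inference runs on the KPU itself
    for face in faces or []:
        img.draw_rectangle(face.rect())
    lcd.display(img)
```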

More Than Just Chipping Away at Sales

As of September 30, 2019, Canaan has shipped more than 53,000 AI chips and development kits to AI product developers since release.

Currently, sales of the K210 are growing exponentially, according to CEO Zhang.

The company has moved quickly to the commercialization of chips, and developed modules, products and back-end SaaS, offering customers a "full flow of AI solutions."

Based on the first generation of K210, Canaan has formed critical strategic partnerships.

For example, the company launched joint projects with a leading AI algorithm provider, a top agricultural science and technology enterprise, and a well-known global soft drink manufacturer to deliver smart solutions for various industrial markets.

The Booming Blockchain Industry

Currently, Canaan is working under the development strategy of "Blockchain + AI." The company has made several breakthroughs in the blockchain and AI industry, including algorithm development and optimization, standard unit design, low-voltage and high-efficiency operation, high-performance design system and heat dissipation, etc. The company has also accumulated extensive experience in ASIC chip manufacturing, laying the foundation for its future growth.

Canaan released first-generation products based on Samsung's 8nm and SMIC's 14nm technologies in Q4 last year. The former was shipped in Q1 this year, while the latter will be shipped in Q2. In February, it launched the second generation of the product, which is more efficient, more cost-effective and offers better performance.

Currently, TSMC's 5nm technology is under development. This technology will further improve the company's machines' computing power and ensure Canaan's leading position in the blockchain hardware space.

"We are the leader in the industry," says Zhang.

Canaan's Covid-19 Strategy

During the Covid-19 outbreak, Canaan improved its existing face recognition access control system. The new software can detect and identify people wearing masks. At the same time, an intelligent attendance system has been integrated to assist human resource management.

Integrating mining, machine learning and AI, the K210 chip has been used in the Avalon mining machine, where it can identify and monitor potential network viruses through intelligent algorithms. The company will explore more innovative integrations in the future.

Second-Generation Gem

In terms of AI, the company will launch its second-generation AI chip, the K510, this year. The design of its architecture has been "greatly" optimized, and its computing power is several times more robust than the K210's. Later this year, Canaan will use this tech in areas including smart energy consumption, smart industrial parks, smart driving, smart retail, and smart finance.

Canaan's Cash

In terms of operating costs and R&D, the company's operating cost last year dropped 13.3% year-on-year. In 2018 and 2019, Canaan recorded R&D expenses of 189.7 million yuan and 169 million yuan, respectively; 347 million yuan went to incentivizing core R&D personnel.

In addition, the company currently has more than 500 million yuan in cash ($70.5 million) and will continue to operate under the "blockchain + AI" strategy, with a continued focus on the commercialization of its AI technology.

A Fruitful Future

Canaan began as a manufacturer of Bitcoin mining machines, but it has become more than that. In the short term, the Bitcoin halving cycle is approaching (estimated to occur on May 11, 2020 – CW); this should promote sales of the company's mining machines. In the long term, as a global leader in ASIC technology, Canaan could be in a unique position to meet supercomputing demand.

"Blockchain is a good start, but we'll go beyond that," says Zhang. "When a seed grows up to be a big tree, it will bear fruit."

So far, it has done just that. Just how high that "tree" can get remains to be seen, but one thing is certain: The Kendryte K210 chip will be the driving force fueling the company's growth.

More here:
Canaan's Kendryte K210 and the Future of Machine Learning - CapitalWatch

Recent Research Answers the Future of Quantum Machine Learning on COVID-19 – Analytics Insight

We have all seen movies or read books about an apocalyptic world where humankind is fighting against a deadly pathogen and researchers are in a race against time to find a cure. But COVID-19 is not a fictional chapter; it is real, and scientists all over the world are frantically looking for patterns in data by employing powerful supercomputers, with the hope of finding a speedier breakthrough in vaccine discovery for COVID-19.

A team of researchers from Penn State University has recently unearthed a solution that has the potential to expedite the process of discovering a novel coronavirus treatment: employing an innovative hybrid branch of research known as quantum machine learning. Quantum machine learning is a young field that combines machine learning and quantum physics. The team is led by Swaroop Ghosh, the Joseph R. and Janice M. Monkowski Career Development Assistant Professor of Electrical Engineering and Computer Science and Engineering.

In cases where a computer science-driven approach is used to identify a cure, most methodologies leverage machine learning to screen different compounds one at a time to see if they can bond with the virus's main protease, or protein. The quantum machine learning method could yield quicker results and is more economical than current methods used for drug discovery.
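
The one-at-a-time screening loop itself is conceptually simple; a hedged sketch follows. The fingerprint format and the scoring function are hypothetical placeholders: a real pipeline would compute chemical descriptors and call a trained binding-affinity model rather than this stand-in.

```python
# Hedged sketch of one-at-a-time compound screening. The fingerprints and the
# scoring function are invented placeholders, not a real affinity model.
import numpy as np

rng = np.random.default_rng(42)
library = {f"compound_{i}": rng.random(128) for i in range(1000)}  # fake fingerprints

def predicted_affinity(fingerprint):
    # stand-in for a trained model scoring binding to the virus's main protease
    return float(fingerprint.mean())

# Screen every compound, keep the best-scoring candidates for lab validation
hits = sorted(library, key=lambda name: predicted_affinity(library[name]),
              reverse=True)[:10]
print(hits)
```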

According to Prof. Ghosh, discovering any new drug that can cure a disease is like finding a needle in a haystack. It is also an incredibly expensive, laborious, and time-consuming process. Using the conventional pipeline for discovering new drugs can take between five and ten years from the concept stage to market release and could cost billions in the process.

He further adds, "High-performance computing such as supercomputers and artificial intelligence can help accelerate this process by screening billions of chemical compounds quickly to find relevant drug candidates."

"This approach works when enough chemical compounds are available in the pipeline, but unfortunately this is not true for COVID-19. This project will explore quantum machine learning to unlock new capabilities in drug discovery by generating complex compounds quickly," he explains.

The funding from the Penn State Institute for Computational and Data Sciences, coordinated through the Penn State Huck Institutes of the Life Sciences as part of their rapid-response seed funding for research across the University to address COVID-19, is supporting this work.

Ghosh and his electrical engineering doctoral students Mahabubul Alam and Abdullah Ash Saki, together with computer science and engineering postgraduate students Junde Li and Ling Qiu, had earlier worked on developing a toolset for solving a particular class of problems, known as combinatorial optimization problems, using quantum computing. Drug discovery falls into a similar category, so that experience has made it possible for the researchers to explore the search for a COVID-19 treatment using the same toolset they had already developed.

Ghosh considers the use of artificial intelligence for drug discovery to be a very new area. "The biggest challenge is finding an unknown solution to the problem by using technologies that are still evolving, that is, quantum computing and quantum machine learning. We are excited about the prospects of quantum computing in addressing a current critical issue and contributing our bit in resolving this grave challenge," he elaborates.

According to a McKinsey report, quantum computing technology is expected to have a global market value of US$1 trillion by 2035. This exciting scope of quantum machine learning can further boost that economic value while helping the healthcare industry defeat COVID-19.

Excerpt from:
Recent Research Answers the Future of Quantum Machine Learning on COVID-19 - Analytics Insight

Quantzig Launches New Article Series on COVID-19’s Impact – ‘Understanding Why Online Food Delivery Companies Are Betting Big on AI and Machine…

LONDON--(BUSINESS WIRE)--As part of its new article series analyzing COVID-19's impact across industries, Quantzig, a premier analytics services provider, today announced the completion of its recent article "Why Online Food Delivery Companies Are Betting Big on AI and Machine Learning."

The article also offers comprehensive insights on:

Human activity has slowed down due to the pandemic, but its impact on business operations has not. We offer transformative analytics solutions that can help you explore new opportunities and ensure business stability to thrive in the post-crisis world. Request a FREE proposal to gauge COVID-19's impact on your business.

"With machine learning, you don't need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own," says a machine learning expert at Quantzig.

After several years of being confined to technology labs and the pages of sci-fi books, artificial intelligence (AI) and big data have today become the dominant focal point for businesses across industries. Barely a day passes without new magazine and newspaper articles, blog entries, and tweets about advancements in the field of AI and machine learning. That said, it's not very surprising that AI and machine learning in the food and beverage industry have played a crucial role in the rapid developments that have taken place over the past few years.

Talk to us to learn how our advanced analytics capabilities, combined with proprietary algorithms, can support your business initiatives and help you thrive in today's competitive environment.

Benefits of AI and Machine Learning

Want comprehensive solution insights from an expert who decodes data? You're just a click away! Request a FREE demo to discover how our seasoned analytics experts can help you.

As cognitive technologies transform the way people use online services to order food, it becomes imperative for online food delivery companies to comprehend customer needs, identify the dents, and bridge gaps by offering what has been missing in the online food delivery business. The combination of big data, AI, and machine learning is driving real innovation in the food and beverage industry. Such technologies have been proven to deliver fact-based results to online food delivery companies that possess the data and the required analytics expertise.

At Quantzig, we analyze the current business scenario using real-time dashboards to help global enterprises operate more efficiently. Our ability to help performance-driven organizations realize their strategic and operational goals within a short span using data-driven insights has helped us gain a leading edge in the analytics industry. To help businesses ensure continuity amid the crisis, we've curated a portfolio of advanced COVID-19 impact analytics solutions that not only focus on improving profitability but also help enhance stakeholder value, boost customer satisfaction, and achieve financial objectives.

Request more information to know more about our analytics capabilities and solution offerings.

About Quantzig

Quantzig is a global analytics and advisory firm with offices in the US, UK, Canada, China, and India. For more than 15 years, we have assisted our clients across the globe with end-to-end data modeling capabilities to leverage analytics for prudent decision making. Today, our firm consists of 120+ clients, including 45 Fortune 500 companies. For more information on our engagement policies and pricing plans, visit: https://www.quantzig.com/request-for-proposal

Original post:
Quantzig Launches New Article Series on COVID-19's Impact - 'Understanding Why Online Food Delivery Companies Are Betting Big on AI and Machine...

Twitter adds former Google VP and A.I. guru Fei-Fei Li to board as it seeks to play catch up with Google and Facebook – CNBC

Twitter has appointed Stanford professor and former Google vice president Fei-Fei Li to its board as an independent director.

The social media platform said that Li's expertise in artificial intelligence (AI) will bring relevant perspectives to the board. Li's appointment may also help Twitter to attract top AI talent from other companies in Silicon Valley.

Li left her role as chief scientist of AI/ML (artificial intelligence/machine learning) at Google Cloud in October 2018 after being criticized for comments she made in relation to the controversial Project Maven initiative with the Pentagon, which saw Google AI used to identify drone targets from blurry drone video footage.

When details of the project emerged, Google employees objected, saying that they didn't want their AI technology used in military drones. Some quit in protest and around 4,000 staff signed a petition that called for "a clear policy stating that neither Google nor its contractors will ever build warfare technology."

While Li wasn't directly involved in the project, a leaked email suggested she was more concerned about what the public would make of Google's involvement in the project as opposed to the ethics of the project itself.

"This is red meat to the media to find all ways to damage Google," she wrote, according to a copy of the email obtained by the Intercept. "You probably heard Elon Musk and his comment about AI causing WW3."

"I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry. Google Cloud has been building our theme on Democratizing AI in 2017, and Diane (Greene, head of Google Cloud) and I have been talking about Humanistic AI for enterprise. I'd be super careful to protect these very positive images."

Up until that point, Li was seen very much as a rising star at Google. In the one year and 10 months she was there, she oversaw basic science AI research, all of Google Cloud's AI/ML products and engineering efforts, and a new Google AI lab in China.

While at Google she maintained strong links to Stanford and in March 2019 she launched the Stanford University Human-Centered AI Institute (HAI), which aims to advance AI research, education, policy and practice to benefit humanity.

"With unparalleled expertise in engineering, computer science and AI, Fei-Fei brings relevant perspectives to the board as Twitter continues to utilize technology to improve our service and achieve our long-term objectives," said Omid Kordestani, executive chairman of Twitter.

Twitter has been relatively slow off the mark in the AI race. It acquired British start-up Magic Pony Technology in 2016 for up to $150 million as part of an effort to beef up its AI credentials, but its AI efforts remain fairly small compared to other firms. It doesn't have the same reputation as companies like Google and Facebook when it comes to AI and machine-learning breakthroughs.

Today the company uses an AI technique called deep learning to recommend tweets to its users and it also uses AI to identify racist content and hate speech, or content from extremist groups.

Competition for AI talent is fierce in Silicon Valley and Twitter will no doubt be hoping that Li can bring in some big names in the AI world given she is one of the most respected AI leaders in the industry.

"Twitter is an incredible example of how technology can connect the world in powerful ways and I am honored to join the board at such an important time in the company's history," said Li.

"AI and machine learning can have an enormous impact on technology and the people who use it. I look forward to leveraging my experience for Twitter as it harnesses this technology to benefit everyone who uses the service."

See original here:
Twitter adds former Google VP and A.I. guru Fei-Fei Li to board as it seeks to play catch up with Google and Facebook - CNBC

Genomics and Machine Learning for In Vitro Sensitization Testing of Challenging Chemicals, Upcoming Webinar Hosted by Xtalks – PR Web


TORONTO (PRWEB) May 11, 2020

Predictive toxicology is a discipline that aims to proactively identify adverse human health and environmental effects in response to chemical exposure. GARD (Genomic Allergen Rapid Detection) is a next-generation, animal-free testing strategy framework for the assessment and characterization of chemical sensitizers. The GARD platform integrates state-of-the-art technological components, including cell cultures of human immunological cells, omics-based evaluation of transcriptional patterns of endpoint-specific genomic biomarker signatures, and machine learning-assisted classification models.
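
The general shape of such a biomarker-signature classifier is sketched below in Python: an SVM trained on transcriptional profiles, with rows as tested chemicals and columns as a fixed panel of biomarker genes. The matrix sizes, labels, and panel are synthetic placeholders, not SenzaGen's actual data, gene panel, or code.

```python
# Hedged sketch of biomarker-signature classification. All values synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_chemicals, n_biomarkers = 80, 200                 # hypothetical panel size
X = rng.normal(size=(n_chemicals, n_biomarkers))    # transcriptional profiles
y = rng.integers(0, 2, size=n_chemicals)            # 1 = sensitizer, 0 = not

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
clf.fit(X, y)
print(clf.predict_proba(X[:3]))  # per-substance decision support for new chemicals
```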

To this end, the GARD platform provides accurate, cost-effective and efficient assessment of the skin and respiratory sensitizing capabilities of neat chemicals, complex formulations, mixtures and solid materials. GARD assays are successfully applied throughout the value chain of the chemical and life science industries, including safety-based screening of candidates during preclinical research and development, monitoring of protocol changes and batch variations, monitoring of occupational health, and registration and regulatory approval.

This webinar will introduce the developmental phases of the GARD assays and discuss the technological origins of the observed high predictive performance, how the assays help industries overcome their specific challenges in safety testing in a broad applicability domain, and illustrate how GARD assays facilitate efficient decision-making in compliance with the principles of the 3Rs.

Join Andy Forreryd, PhD, SenzaGen AB, and Henrik Johansson, PhD, Chief Scientist, SenzaGen AB, in a live webinar on Wednesday, May 26, 2020, at 10am EDT (3pm BST/UK).

For more information or to register for this event, visit Genomics and Machine Learning for In Vitro Sensitization Testing of Challenging Chemicals.

ABOUT XTALKS

Xtalks, powered by Honeycomb Worldwide Inc., is a leading provider of educational webinars to the global life science, food and medical device community. Every year thousands of industry practitioners (from life science, food and medical device companies, private & academic research institutions, healthcare centers, etc.) turn to Xtalks for access to quality content. Xtalks helps Life Science professionals stay current with industry developments, trends and regulations. Xtalks webinars also provide perspectives on key issues from top industry thought leaders and service providers.

To learn more about Xtalks, visit http://xtalks.com. For information about hosting a webinar, visit http://xtalks.com/why-host-a-webinar/.


More:
Genomics and Machine Learning for In Vitro Sensitization Testing of Challenging Chemicals, Upcoming Webinar Hosted by Xtalks - PR Web

How to overcome AI and machine learning adoption barriers – Gigabit Magazine – Technology News, Magazine and Website

Matt Newton, Senior Portfolio Marketing Manager at AVEVA, on how to overcome adoption barriers for AI and machine learning in the manufacturing industry

There has been a considerable amount of hype around Artificial Intelligence (AI) and Machine Learning (ML) technologies in the last five or so years.

So much so that AI has become somewhat of a buzzword full of ideas and promise, but something that is quite tricky to execute in practice.

At present, this means that the challenge we run into with AI and ML is a healthy dose of scepticism.

For example, we've seen several large companies adopt these capabilities, often announcing they intend to revolutionize operations and output with such technologies, but then failing to deliver.

In turn, the ongoing evolution and adoption of these technologies is consequently knocked back. With so many potential applications for AI and ML, it can be daunting to identify opportunities for technology adoption that can demonstrate real and quantifiable return on investment.

Many industries have effectively reached a sticking point in their adoption of AI and ML technologies.

Typically, this has been driven by unproven start-up companies delivering some type of open source technology and placing a flashy exterior around it, and then relying on a customer to act as a development partner for it.

However, this is the primary problem: customers are not looking for prototype, unproven software to run their industrial operations.

Instead of offering a revolutionary digital experience, many companies are continuing to fuel their initial scepticism of AI and ML by providing poorly planned pilot projects that often land the company in a stalled position of pilot purgatory, continuous feature creep and a regular rollout of new beta versions of software.

This practice of the never-ending pilot project is driving a reluctance among customers to engage further with innovative companies that are truly driving digital transformation in their sector with proven AI and ML technology.

A way to overcome these challenges is to demonstrate proof points to the customer. This means showing that AI and ML technologies are real and are exactly as we'd imagine them to be.

Naturally, some companies have better adopted AI and ML than others, but since much of this technology is so new, many are still struggling to identify when and where to apply it.

For example, many are keen to use AI to track customer interests and needs.

In fact, even greater value can be discovered when applying AI in the form of predictive asset analytics on pieces of industrial process control and manufacturing equipment.

AI and ML can provide detailed, real-time insights on machinery operations, exposing patterns that humans cannot necessarily spot, insights that can drive a huge impact on a business's bottom line.

AI and ML are becoming incredibly popular in manufacturing industries, with advanced operations analysis often being driven by AI. Many are taking these technologies and applying them to their operating experiences to see where economic savings can be made.

All organisations want to save money where they can, and AI is making this possible.

These same organisations are usually keen to invest in further digital technologies. Successfully implementing an AI or ML technology can significantly reduce OPEX and further fuel the digital transformation of an overall enterprise.

Understandably, we are seeing the value of AI and ML best demonstrated in the manufacturing sector in both process and batch automation.

For example, using AI to figure out how to optimize the process to achieve higher production yields and improve production quality. In the food and beverage sectors, AI is being used to monitor production line oven temperatures, flagging anomalies - including moisture, stack height and color - in a continually optimised process to reach the coveted golden batch.

The other side of this is to use predictive maintenance to monitor the behaviour of equipment and improve operational safety and asset reliability.

A combination of AI and ML is fused together to create predictive and prescriptive maintenance, where AI is used to spot anomalies in the behavior of assets and a recommended solution is prescribed to remediate potential equipment failure.
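
A minimal sketch of the anomaly-spotting half of that pairing, on simulated sensor readings: a detector fitted on normal operation flags departures that would trigger a prescribed action. The feature choices (temperature, rpm) and all numbers are illustrative assumptions.

```python
# Sketch of predictive-maintenance anomaly detection on simulated sensor data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# 500 readings of [temperature_C, shaft_rpm] during known-healthy operation
normal_ops = rng.normal(loc=[70.0, 1500.0], scale=[2.0, 30.0], size=(500, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_ops)

new_readings = np.array([[71.0, 1510.0],   # consistent with healthy behaviour
                         [95.0, 1120.0]])  # hot and slowing: likely fault
print(detector.predict(new_readings))  # +1 = normal, -1 = anomaly to remediate
```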

Predictive and prescriptive maintenance help reduce pressure on operations and maintenance (O&M) costs, improve safety, and reduce unplanned shutdowns.
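
As a rough illustration of the predictive half of this pairing, the sketch below flags anomalous sensor readings with an off-the-shelf isolation forest and maps each anomaly to a prescribed action. The sensor names, baseline data, and remediation lookup are illustrative assumptions, not a vendor implementation.

```python
# Minimal sketch: anomaly detection on equipment sensor data with scikit-learn.
# Sensor names, simulated data, and the prescriptive lookup are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated healthy baseline: [vibration (mm/s), bearing temperature (deg C)]
healthy = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(1000, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings streaming in from the asset
readings = np.array([[2.1, 61.0],    # normal
                     [4.8, 78.0]])   # drifting toward failure
flags = model.predict(readings)      # +1 = normal, -1 = anomaly

PRESCRIPTIONS = {"vibration": "inspect bearing alignment",
                 "temperature": "check lubrication and cooling"}

for reading, flag in zip(readings, flags):
    if flag == -1:
        # Crude prescriptive step: attribute the anomaly to the most deviant sensor
        deviation = np.abs(reading - healthy.mean(axis=0)) / healthy.std(axis=0)
        culprit = ["vibration", "temperature"][int(np.argmax(deviation))]
        print(f"Anomaly {reading}: {PRESCRIPTIONS[culprit]}")
```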

AI, machine learning, and predictive maintenance technologies are enabling new connections to be made within the production line, offering new insights and suggestions for future operations.

Now is the time for organisations to realise that this adoption and innovation offers new clarity on the relationships between different elements of the production cycle - paving the way for new methods to create better products at faster speeds and lower costs.

See the article here:
How to overcome AI and machine learning adoption barriers - Gigabit Magazine - Technology News, Magazine and Website

Understanding The Recognition Pattern Of AI – Forbes

Image and object recognition

Of the seven patterns of AI that represent the ways in which AI is being implemented, one of the most common is the recognition pattern. The main idea of the recognition pattern of AI is that we're using machine learning and cognitive technology to help identify and categorize unstructured data into specific classifications. This unstructured data could be images, video, text, or even quantitative data. The power of this pattern is that we're enabling machines to do the thing that our brains seem to do so easily: identify what we're perceiving in the real world around us.

The recognition pattern is notable in that it was primarily the attempt to solve image recognition challenges that brought about heightened interest in deep learning approaches to AI and helped kick off this latest wave of AI investment and interest. The recognition pattern, however, is broader than just image recognition. In fact, we can use machine learning to recognize and understand images, sound, handwriting, items, faces, and gestures. The objective of this pattern is to have machines recognize and understand unstructured data. This pattern of AI is such a huge component of AI solutions because of its wide variety of applications.

The difference between structured and unstructured data is that structured data is already labelled and easy to interpret. Unstructured data, however, is where most organizations struggle: up to 90% of an organization's data is unstructured. It becomes necessary for businesses to be able to understand and interpret this data, and that's where AI steps in. Whereas we can use existing query technology and informatics systems to gather analytic value from structured data, it is almost impossible to use those approaches with unstructured data. This is what makes machine learning such a potent tool when applied to these classes of problems.

Machine learning has a potent ability to recognize or match patterns that are seen in data. Specifically, we use supervised machine learning approaches for this pattern. With supervised learning, we use clean, well-labeled training data to teach a computer to categorize inputs into a set number of identified classes. The algorithm is shown many data points and uses that labeled data to train a neural network to classify them into those categories; the system is repeatedly shown labeled images, with the goal of eventually getting the computer to recognize what is in a new image based on its training. Of course, these recognition systems are highly dependent on having good-quality, well-labeled data that is representative of the sort of data the resultant model will be exposed to in the real world. Garbage in is garbage out with these sorts of systems.
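
A toy version of that supervised loop is sketched below using scikit-learn's bundled digits dataset: labeled examples train a small neural network, and held-out labeled examples measure how well it generalizes. The dataset and network size are illustrative choices.

```python
# Minimal sketch of the supervised recognition pattern: clean, labeled
# examples train a small neural network to sort inputs into known classes.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images, labeled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)                      # learn from labeled examples
print("accuracy:", clf.score(X_test, y_test))  # evaluated on unseen labeled data
```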

The many applications of the recognition pattern

The recognition pattern allows a machine learning system to be able to essentially look at unstructured data, categorize it, classify it, and make sense of what otherwise would just be a blob of untapped value. Applications of this pattern can be seen across a broad array of applications from medical imaging to autonomous vehicles, handwriting recognition to facial recognition, voice and speech recognition, or identifying even the most detailed things in videos and data of all types. Machine-learning enabled recognition has added significant power to security and surveillance systems, with the power to observe multiple simultaneous video streams in real time and recognize things such as delivery trucks or even people who are in a place they ought not be at a certain time of day.

The business applications of the recognition pattern are also plentiful. For example, in the online retail and ecommerce industries, there is a need to identify and tag pictures of products that will be sold online. Previously, humans would have to laboriously catalog each individual image according to all its attributes, tags, and categories. Nowadays, machine learning-based recognition systems can quickly identify products that are not already in the catalog and apply the full range of data and metadata necessary to sell those products online without any human interaction. This is a great place for AI to step in and do the task much faster and more efficiently than a human worker who is going to get tired or bored. Not to mention these systems can avoid human error and free workers up for higher-value tasks.

Not only is this recognition pattern being used with images, it's also used to identify sounds in speech. There are plenty of apps that can tell you what song is playing or recognize the voice of somebody speaking. Another application of this recognition pattern is recognizing animal sounds. Automatic sound recognition is proving valuable in the world of conservation and wildlife study: machines that can recognize different animal sounds and calls can be a great way to track populations and habits and get a better all-around understanding of different species. There is even potential to use this in areas such as vehicle repair, where the machine can listen to the different sounds being made by an engine and tell the operator what is wrong, what needs to be fixed, and how soon.

One of the most widely adopted applications of the recognition pattern of artificial intelligence is the recognition of handwriting and text. While we've had optical character recognition (OCR) technology that can map printed characters to text for decades, traditional OCR has been limited in its ability to handle arbitrary fonts and handwriting. Machine learning-enabled handwriting and text recognition is significantly better at this job: it can not only recognize text in a wide range of printed or handwritten modes, but it can also recognize the type of data being recorded. For example, if text is formatted into columns or a tabular format, the system can identify the columns or tables and translate them to the right data format for machine consumption. Likewise, the systems can identify patterns in the data, such as Social Security numbers or credit card numbers. One application of this type of technology is automatic check deposit at ATMs: customers insert handwritten checks into the machine, which reads them and creates a deposit without a teller ever handling the check.
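
The pattern-spotting step at the end of that pipeline can be surprisingly simple once recognition has produced text. The sketch below, with illustrative patterns and sample text, flags Social Security-style and card-style numbers in OCR output; the recognition itself would be handled by an ML model upstream.

```python
# Minimal sketch of the post-recognition step: once text has been extracted
# (by OCR or an ML recognizer), simple patterns can tag sensitive fields.
# The sample text and regex patterns are illustrative only.
import re

ocr_output = "Pay to: J. Smith  SSN 123-45-6789  Card 4111 1111 1111 1111"

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

for field, pattern in PATTERNS.items():
    for match in pattern.finditer(ocr_output):
        print(f"{field}: {match.group()}")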

The recognition pattern of AI is also applied to human gestures. This is something already heavily in use by the video game industry: players can make certain gestures or moves that then become in-game commands to move characters or perform a task. Another major application is allowing customers to virtually try on various articles of clothing and accessories. It's even being applied in the medical field, where it helps surgeons perform tasks and trains clinicians on procedures before they carry them out on a real patient. Through the use of the recognition pattern, machines can even understand sign language, translating and interpreting gestures as needed without human intervention.

In the medical industry, AI is being used to recognize patterns in various radiology imaging. For example, these systems are being used to recognize fractures, blockages, aneurysms, potentially cancerous formations, and even being used to help diagnose potential cases of tuberculosis or coronavirus infections. Analyst firm Cognilytica is predicting that within just a few years, machines will perform the first analysis of most radiology images with instant identification of anomalies or patterns before they go to a human radiologist for further evaluation.

The recognition pattern is also being applied to identify counterfeit products. Machine-learning based recognition systems are looking at everything from counterfeit products such as purses or sunglasses to counterfeit drugs.

The use of this pattern of AI is impacting every industry, from using images to generate insurance quotes to analyzing satellite images after natural disasters to assess damage. Given the strength of machine learning in identifying patterns and applying that to recognition, it should come as little surprise that this pattern of AI will continue to see widespread adoption. In fact, in just a few years we might come to take the recognition pattern of AI for granted and not even consider it to be AI. That just speaks to the potency of this pattern of AI.

See the original post:
Understanding The Recognition Pattern Of AI - Forbes

Major Companies in Machine Learning as a Service Market Struggle to Fulfil the Extraordinary Demand Intensified by COVID-19 – Jewish Life News

The latest report on the Machine Learning as a Service market provides an out-and-out analysis of the various factors that are projected to define the course of the Machine Learning as a Service market during the forecast period. The current trends that are expected to influence the future prospects of the Machine Learning as a Service market are analyzed in the report. Further, a quantitative and qualitative assessment of the various segments of the Machine Learning as a Service market is included in the report along with relevant tables, figures, and graphs. The report also encompasses valuable insights pertaining to the impact of the COVID-19 pandemic on the global Machine Learning as a Service market.

The report reveals that the Machine Learning as a Service market is expected to witness a CAGR of ~XX% over the forecast period (2019-2029) and reach a value of ~US$ XX towards the end of 2029. The regulatory framework, R&D activities, and technological advancements relevant to the Machine Learning as a Service market are covered in the report.

Request Sample Report @ https://www.mrrse.com/sample/9077?source=atm

The market is segregated into different segments to provide a granular analysis of the Machine Learning as a Service market. The market is segmented on the basis of application, end-user, region, and more.

The market share, size, and forecasted CAGR growth of each Machine Learning as a Service market segment and sub-segment are included in the report.

The competition landscape includes a competition matrix, market share analysis of major players in the global machine learning as a service market based on their 2016 revenues, and profiles of major players. The competition matrix benchmarks leading players on the basis of their capabilities and potential to grow. Factors including market position, offerings, and R&D focus are attributed to a company's capabilities. Factors including top-line growth, market share, segment growth, infrastructure facilities, and future outlook are attributed to a company's potential to grow. This section also identifies various recent developments carried out by the leading players.

Company profiling includes a company overview, major business strategies adopted, SWOT analysis, and market revenues for the years 2014 to 2016. The key players profiled in the global machine learning as a service market include IBM Corporation, Google Inc., Amazon Web Services, Microsoft Corporation, BigML Inc., FICO, Yottamine Analytics, Ersatz Labs Inc., Predictron Labs Ltd and H2O.ai. Other players include ForecastThis Inc., Hewlett Packard Enterprise, Datoin, Fuzzy.ai, and Sift Science Inc., among others.

The global machine learning as a service market is segmented as below:

By Deployment Type

By End-use Application

By Geography

Request For Discount On This Report @ https://www.mrrse.com/checkdiscount/9077?source=atm

Important Questions Related to the Machine Learning as a Service Market Addressed in the Report:

Key Insights Enclosed in the Report

Buy This Report @ https://www.mrrse.com/checkout/9077?source=atm

View post:
Major Companies in Machine Learning as a Service Market Struggle to Fulfil the Extraordinary Demand Intensified by COVID-19 - Jewish Life News

Four projects receive funding from University of Alabama CyberSeed program – Alabama NewsCenter

Four promising research projects received funding from the University of Alabama CyberSeed program, part of the UA Office for Research and Economic Development.

The pilot seed-funding program promotes research across disciplines on campus while ensuring a stimulating and well-managed environment for high-quality research.

The funded projects come from four major thrusts of the UA Cyber Initiative that include cybersecurity, critical infrastructure protection, applied machine learning and artificial intelligence, and cyberinfrastructure.

These projects are innovative in their approach to using cutting-edge solutions to tackle critical challenges, said Dr. Jeffrey Carver, professor of computer science and chair of the UA Cyber Initiative.

One project will study cybersecurity of drones and develop strategies to mitigate potential attacks. Led by Dr. Mithat Kisacikoglu, assistant professor of electrical and computer engineering, and Dr. Travis Atkison, assistant professor of computer science, the research will produce a plan for the secure design of the power electronics in drones with potential for other applications.

Another project will use machine learning to probe the nature of dark matter using existing data from NASA. The work should position the research team, led by Dr. Sergei Gleyzer, assistant professor of physics and astronomy, and Dr. Brendan Ames, assistant professor of mathematics, to analyze images expected later this year from the Vera Rubin Observatory, home to the world's largest digital camera.

The CyberSeed program is also funding work planning to use machine learning to accelerate discovery of candidates within a new class of alloys that can be used in real-world experiments. These new alloys, called high-entropy alloys or multi-principal component alloys, are thought to enhance mechanical performance. This project involves Drs. Lin Li and Feng Yan, assistant professors of metallurgical and materials engineering, and Dr. Jiaqi Gong, who begins as associate professor of computer science this month.

A team of researchers is involved in a project to use state-of-the-art cyberinfrastructure technology and hardware to collect, visualize, analyze and disseminate hydrological information. The research aims to produce a proof-of-concept system. The team includes Dr. Sagy Cohen, associate professor of geography; Dr. Brad Peter, a postdoctoral researcher of geography; Dr. Hamid Moradkhani, director of the UA Center for Complex Hydrosystems; Dr. Zhe Jiang, assistant professor of computer science; Dr. D. Jay Cervino, executive director of the UA Office of Information Technology; and Dr. Andrew Molthan with NASA.

The CyberSeed program came from a process that began in April 2019 with the first internal UA cybersummit to meet and define future opportunities. In July, ORED led an internal search for the chair of the Cyber Initiative, announcing Carver in August. In October, Carver led the second internal cybersummit, at which it was agreed the Cyber Initiative would define major thrusts and develop the CyberSeed program.

While concentrating in these areas specifically, the Cyber Initiative will continue to interact with other researchers across campus to identify other promising cyber-related research areas to grow the portfolio, Carver said.

This story originally appeared on the University of Alabama's website.

Follow this link:
Four projects receive funding from University of Alabama CyberSeed program - Alabama NewsCenter

Millions of historic newspaper images get the machine learning treatment at the Library of Congress – TechCrunch

Historians interested in the way events and people were chronicled in the old days once had to sort through card catalogs for old papers, then microfiche scans, then digital listings; modern advances, however, can index them down to each individual word and photo. A new effort from the Library of Congress has digitized and organized photos and illustrations from centuries of news using state-of-the-art machine learning.

Led by Ben Lee, a researcher from the University of Washington occupying the Library's Innovator in Residence position, the Newspaper Navigator collects and surfaces data from images from some 16 million pages of newspapers throughout American history.

Lee and his colleagues were inspired by work already being done in Chronicling America, an ongoing digitization effort for old newspapers and other such print materials. While that work used optical character recognition to scan the contents of all the papers, there was also a crowdsourced project in which people identified and outlined images for further analysis. Volunteers drew boxes around images relating to World War I, then transcribed the captions and categorized the picture.

This limited effort set the team thinking.

I loved it because it emphasized the visual nature of the pages. Seeing the visual diversity of the content coming out of the project, I just thought it was so cool, and I wondered what it would be like to chronicle content like this from all over America, Lee told TechCrunch.

He also realized that what the volunteers had created was in fact an ideal set of training data for a machine learning system. The question was, could we use this stuff to create an object detection model to go through every newspaper, to throw open the treasure chest?

The answer, happily, was yes. Using the initial human-powered work of outlining images and captions as training data, they built an AI agent that could do so on its own. After the usual tweaking and optimizing, they set it loose on the full Chronicling America database of newspaper scans.
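
The team's own pipeline is described in the paper mentioned below; purely as a generic sketch of the idea, fine-tuning an off-the-shelf detector on human-drawn boxes might look like the following, where the class names and the single fake training example are stand-ins rather than Newspaper Navigator's actual code.

```python
# Generic sketch: fine-tune an off-the-shelf object detector on
# human-annotated bounding boxes. Class names, the fake page image, and the
# single training step are illustrative stand-ins, not the team's pipeline.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["background", "photograph", "illustration", "map", "cartoon"]

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One illustrative training step on a fake annotated page scan
image = torch.rand(3, 800, 600)
target = {"boxes": torch.tensor([[50.0, 60.0, 300.0, 400.0]]),
          "labels": torch.tensor([1])}   # a "photograph" box drawn by a volunteer
losses = model([image], [target])        # returns a dict of losses in train mode
loss = sum(losses.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```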

It ran for 19 days nonstop, definitely the largest computing job I've ever run, said Lee. But the results are remarkable: millions of images spanning three centuries (from 1789 to 1963), organized with metadata pulled from their own captions. The team describes their work in a paper you can read here.

Assuming the captions are at all accurate, these images, until recently only accessible by trudging through the archives date by date and document by document, can now be searched by their contents, like any other corpus.

Looking for pictures of the president in 1870? No need to browse dozens of papers looking for potential hits and double-checking the caption contents; just search Newspaper Navigator for president 1870. Or if you want editorial cartoons from the World War II era, you can simply get all illustrations from a date range. (The team has already zipped up the photos into yearly packages and plans other collections.)

Here are a few examples of newspaper pages with the machine learning systems determinations overlaid on them (warning: plenty of hat ads and racism):

That's fun for a few minutes for casual browsers, but the key thing is what it opens up for researchers and other sets of documents. The team is throwing a data jam today to celebrate the release of the data set and tools, during which they hope to both discover and enable new applications.

Hopefully it will be a great way to get people together to think of creative ways the data set can be used, said Lee. The idea I'm really excited by from a machine learning perspective is trying to build out a user interface where people can build their own data set. Political cartoons or fashion ads: just let users define what they're interested in and train a classifier based on that.

A sample of what you might get if you asked for maps from the Civil War era.

In other words, Newspaper Navigator's AI agent could be the parent of a whole brood of more specific agents that could be used to scan and digitize other collections. That's actually the plan within the Library of Congress, where the digital collections team has been delighted by the possibilities brought up by Newspaper Navigator, and machine learning in general.

One of the things we're interested in is how computation can expand the way we're enabling search and discovery, said Kate Zwaard. Because we have OCR, you can find things it would have taken months or weeks to find. The Library's book collection has all these beautiful plates and illustrations. But if you want to know, say, what pictures there are of the Madonna and child, some are categorized, but others are inside books that aren't catalogued.

That could change in a hurry with an image-and-caption AI systematically poring over them.

Newspaper Navigator, the code behind it, and all the images and results from it are completely public domain, free to use or modify for any purpose. You can dive into the code at the project's GitHub.

Read more:
Millions of historic newspaper images get the machine learning treatment at the Library of Congress - TechCrunch

Machine Learning Engineer: Challenges and Changes Facing the Profession – Dice Insights

Last year, the fastest-growing job title in the world was that of the machine learning (ML) engineer, and this looks set to continue for the foreseeable future. According to Indeed, the average base salary of an ML engineer in the US is $146,085, and the number of machine learning engineer openings grew by 344% between 2015 and 2018. Machine learning engineers dominate the job postings around artificial intelligence (A.I.), with 94% of job advertisements that contain AI or ML terminology targeting machine learning engineers specifically.

This demonstrates that organizations understand how profound an effect machine learning promises to have on businesses and society. AI and ML are predicted to drive a Fourth Industrial Revolution that will see vast improvements in global productivity and open up new avenues for innovation; by 2030, it's predicted that the global economy will be $15.7 trillion richer solely because of developments from these technologies.

The scale of demand for machine learning engineers is also unsurprising given how complex the role is. The goal of machine learning engineers is to deploy and manage machine learning models that process and learn from the patterns and structures in vast quantities of data, putting them into applications running in production to unlock real business value while ensuring compliance with corporate governance standards.

To do this, machine learning engineers have to sit at the intersection of three complex disciplines. The first discipline is data science, which is where the theoretical models that inform machine learning are created; the second discipline is DevOps, which focuses on the infrastructure and processes for scaling the operationalization of applications; and the third is software engineering, which is needed to make scalable and reliable code to run machine learning programs.

It's the fact that machine learning engineers have to be at ease in the language of data science, software engineering, and DevOps that makes them so scarce, and their value to organizations so great. A machine learning engineer has to have a deep skill set: they must know multiple programming languages, have a very strong grasp of mathematics, and be able to understand and apply theoretical topics in computer science and statistics. They have to be comfortable taking state-of-the-art models, which may only work in a specialized environment, and converting them into robust and scalable systems that are fit for a business environment.

As a burgeoning occupation, the role of a machine learning engineer is constantly evolving. The tools and capabilities that these engineers have in 2020 are radically different from those available in 2015, and this is set to continue evolving as the specialism matures. One of the best ways to understand what the role of a machine learning engineer means to an organization is to look at the challenges they face in practice, and how those challenges evolve over time.

Four major challenges that every machine learning engineer has to deal with are data provenance, good data, reproducibility, and model monitoring.

Across a model's development and deployment lifecycle, there's interaction between a variety of systems and teams. This results in a highly complex chain of data from a variety of sources. At the same time, there is greater demand than ever for data to be audited and for there to be a clear lineage of its organizational uses. This is increasingly a priority for regulators, with financial regulators now demanding that all machine learning data be stored for seven years for auditing purposes.

This not only makes the data and metadata used in models more complex, it also makes the interactions between the constituent pieces of data far more complex. Machine learning engineers therefore need to put the right infrastructure in place to ensure the right data and metadata are accessible, all while making sure everything is properly organized.
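
One common building block for that kind of lineage infrastructure is to fingerprint every dataset a training run touches and log it alongside the run, so an auditor can later verify exactly which data produced which model. The sketch below is a minimal illustration; the field names and the append-only JSON-lines store are assumptions, not a standard.

```python
# Minimal sketch of one provenance practice: content-hash every input file a
# run uses and append an auditable lineage record. Field names and the
# JSON-lines file store are illustrative choices.
import hashlib
import json
import time
from pathlib import Path

def fingerprint(path: str) -> str:
    """Content hash of a data file, used as an immutable lineage ID."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def log_lineage(run_id: str, inputs: list[str], model_version: str) -> None:
    record = {
        "run_id": run_id,
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": [{"path": p, "sha256": fingerprint(p)} for p in inputs],
    }
    with open("lineage_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")

# Hypothetical usage:
# log_lineage("run-0042", ["features.csv", "labels.csv"], "churn-model-1.3.0")
```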


In 2016, it was estimated that the US alone lost $3.1 trillion to bad data: data that's improperly formatted, duplicated, or incomplete. People and businesses across all sectors lose time and money because of this, but in a job that requires building and running accurate models reliant on input data, these issues can seriously jeopardize projects.

IBM estimates that around 80 percent of a data scientist's time is spent finding, cleaning up, and organizing the data they put into their models. Over time, however, increasingly sophisticated error and anomaly detection programs will likely be used to comb through datasets and screen out information that is incomplete or inaccurate.

This means that, as time goes on and machine learning capabilities continue to develop, machine learning engineers will have more tools in their belt to clean up the information their programs use, and will thus be able to spend more of their time putting together the ML programs themselves.
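
For a sense of what that cleanup step looks like in practice, here is a minimal pandas sketch that de-duplicates rows, coerces malformed values, and drops incomplete records before the data reaches a model; the column names and sample values are illustrative.

```python
# Minimal sketch of the cleanup step: de-duplicate, normalize formats, and
# screen out incomplete rows before data reaches a model.
import pandas as pd

raw = pd.DataFrame({
    "sensor_id": ["A1", "A1", "B2", "C3"],
    "reading":   ["10.5", "10.5", "not_a_number", None],
    "timestamp": ["2020-05-01", "2020-05-01", "2020-05-02", "2020-05-03"],
})

clean = (
    raw.drop_duplicates()                                   # duplicated rows
       .assign(reading=lambda d: pd.to_numeric(d["reading"], errors="coerce"),
               timestamp=lambda d: pd.to_datetime(d["timestamp"]))
       .dropna(subset=["reading"])                          # incomplete rows
)
print(clean)
```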

Reproducibility is often defined as the ability to keep a snapshot of the state of a specific machine learning model and to reproduce the same experiment with the exact same results regardless of time and location. This involves a great deal of complexity, given that machine learning requires reproducibility of three components: 1) code, 2) artifacts, and 3) data. If any one of these changes, the result will change.

To add to this complexity, it's also necessary to keep entire pipelines reproducible, which may consist of two or more of these atomic steps, introducing an exponential level of complexity. For machine learning, reproducibility matters because it lets engineers and data scientists know that a model's results can be relied upon when it is deployed live: the results will be the same whether the model is run today or in two years.

Designing infrastructure for machine learning that is reproducible is a huge challenge. It will continue to be a thorn in the side of machine learning engineers for many years to come. One thing that may make this easier in coming years is the rise of universally accepted frameworks for machine learning test environments, which will provide a consistent barometer for engineers to measure their efforts against.
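
At the smallest scale, reproducibility starts with pinning every source of randomness and then verifying that two runs over the same code and data produce bit-identical results, as in the illustrative sketch below (the toy dataset and model are assumptions).

```python
# Minimal sketch: pin the randomness so the same code and data give the
# exact same model, then verify by training twice and comparing outputs.
# The synthetic data and random forest are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train(seed: int = 1234) -> np.ndarray:
    rng = np.random.default_rng(seed)                  # pinned data randomness
    X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
    model = RandomForestClassifier(random_state=seed)  # pinned model randomness
    model.fit(X, y)
    return model.predict_proba(X)

# Same seed -> bit-identical results: the property reproducibility demands
assert np.array_equal(train(), train())
```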

It's easy to forget that the lifecycle of a machine learning model only begins when it's deployed to production. Consequently, a machine learning engineer not only needs to do the work of coding, testing, and deploying a model; they'll also have to develop the right tools to monitor it.

The production environment of a model can often throw up scenarios the machine learning engineer didn't anticipate when creating it. Without monitoring and intervention after deployment, a model can end up being rendered dysfunctional, or produce skewed results, by unexpected data. Without accurate monitoring, results can slowly drift away from what is expected as input data becomes misaligned with the data the model was trained on, producing less and less effective or logical results.
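
A bare-bones version of such monitoring compares the distribution of live inputs against the training distribution and raises an alert when they diverge. The sketch below uses a two-sample Kolmogorov-Smirnov test; the simulated data and the 0.05 significance threshold are illustrative choices.

```python
# Minimal sketch of post-deployment drift monitoring: compare the live input
# distribution against the training distribution and alert on divergence.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
live_feature = rng.normal(loc=0.6, scale=1.0, size=500)  # shifted in production

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.05:  # illustrative significance threshold
    print(f"Drift detected (KS={stat:.3f}): retrain or investigate inputs")
```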

Adversarial attacks on models, often far more sophisticated than tweets aimed at a chatbot, are of increasing concern, and it is clear that monitoring by machine learning engineers is needed to stop a model being rendered counterproductive by unexpected data. As more machine learning models are deployed, and as more economic output becomes dependent upon them, this challenge is only going to grow in prominence for machine learning engineers.

One of the most exciting things about the role of the machine learning engineer is that it's a job that's still being defined and still faces so many open problems. That means machine learning engineers get the thrill of working in a constantly changing field that deals with cutting-edge problems.

Challenges such as data quality are problems we can make major progress on in the coming years. Other challenges, such as monitoring, look set to become more pressing in the more immediate future. Given the constant flux of machine learning engineering as an occupation, it's little wonder that curiosity and an innovative mindset are essential qualities for this relatively new profession.

Alex Housley is CEO of Seldon.

See more here:
Machine Learning Engineer: Challenges and Changes Facing the Profession - Dice Insights

How Machine Learning Is Redefining The Healthcare Industry – Small Business Trends

The global healthcare industry is booming. As per recent research, it is expected to cross the $2 trillion mark this year, despite the sluggish economic outlook and global trade tensions. Human beings, in general, are living longer and healthier lives.

There is increased awareness about living organ donation. Robots are being used for gallbladder removals, hip replacements, and kidney transplants. Early diagnosis of skin cancers with minimum human error is a reality. Breast reconstructive surgeries have enabled breast cancer survivors to partake in rebuilding their glands.

All these advances were unthinkable sixty years ago. Now is an exciting time for the global health care sector as it progresses along its journey into the future.

However, as the worldwide population of 7.7 billion is likely to reach 8.5 billion by 2030, meeting health needs could be a challenge. That is where significant advancements in machine learning (ML) can help identify infection risks, improve the accuracy of diagnostics, and design personalized treatment plans.

Source: Deloitte Insights, 2020 Global Health Care Outlook

In many cases, this technology can even enhance workflow efficiency in hospitals. The possibilities are endless and exciting, which brings us to an essential segment of the article:

Do you understand the concept of the LACE index?

Designed in Ontario in 2004, it identifies patients who are at risk of readmission or death within 30 days of being discharged from the hospital. The calculation is based on four factors: length of stay of the patient in the hospital, acuity of admission, comorbidity (concurrent diseases), and emergency room visits.

The LACE index is widely accepted as a quality-of-care barometer and is famously based on the theory of machine learning. Using patients' past health records, the index helps to predict their future state of health, enabling medical professionals to allocate resources on time and reduce the mortality rate.
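
As a worked illustration, the sketch below computes a LACE score from those four factors using the commonly published point weights (van Walraven et al.); the exact weights and the high-risk cut-off vary between implementations, so treat the numbers as assumptions rather than a clinical reference.

```python
# Minimal sketch of a LACE calculation using commonly published point weights;
# exact weights and risk cut-offs vary by implementation (illustrative only).
def lace_score(los_days: int, acute_admission: bool,
               charlson_index: int, ed_visits_6mo: int) -> int:
    # L: length of stay in hospital
    if los_days < 1:
        l = 0
    elif los_days <= 3:
        l = los_days
    elif los_days <= 6:
        l = 4
    elif los_days <= 13:
        l = 5
    else:
        l = 7
    a = 3 if acute_admission else 0                    # A: acuity of admission
    c = charlson_index if charlson_index < 4 else 5    # C: comorbidity
    e = min(ed_visits_6mo, 4)                          # E: emergency visits
    return l + a + c + e

score = lace_score(los_days=5, acute_admission=True,
                   charlson_index=2, ed_visits_6mo=1)
print(score, "-> high readmission risk" if score >= 10 else "-> lower risk")
```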

This technological advancement has started to lay the foundation for closer collaboration among industry stakeholders, affordable and less invasive surgery options, holistic therapies, and new care delivery models. Here are five examples of current and emerging ML innovations:

From the initial screening of drug compounds to calculating the success rates of a specific medicine based on patients' physiological factors, this technology is being applied across the pipeline: the Knight Cancer Institute in Oregon and Microsoft's Project Hanover are currently applying it to personalize drug combinations to cure blood cancer.

Machine learning has also given birth to new methodologies such as precision medicine and next-generation sequencing that can ensure a drug has the right effect on patients. For example, today, medical professionals can develop algorithms to understand disease processes and design innovative treatments for ailments like Type 2 diabetes.

Signing up volunteers for clinical trials is not easy. Many filters have to be applied to see who is fit for the study. With machine learning, collecting patient data such as past medical records, psychological behavior, family health history, and more is easy.

In addition, the technology is also used to monitor volunteers' biological metrics and the possible harm of the clinical trials in the long run. With such compelling data in hand, medical professionals can reduce the trial period, thereby reducing overall costs and increasing experiment effectiveness.

Every human body functions differently. Reactions to a food item, medicine, or season differ; that is why we have allergies. When such is the case, why is customizing treatment options based on a patient's medical data still such an odd thought?

Machine learning helps medical professionals determine the risk for each patient, depending on their symptoms, past medical records, and family history using micro-bio sensors. These minute gadgets monitor patient health and flag abnormalities without bias, thus enabling more sophisticated capabilities of measuring health.

Cisco reports that machine-to-machine connections in global healthcare are growing at a 30% CAGR, the highest of any industry.

Machine learning is mainly used to mine and analyze patient data to find out patterns and carry out the diagnosis of so many medical conditions, one of them being skin cancer.

Over 5.4 million people in the US are diagnosed with this disease annually. Unfortunately, diagnosis is a visual and time-consuming process, relying on long clinical screenings comprising a biopsy, dermoscopy, and histopathological examination.

But machine learning changes all that. Moleanalyzer, an Australia-based AI software application, calculates and compares the size, diameter, and structure of the moles. It enables the user to take pictures at predefined intervals to help differentiate between benign and malignant lesions on the skin.

The analysis lets oncologists confirm their skin cancer diagnosis using evaluation techniques combined with ML, and they can start treatment faster than usual. Where experts correctly identified malignant skin tumors only 86.6% of the time, Moleanalyzer successfully detected 95%.

Healthcare providers ideally have to submit reports to the government containing the necessary records of patients treated at their hospitals.

Compliance policies are continually evolving, which is why it is even more critical to check that hospital sites are compliant and functioning within legal boundaries. With machine learning, it is easy to collect data from different sources, using different methods, and format it correctly.

For data managers, comparing patient data from various clinics to ensure compliance could be an overwhelming process. Machine learning helps gather, compare, and maintain that data as per the standards laid down by the government, says Dr. Nick Oberheiden, founder and attorney, Oberheiden P.C.

The healthcare industry is steadily transforming through innovative technologies like AI and ML. The latter will soon get integrated into practice as a diagnostic aid, particularly in primary care. It plays a crucial role in shaping a predictive, personalized, and preventive future, making treating people a breeze. What are your thoughts?

Image: Depositphotos.com

Continue reading here:
How Machine Learning Is Redefining The Healthcare Industry - Small Business Trends

Tackling climate change with machine learning: Covid-19 and the energy transition – pv magazine International

The effect the coronavirus pandemic is having on energy systems and environmental policy in Europe was discussed at a recent machine learning and climate change workshop, along with the help artificial intelligence can offer to those planning electricity access in Africa.

The impact of Covid-19 on the energy system was discussed in an online climate change workshop that also considered how machine learning can help electricity planning in Africa.

This year's International Conference on Learning Representations included a workshop held by Climate Change AI, a group of academics and artificial intelligence industry representatives, which considered how machine learning can help tackle climate change.

Bjarne Steffen, senior researcher at the energy politics group at ETH Zürich, shared his insights at the workshop on how Covid-19 and the accompanying economic crisis are affecting recently introduced green policies. The crisis hit at a time when energy policies were experiencing increasing momentum towards climate action, especially in Europe, said Steffen, who added that the coronavirus pandemic has cast into doubt the implementation of such progressive policies.

The academic said there was a risk of overreacting to the public health crisis, as far as progress towards climate change goals was concerned.

Lobbying

Many interest groups from carbon-intensive industries are pushing to remove the emissions trading system and other green policies, said Steffen. In cases where those policies are having a serious impact on carbon-emitting industries, governments should offer temporary waivers during this temporary crisis, instead of overhauling the regulatory structure.

However, the ETH Zürich researcher said any temptation to attach environmental conditions to bail-outs for carbon-intensive industries should be resisted. While it is tempting to push a green agenda in the relief packages, tying short-term environmental conditions to bail-outs is impractical, given the uncertainty in how long this crisis will last, he said. It is better to include provisions that will give more control over future decisions to decarbonize industries, such as the government taking equity shares in companies.

Steffen shared with pv magazine readers an article published in Joule which can be accessed here, and which articulates his arguments about how Covid-19 could affect the energy transition.

Covid-19 in the U.K.

The electricity system in the U.K. is also being affected by Covid-19, according to Jack Kelly, founder of Open Climate Fix, a London-based not-for-profit research laboratory working on greenhouse gas emission reduction.

The crisis has reduced overall electricity use in the U.K., said Kelly. Residential use has increased but this has not offset reductions in commercial and industrial loads.

Steve Wallace, a power system manager at British electricity system operator National Grid ESO, recently told U.K. broadcaster the BBC that electricity demand has fallen 15-20% across the U.K. The National Grid ESO blog has stated the fall-off makes managing grid functions such as voltage regulation more challenging.

Open Climate Fix's Kelly noted that even events such as a nationally coordinated round of applause for key workers were followed by a dramatic surge in demand, stating: On April 16, the National Grid saw a nearly 1 GW spike in electricity demand over 10 minutes after everyone finished clapping for healthcare workers and went about the rest of their evenings.


Climate Change AI workshop panelists also discussed the impact machine learning could have on improving electricity planning in Africa. The Electricity Growth and Use in Developing Economies (e-Guide) initiative, funded by philanthropic organization the Rockefeller Foundation, aims to use data to improve the planning and operation of electricity systems in developing countries.

E-Guide members Nathan Williams, an assistant professor at the Rochester Institute of Technology (RIT) in New York state, and Simone Fobi, a PhD student at Columbia University in NYC, spoke about their work at the Climate Change AI workshop, which closed on Thursday. Williams emphasized the importance of demand prediction, saying: Uncertainty around current and future electricity consumption leads to inefficient planning. The weak link for energy planning tools is the poor quality of demand data.

Fobi said: We are trying to use machine learning to make use of lower-quality data and still be able to make strong predictions.
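
One plausible shape for that kind of model is sketched below: learn a mapping from site-level features to observed consumption where meters exist, then predict demand for unmetered sites. The features, synthetic data, and model choice are illustrative assumptions, not e-Guide's actual method.

```python
# Minimal sketch of the demand-estimation idea: fit on metered sites, then
# predict for unmetered ones. Features and data are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
# Features per site: [households, nighttime light intensity, distance to road (km)]
metered_sites = rng.uniform([50, 0.1, 0.5], [500, 1.0, 20.0], size=(300, 3))
consumption_kwh = (0.8 * metered_sites[:, 0] + 200 * metered_sites[:, 1]
                   - 2 * metered_sites[:, 2] + rng.normal(0, 20, 300))

model = GradientBoostingRegressor().fit(metered_sites, consumption_kwh)

unmetered_site = np.array([[120, 0.4, 8.0]])
print("predicted monthly demand (kWh):", model.predict(unmetered_site)[0])
```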

The market maturity of individual solar home systems and PV mini-grids in Africa means more complex electrification plan modeling is required.

Modeling

When we are doing [electricity] access planning, we are trying to figure out where the demand will be and how much demand will exist so we can propose the right technology, added Fobi. This makes demand estimation crucial to efficient planning.

Unlike many traditional modeling approaches, machine learning is scalable and transferable. RIT's Williams has been using data from nations such as Kenya, which are more advanced in their electrification efforts, to train machine learning models that guide electrification in countries which are not as far down the track.

Williams also discussed work being undertaken by e-Guide members at the Colorado School of Mines, which uses nighttime satellite imagery and machine learning to assess the reliability of grid infrastructure in India.

Rural power

Another e-Guide project, led by Jay Taneja at the University of Massachusetts, Amherst, and co-funded by the Energy and Economic Growth program administered by Oxford Policy Management, uses satellite imagery to identify productive uses of electricity in rural areas by detecting pollution signals from diesel irrigation pumps.

Though good quality data is often not readily available for Africa, Williams added, it does exist.

We have spent years developing trusting relationships with utilities, said the RIT academic. Once our partners realize the value proposition we can offer, they are enthusiastic about sharing their data. We can't do machine learning without high-quality data, and this requires that organizations can effectively collect, organize, store and work with data. Data can transform the electricity sector, but capacity building is crucial.

By Dustin Zubke

This article was amended on 06/05/20 to indicate the Energy and Economic Growth program is administered by Oxford Policy Management, rather than U.S. university Berkeley, as previously stated.

More here:
Tackling climate change with machine learning: Covid-19 and the energy transition - pv magazine International

Machine Learning Engineers Will Not Exist In 10 Years – Machine Learning Times – machine learning & data science news – The Predictive Analytics…

Originally published in Medium, April 28, 2020

The landscape is evolving quickly. Machine Learning will transition to a commonplace part of every Software Engineer's toolkit.

In every field, we get specialized roles in the early days that are replaced by commonplace roles over time. It seems like this is another case of just that.

Let's unpack.

Machine Learning Engineer as a role is a consequence of the massive hype fueling buzzwords like AI and Data Science in the enterprise. In the early days of Machine Learning, it was a very necessary role. And it commanded a nice little pay bump for many! But Machine Learning Engineer has taken on many different personalities depending on who you ask.

The purists among us say a Machine Learning Engineer is someone who takes models out of the lab and into production. They scale Machine Learning systems, turn reference implementations into production-ready software, and oftentimes cross over into Data Engineering. They're typically strong programmers who also have some fundamental knowledge of the models they work with.

But this sounds a lot like a normal software engineer.

Ask some of the top tech companies what Machine Learning Engineer means to them and you might get 10 different answers from 10 survey participants. This should be unsurprising. This is a relatively young role, and the folks posting these jobs are managers, oftentimes with many decades of experience, who don't have the time (or will) to understand the space.

To continue reading this article, click here.

More:
Machine Learning Engineers Will Not Exist In 10 Years - Machine Learning Times - machine learning & data science news - The Predictive Analytics...

Udacity partners with AWS to offer scholarships on machine learning for working professionals – Business Insider India

All applicants will be able to join the AWS Machine Learning Foundations Course. While applications are currently open, enrollment for the course begins on May 19.

This course will provide an understanding of software engineering and AWS machine learning concepts, including production-level coding and practice with object-oriented programming. Students will also learn about deep learning techniques and their applications using AWS DeepComposer.

A major reason behind the increasing uptake of such niche courses among the modern-age learners has to do with the growing relevance of technology across all spheres the world over. In its wake, many high-value job roles are coming up that require a person to possess immense technical proficiency and knowledge in order to assume them. And machine learning is one of the key components of the ongoing AI revolution driving digital transformation worldwide, said Gabriel Dalporto, CEO of Udacity.

The top 325 performers in the foundations course will be awarded a scholarship to join Udacity's Machine Learning Engineer Nanodegree program. In this advanced course, students will work with ML tools from AWS, including real-world projects focused on specific machine learning skills.


The Nanodegree program scholarship will begin on August 19.


See the article here:
Udacity partners with AWS to offer scholarships on machine learning for working professionals - Business Insider India