Daily Archives: May 4, 2020

Pivot Or Perish – We are on the cusp of a new automation era – Livemint

Posted: May 4, 2020 at 11:10 pm

The coronavirus pandemic has upturned businesses worldwide (Reuters). 2 min read. Updated: 05 May 2020, 08:09 AM IST. Staff Writer

We are entering an era of the Big Reboot, a rare, once-in-a-lifetime moment that will change everything forever. Businesses will not be the same. The way of doing business will change, as will key markets and customers. Mint's Pivot or Perish campaign focuses on how companies are responding to a new normal across key sectors.

In our second interview in the Pivot or Perish series, wherein top companies share their vision for mapping the future after the coronavirus outbreak, S.S. Kim, managing director, Hyundai Motors India Ltd, the country's second-largest carmaker, shared how digital will play a critical role in the sales operations of automobile companies. "Usage of vehicles will also change drastically towards greener solutions," Kim said. (Read the full interview here)

"Post-coronavirus outbreak, digital will play a critical role in sales operations of automobile companies. Usage of vehicles will also change drastically towards greener solutions. The manufacturing process and supply chain will evolve differently in the near term, as companies focus on reducing dependencies," said Hyundai Motors India MD S.S. Kim. (Read the full interview here)

Survival calls for adaptation. This sounds Darwinian, but the "fittest" businesses that survive the impact of covid-19 will have no time for trial and error down the generations, or cycles of production, in this case. It would require transformations made at warp speed, based on forecasts that look uncertain and driven by decisions that would test every leader's mettle. (Read Mint's Views)

"The electric vehicle (EV) plan can have some delay for some months for sure as it is more of an investment for the future. That said, we want to maintain our strategy because I think that it is a good opportunity for the three-wheeler business in the next 3-5 years is going to come from last mile transportation and electrification. We do not want to miss that opportunity," says CEO Diego Graffi.

The Nikkei Manufacturing Purchasing Managers' Index, compiled by IHS Markit, plunged to 27.4 last month from March's 51.8, by far its lowest since the survey began in March 2005 and its first time below the 50-mark separating growth from contraction in nearly three years. (Read here)

Diego Graffi speaks about the tough calls the company may have to take as uncertainty looms on resuming operations. "The situation has become more complicated under the third phase of lockdown. We have received the permission to resume operations at our manufacturing plants in Baramati, but the problem is that our suppliers located in the Pune region cannot resume production," he says. (Read the full interview here)

See the original post:

Pivot Or Perish - We are on the cusp of a new automation era - Livemint

Posted in Automation | Comments Off on Pivot Or Perish – We are on the cusp of a new automation era – Livemint

ANZ chief data officer Emma Gray promoted to group executive, data and automation – Finextra

Posted: at 11:10 pm

ANZ today announced the appointment of Emma Gray to the Group Executive Committee in a newly-created role as Group Executive Data and Automation.

Prior to joining ANZ in 2016, Mrs Gray spent three years at Woolworths where she most recently served as Chief Loyalty and Data Officer, reporting to the CEO. Before that she was a partner at Bain & Company, having worked across the United States, Europe and Australia for 15 years.

In her expanded role, Mrs Gray will continue to lead the transformation of the strategic use of data, as well as creating new customer insights and driving automation to ultimately improve customer experience.

Commenting on the appointment, ANZ Chief Executive Officer Shayne Elliott said: "The effective use of data, insights and automation will be key in preparing the bank for the future, particularly as we respond to the challenges presented by COVID-19."

"Emma is an experienced international executive with significant experience in data and customer insights, and the creation of this new group executive role recognises the critical role that data, insights and automation have in the continued digital transformation of ANZ," Mr Elliott said.

The appointment is effective 1 May 2020 and reports to Maile Carnegie, Group Executive Digital and Australia Transformation.

See the original post:

ANZ chief data officer Emma Gray promoted to group executive, data and automation - Finextra

Posted in Automation | Comments Off on ANZ chief data officer Emma Gray promoted to group executive, data and automation – Finextra

IT Process Automation Tool Market 2020 | In-Depth Study On The Current State Of The Industry And Key Insights Of The Business Scenario By 2027 Cole…

Posted: at 11:10 pm

Market Expertz has recently published a new report on the global IT Process Automation Tool market. The study provides profound insights into updated market events and market trends. This, in turn, helps readers better comprehend the market factors and how strongly they influence the market. The sections related to regions, players, dynamics, and strategies are also segmented and sub-segmented to simplify the actual conditions of the industry.

The study has been updated with the impact of the coronavirus and an analysis of the industry's future trends, to ensure that the resulting predictions are as accurate and well grounded as possible. The pandemic has affected all industries, and this report evaluates its impact on the global market.

To get a sample pdf with the skeleton of the Global IT Process Automation Tool Market Report, click here: https://www.marketexpertz.com/sample-enquiry-form/114183

The report presents the profiles of all leading market players operating in the global IT Process Automation Tool market, with their SWOT analysis, fiscal status, recent developments, acquisitions, and mergers. The research report comprises an extensive study of the various market segments and regions, emerging trends, major market drivers, challenges, opportunities, obstructions, and growth-limiting factors in the market.

The report also emphasizes the initiatives undertaken by the companies operating in the market including product innovation, product launches, and technological development to help their organization offer more effective products in the market. It also studies notable business events, including corporate deals, mergers and acquisitions, joint ventures, partnerships, product launches, and brand promotions.

Leading IT Process Automation Tool manufacturers/companies operating at both regional and global levels:

Optessa, Broadcom, SMA Technologies, Microsoft, AutomationEdge, Micro Focus, BMC, VMware, Resolve Systems, ServiceNow, Advanced Systems Concepts, Ayehu, Cortex, PMG

The report also inspects the financial standing of the leading companies, which includes gross profit, revenue generation, sales volume, sales revenue, manufacturing cost, individual growth rate, and other financial ratios.

Order Your Copy Now (Customized report delivered as per your specific requirement) @ https://www.marketexpertz.com/checkout-form/114183

Dominant participants of the market are analyzed based on:

The competitors are segmented into the size of their individual enterprise, buyers, products, raw material usage, consumer base, etc. Additionally, the raw material chain and the supply chain are described to make the user aware of the prevailing costs in the market. Lastly, their strategies and approaches are elucidated for better comprehension. In short, the market research report classifies the competitive spectrum of the global IT Process Automation Tool industry in elaborate detail.

Key highlights of the report:

Market revenue splits by the most promising business segments: by type, by application, and any other business segment if applicable within the scope of the global IT Process Automation Tool market report. The country break-up will help you determine trends and opportunities. The prominent players are examined, and their strategies analyzed.

The Global IT Process Automation Tool Market is segmented:

On the basis of product, we research the production, revenue, price, market share and growth rate, primarily split into Cloud-Based and On-Premises.

For the end users/applications, this report focuses on the status and outlook for major applications/end users, consumption (sales), market share and growth rate of IT Process Automation Tool for each application, including Large Enterprises (1,000+ users), Medium-Sized Enterprises (499-1,000 users) and Small Enterprises (1-499 users).

This IT Process Automation Tool report covers vital elements such as market trends, share, size, and the aspects that facilitate the growth of the companies operating in the market, to help readers implement profitable strategies to boost the growth of their business. The report also analyses the expansion, market size, key segments, market share, application, key drivers, and restraints.

Limited-time discount available! Get your copy at a discounted price @ https://www.marketexpertz.com/discount-enquiry-form/114183

Insights into the IT Process Automation Tool market scenario:

Moreover, the report studies the competitive landscape that this industry offers to new entrants. It therefore gives the user an edge over competitors in the form of reliable projections of the market. The key developments in the industry are shown with respect to the current scenario and approaching advancements. The market report contains key information that makes for an efficient read, such as investment return analysis, trends analysis, investment feasibility analysis and recommendations for growth.

The data presented in this report is thorough, reliable, and the result of extensive research, both primary and secondary. Moreover, the global IT Process Automation Tool market report presents the production, import and export forecast by type, application, and region from 2020 to 2027.

Customization of the Report:

Market Expertz also provides customization options to tailor the reports as per client requirements. This report can be personalized to cater to your research needs. Feel free to get in touch with our sales team, who will ensure that you get a report as per your needs.

Thank you for reading this article. You can also get chapter-wise sections or region-wise report coverage for North America, Europe, Asia Pacific, Latin America, and Middle East & Africa.

Read the full Research Report along with a table of contents, facts and figures, charts, graphs, etc. @ https://www.marketexpertz.com/industry-overview/it-process-automation-tool-global-market

To summarize, the global IT Process Automation Tool market report studies the contemporary market to forecast the growth prospects, challenges, opportunities, risks, threats, and the trends observed in the market that can either propel or curtail the growth rate of the industry. The market factors impacting the global sector also include provincial trade policies, international trade disputes, entry barriers, and other regulatory restrictions.

About Us: Planning to invest in market intelligence products or offerings on the web? Then Market Expertz has just the thing for you: reports from over 500 prominent publishers, with daily updates to our collection, to empower companies and individuals to catch up with vital insights on industries operating across different geographies, trends, share, size and growth rate. There's more to what we offer to our customers. With Market Expertz you have the choice to tap into specialized services without any additional charges.

Contact Us: John Watson, Head of Business Development, 40 Wall St., 28th Floor, New York City, NY 10005, United States. Direct Line: +1-800-819-3052. Visit our news site: http://newssucceed.com

Go here to read the rest:

IT Process Automation Tool Market 2020 | In-Depth Study On The Current State Of The Industry And Key Insights Of The Business Scenario By 2027 Cole...

Posted in Automation | Comments Off on IT Process Automation Tool Market 2020 | In-Depth Study On The Current State Of The Industry And Key Insights Of The Business Scenario By 2027 Cole…

Intelligent automation technology can relieve strain on the NHS – Open Access Government

Posted: at 11:10 pm

One of the challenges facing the public sector during this global pandemic is the issue of whether NHS staff that have developed coronavirus symptoms should stay at home. There is still significant confusion around official self-isolation guidelines and the service's capacity to cope with patient demand is already being pushed to its limits across the country. It is therefore crucial that NHS trusts deploy whatever tools they have at their disposal to limit unnecessary self-isolation and maximise resources. This, in turn, will result in improved doctor performance and increase the number of positive patient outcomes.

With fast-changing government and NHS advice often resulting in misinterpretation, large numbers of NHS workers are (albeit with the best intentions) needlessly staying at home. Aside from lower capacity for treatment across the nation, hospitals' HR departments are experiencing added confusion when it comes to managing the workforce, assigning shift work and predicting their ability to cope with the huge volume of cases the NHS faces.

One way in which these issues can be addressed is through the use of intelligent automation technology. This is because it can often be quickly designed, developed, and implemented to effectively and seamlessly support human workforces, dramatically cutting the time and resources required to carry out mission-critical tasks.

We have witnessed these benefits first-hand after launching an online tool for the NHS which provides personalised, contextual advice on self-isolation to staff showing COVID-19 symptoms. The tool, which takes into account staff members' unique circumstances (and those of others in their households), can be tailored to individual trusts and adapted on-the-go, in line with their requirements.

The portal combines both government and NHS advice, giving straightforward, safe and accurate recommendations to the service's vital, frontline healthcare professionals. Additionally, the system automatically notifies the line managers of workers using the tool of their upcoming absence or lack thereof. Staff are directed to a printable PDF, which is linked directly to the department's online database and should be handed to HR departments.

The results of the tool's rollout so far have been extremely promising. While we were hearing of occupational health teams receiving several hundred calls a day (focused on staff and rota management and allocation), these plummeted within two weeks of the tool being deployed by the Norfolk and Norwich University Hospitals Foundation Trust (NNUH). This not only significantly freed up time for these healthcare workers, it also allowed them to focus other resources on more pressing concerns.

Looking to the future, a key benefit of this technology will be its flexibility: its ability to be rapidly set up and amended in line with dynamic, complex requirements is vital, and it is being regularly updated to reflect newly discovered secondary symptoms of COVID-19. In addition, we're in early development of a second tool that will perform a comprehensive evaluation of various factors to determine in which frontline zones specific members of staff may operate.

Of course, this is just one case study of how automation technology can be used to streamline and alleviate pressure on public services and organisations. However, I believe it is emblematic of how the digital revolution currently spreading across all sectors can make a tangible difference to people's lives when implemented responsibly and carefully.

See the article here:

Intelligent automation technology can relieve strain on the NHS - Open Access Government

Posted in Automation | Comments Off on Intelligent automation technology can relieve strain on the NHS – Open Access Government

How Machine Learning Is Redefining The Healthcare Industry – Small Business Trends

Posted: at 11:09 pm

The global healthcare industry is booming. As per recent research, it is expected to cross the $2 trillion mark this year, despite the sluggish economic outlook and global trade tensions. Human beings, in general, are living longer and healthier lives.

There is increased awareness about living organ donation. Robots are being used for gallbladder removals, hip replacements, and kidney transplants. Early diagnosis of skin cancers with minimum human error is a reality. Breast reconstructive surgeries have enabled breast cancer survivors to partake in rebuilding their glands.

All of this was unthinkable sixty years ago. Now is an exciting time for the global health care sector as it progresses along its journey for the future.

However, as the worldwide population of 7.7 billion is likely to reach 8.5 billion by 2030, meeting health needs could be a challenge. That is where significant advancements in machine learning (ML) can help identify infection risks, improve the accuracy of diagnostics, and design personalized treatment plans.

Source: Deloitte Insights, 2020 Global Health Care Outlook

In many cases, this technology can even enhance workflow efficiency in hospitals. The possibilities are endless and exciting, which brings us to an essential segment of the article:

Do you understand the concept of the LACE index?

Designed in Ontario in 2004, it identifies patients who are at risk of readmission or death within 30 days of being discharged from the hospital. The calculation is based on four factors: length of stay of the patient in the hospital, acuity of admission, concurrent diseases (comorbidities), and emergency room visits.

The LACE index is widely accepted as a quality of care barometer and is famously based on the theory of machine learning. Using the past health records of the patients, the concept helps to predict their future state of health. It enables medical professionals to allocate resources on time to reduce the mortality rate.
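To make the arithmetic concrete, here is a minimal Python sketch of a commonly cited LACE scoring scheme. The exact point bands and the "high risk" threshold vary between implementations, so treat them as illustrative assumptions rather than a clinical reference.

```python
def lace_score(length_of_stay_days, acute_admission, charlson_index, ed_visits_6mo):
    """Approximate LACE readmission-risk score.

    Point bands follow one commonly published scheme and are assumptions
    for illustration, not clinical guidance.
    """
    # L: length of stay in hospital
    if length_of_stay_days < 1:
        l_points = 0
    elif length_of_stay_days <= 3:
        l_points = int(length_of_stay_days)  # 1-3 days map to 1-3 points
    elif length_of_stay_days <= 6:
        l_points = 4
    elif length_of_stay_days <= 13:
        l_points = 5
    else:
        l_points = 7

    # A: acuity of admission (admitted via the emergency department)
    a_points = 3 if acute_admission else 0

    # C: comorbidity (Charlson index), capped at 5 points
    c_points = charlson_index if charlson_index < 4 else 5

    # E: emergency department visits in the previous six months, capped at 4
    e_points = min(ed_visits_6mo, 4)

    return l_points + a_points + c_points + e_points


# Example: a 5-day acute admission, two comorbidities, one prior ED visit
print(lace_score(5, True, 2, 1))  # 4 + 3 + 2 + 1 = 10, often treated as high risk
```

In practice the inputs would come from the hospital's records system; the sketch only shows how the four factors combine into a single risk score.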

This technological advancement has started to lay the foundation for closer collaboration among industry stakeholders, affordable and less invasive surgery options, holistic therapies, and new care delivery models. Here are five examples of current and emerging ML innovations:

From the initial screening of drug compounds to calculating the success rates of a specific medicine based on patients' physiological factors, the Knight Cancer Institute in Oregon and Microsoft's Project Hanover are currently applying this technology to personalize drug combinations to cure blood cancer.

Machine learning has also given birth to new methodologies such as precision medicine and next-generation sequencing that can ensure a drug has the right effect on the patients. For example, today, medical professionals can develop algorithms to understand disease processes and design innovative treatments for ailments like Type 2 diabetes.

Signing up volunteers for clinical trials is not easy. Many filters have to be applied to see who is fit for the study. With machine learning, collecting patient data such as past medical records, psychological behavior, family health history, and more is easy.

The technology is also used to monitor the biological metrics of the volunteers and the possible harm of the clinical trials in the long run. With such compelling data in hand, medical professionals can reduce the trial period, thereby reducing overall costs and increasing experiment effectiveness.

Every human body functions differently. Reactions to a food item, medicine, or season differ. That is why we have allergies. When such is the case, why is customizing treatment options based on the patient's medical data still such an odd thought?

Machine learning helps medical professionals determine the risk for each patient, depending on their symptoms, past medical records, and family history using micro-bio sensors. These minute gadgets monitor patient health and flag abnormalities without bias, thus enabling more sophisticated capabilities of measuring health.

Cisco reports that machine-to-machine connection in global healthcare is growing at a 30% CAGR, the highest of any industry.

Machine learning is mainly used to mine and analyze patient data to find patterns and carry out the diagnosis of many medical conditions, one of them being skin cancer.

Over 5.4 million people in the US are diagnosed with this disease annually. Unfortunately, diagnosis is a visual and time-consuming process. It relies on long clinical screenings, comprising a biopsy, dermoscopy, and histopathological examination.

But machine learning changes all that. Moleanalyzer, an Australia-based AI software application, calculates and compares the size, diameter, and structure of the moles. It enables the user to take pictures at predefined intervals to help differentiate between benign and malignant lesions on the skin.

The analysis lets oncologists confirm their skin cancer diagnosis using evaluation techniques combined with ML, and they can start the treatment faster than usual. Where experts could correctly identify malignant skin tumors only 86.6% of the time, Moleanalyzer successfully detected 95%.

Healthcare providers ideally have to submit reports to the government with the necessary records of the patients treated at their hospitals.

Compliance policies are continually evolving, which is why it is even more critical to check that hospital sites are compliant and functioning within legal boundaries. With machine learning, it is easy to collect data from different sources using different methods, and to format it correctly.

"For data managers, comparing patient data from various clinics to ensure they are compliant could be an overwhelming process. Machine learning helps gather, compare, and maintain that data as per the standards laid down by the government," says Dr. Nick Oberheiden, Founder and Attorney, Oberheiden P.C.

The healthcare industry is steadily transforming through innovative technologies like AI and ML. The latter will soon get integrated into practice as a diagnostic aid, particularly in primary care. It plays a crucial role in shaping a predictive, personalized, and preventive future, making treating people a breeze. What are your thoughts?

Image: Depositphotos.com

Read this article:

How Machine Learning Is Redefining The Healthcare Industry - Small Business Trends

Comments Off on How Machine Learning Is Redefining The Healthcare Industry – Small Business Trends

Machine Learning Engineer: Challenges and Changes Facing the Profession – Dice Insights

Posted: at 11:09 pm

Last year, the fastest-growing job title in the world was that of the machine learning (ML) engineer, and this looks set to continue for the foreseeable future. According to Indeed, the average base salary of an ML engineer in the US is $146,085, and the number of machine learning engineer openings grew by 344% between 2015 and 2018. Machine learning engineers dominate the job postings around artificial intelligence (A.I.), with 94% of job advertisements that contain AI or ML terminology targeting machine learning engineers specifically.

This demonstrates that organizations understand how profound an effect machine learning promises to have on businesses and society. AI and ML are predicted to drive a Fourth Industrial Revolution that will see vast improvements in global productivity and open up new avenues for innovation; by 2030, it's predicted that the global economy will be $15.7 trillion richer solely because of developments from these technologies.

The scale of demand for machine learning engineers is also unsurprising given how complex the role is. The goal of machine learning engineers is to deploy and manage machine learning models that process and learn from the patterns and structures in vast quantities of data, into applications running in production, to unlock real business value while ensuring compliance with corporate governance standards.

To do this, machine learning engineers have to sit at the intersection of three complex disciplines. The first discipline is data science, which is where the theoretical models that inform machine learning are created; the second discipline is DevOps, which focuses on the infrastructure and processes for scaling the operationalization of applications; and the third is software engineering, which is needed to make scalable and reliable code to run machine learning programs.

It's the fact that machine learning engineers have to be at ease in the language of data science, software engineering, and DevOps that makes them so scarce, and their value to organizations so great. A machine learning engineer has to have a deep skill-set; they must know multiple programming languages, have a very strong grasp of mathematics, and be able to understand and apply theoretical topics in computer science and statistics. They have to be comfortable with taking state-of-the-art models, which may only work in a specialized environment, and converting them into robust and scalable systems that are fit for a business environment.

As a burgeoning occupation, the role of a machine learning engineer is constantly evolving. The tools and capabilities that these engineers have in 2020 are radically different from those they had available in 2015, and this is set to continue to evolve as the specialism matures. One of the best ways to understand what the role of a machine learning engineer means to an organization is to look at the challenges they face in practice, and how these evolve over time.

Four major challenges that every machine learning engineer has to deal with are data provenance, good data, reproducibility, and model monitoring.

Across a model's development and deployment lifecycle, there's interaction between a variety of systems and teams. This results in a highly complex chain of data from a variety of sources. At the same time, there is a greater demand than ever for data to be audited, and for there to be a clear lineage of its organizational uses. This is increasingly a priority for regulators, with financial regulators now demanding that all machine learning data be stored for seven years for auditing purposes.

This not only makes the data and metadata used in models more complex, but also makes the interactions between the constituent pieces of data far more complex. This means machine learning engineers need to put the right infrastructure in place to ensure the right data and metadata are accessible, all while making sure they are properly organized.
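As a purely hypothetical illustration of the kind of bookkeeping this implies, the sketch below attaches a minimal lineage record to each dataset an ML pipeline touches. The field names and the retention horizon are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetLineage:
    """Minimal lineage record for one dataset used by an ML pipeline."""
    name: str
    source_system: str                                   # where the data came from
    derived_from: list = field(default_factory=list)     # names of upstream datasets
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    retention_until: str = ""                            # e.g. a seven-year audit horizon

# Hypothetical datasets: raw source data and a feature table derived from it
raw = DatasetLineage(name="transactions_raw", source_system="core_banking",
                     retention_until="2027-05-04")
features = DatasetLineage(name="fraud_features_v3", source_system="feature_store",
                          derived_from=[raw.name], retention_until="2027-05-04")
print(features)
```

Even a record this small answers the two questions auditors tend to ask first: where did the data come from, and what was built from it.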

In 2016, it was estimated that the US alone lost $3.1 trillion to bad data: data that's improperly formatted, duplicated, or incomplete. People and businesses across all sectors lose time and money because of this, but in a job that requires building and running accurate models reliant on input data, these issues can seriously jeopardize projects.

IBM estimates that around 80 percent of a data scientist's time is spent finding, cleaning up, and organizing the data they put into their models. Over time, however, increasingly sophisticated error and anomaly detection programs will likely be used to comb through datasets and screen out information that is incomplete or inaccurate.

This means that, as time goes on and machine learning capabilities continue to develop, we'll see machine learning engineers gain more tools to clean up the information their programs use, and thus be able to spend more of their time on putting together ML programs themselves.
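The kind of automated screening described above can start very simply. Below is a minimal, hypothetical sketch using pandas: it drops exact duplicates, flags incomplete rows, and flags values outside caller-supplied plausible ranges. The column names and ranges are assumptions, not part of any particular product.

```python
import pandas as pd

def screen_records(df: pd.DataFrame, plausible_ranges: dict) -> pd.DataFrame:
    """Basic hygiene pass over a record table (illustrative only).

    Drops exact duplicates, flags rows with missing values, and flags values
    outside caller-supplied plausible ranges.
    """
    df = df.drop_duplicates().copy()

    # Flag incomplete rows rather than silently dropping them
    df["incomplete"] = df.isna().any(axis=1)

    # Flag rows with any value outside its plausible range
    out_of_range = pd.Series(False, index=df.index)
    for col, (lo, hi) in plausible_ranges.items():
        out_of_range |= ~df[col].between(lo, hi) & df[col].notna()
    df["suspect"] = out_of_range

    return df

records = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "age": [34, 34, 51, None],
    "systolic_bp": [120, 120, 118, 400],  # 400 falls outside the range given below
})
print(screen_records(records, {"age": (0, 120), "systolic_bp": (60, 250)}))
```

Flagging rather than deleting keeps the decision about what counts as "bad data" with a human reviewer, which matters when the downstream model affects real outcomes.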

Reproducibility is often defined as the ability to keep a snapshot of the state of a specific machine learning model and to reproduce the same experiment with the exact same results regardless of time and location. This involves a great level of complexity, given that machine learning requires reproducibility of three components: 1) code, 2) artifacts, and 3) data. If one of these changes, then the result will change.

To add to this complexity, it's also necessary to keep reproducibility of entire pipelines that may consist of two or more of these atomic steps, which introduces an exponential level of complexity. For machine learning, reproducibility is important because it lets engineers and data scientists know that the results of a model can be relied upon when it is deployed live, as they will be the same whether it is run today or in two years.

Designing infrastructure for machine learning that is reproducible is a huge challenge. It will continue to be a thorn in the side of machine learning engineers for many years to come. One thing that may make this easier in coming years is the rise of universally accepted frameworks for machine learning test environments, which will provide a consistent barometer for engineers to measure their efforts against.
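One low-tech way to reason about the three components named above is to fingerprint them. The sketch below records content hashes of code, artifacts, and data in a single manifest; the file paths are placeholders, and this is a toy illustration rather than a substitute for purpose-built experiment-tracking or data-versioning tools.

```python
import hashlib
import json
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, streamed in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(code_files, artifact_files, data_files, out="snapshot.json"):
    """Record content hashes for code, artifacts, and data in one manifest.

    If any hash differs between two runs, they are by definition not the
    same experiment, which is the point the reproducibility discussion makes.
    """
    manifest = {
        "code": {str(p): file_digest(Path(p)) for p in code_files},
        "artifacts": {str(p): file_digest(Path(p)) for p in artifact_files},
        "data": {str(p): file_digest(Path(p)) for p in data_files},
    }
    Path(out).write_text(json.dumps(manifest, indent=2))
    return manifest

# Hypothetical usage; the paths are placeholders:
# snapshot(["train.py"], ["model.pkl"], ["train.csv"])
```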

It's easy to forget that the lifecycle of a machine learning model only begins when it's deployed to production. Consequently, a machine learning engineer not only needs to do the work of coding, testing, and deploying a model, but they'll also have to develop the right tools to monitor it.

The production environment of a model can often throw up scenarios the machine learning engineer didn't anticipate when they were creating it. Without monitoring and intervention after deployment, it's likely that a model can end up being rendered dysfunctional or produce skewed results by unexpected data. Without accurate monitoring, results can often slowly drift away from what is expected due to input data becoming misaligned with the data a model was trained with, producing less and less effective or logical results.

Adversarial attacks on models, often far more sophisticated than tweets and a chatbot, are of increasing concern, and it is clear that monitoring by machine learning engineers is needed to stop a model being rendered counterproductive by unexpected data. As more machine learning models are deployed, and as more economic output becomes dependent upon these models, this challenge is only going to grow in prominence for machine learning engineers going forward.
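A minimal sketch of the kind of post-deployment check described above: comparing the live distribution of one input feature against the distribution seen in training with a two-sample Kolmogorov-Smirnov test. The feature values and the alert threshold are assumptions; a production system would track many features and tune thresholds per feature.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values, live_values, p_threshold=0.01):
    """Flag a feature whose live distribution has drifted from training.

    Uses a two-sample Kolmogorov-Smirnov test; the p-value threshold is an
    arbitrary illustrative choice, not a recommendation.
    """
    result = ks_2samp(train_values, live_values)
    return result.pvalue < p_threshold, result.statistic, result.pvalue

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)   # distribution seen during training
live = rng.normal(loc=0.4, scale=1.0, size=1_000)    # production inputs have shifted
drifted, stat, p = drift_alert(train, live)
print(f"drifted={drifted}, KS statistic={stat:.3f}, p-value={p:.2e}")
```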

One of the most exciting things about the role of the machine learning engineer is that it's a job that's still being defined, and still faces so many open problems. That means machine learning engineers get the thrill of working in a constantly changing field that deals with cutting-edge problems.

Challenges such as data quality may be problems we can make major progress towards in the coming years. Other challenges, such as monitoring, look set to become more pressing in the more immediate future. Given the constant flux of machine learning engineering as an occupation, it's of little wonder that curiosity and an innovative mindset are essential qualities for this relatively new profession.

Alex Housley is CEO of Seldon.

See the original post:

Machine Learning Engineer: Challenges and Changes Facing the Profession - Dice Insights

Comments Off on Machine Learning Engineer: Challenges and Changes Facing the Profession – Dice Insights

Tackling climate change with machine learning: Covid-19 and the energy transition – pv magazine International

Posted: at 11:09 pm

The effect the coronavirus pandemic is having on energy systems and environmental policy in Europe was discussed at a recent machine learning and climate change workshop, along with the help artificial intelligence can offer to those planning electricity access in Africa.

The impact of Covid-19 on the energy system was discussed in an online climate change workshop that also considered how machine learning can help electricity planning in Africa.

This year's International Conference on Learning Representations included a workshop held by Climate Change AI, a group of academics and artificial intelligence industry representatives, which considered how machine learning can help tackle climate change.

Bjarne Steffen, senior researcher at the energy politics group at ETH Zürich, shared his insights at the workshop on how Covid-19 and the accompanying economic crisis are affecting recently introduced green policies. "The crisis hit at a time when energy policies were experiencing increasing momentum towards climate action, especially in Europe," said Steffen, who added that the coronavirus pandemic has cast into doubt the implementation of such progressive policies.

The academic said there was a risk of overreacting to the public health crisis, as far as progress towards climate change goals was concerned.

Lobbying

"Many interest groups from carbon-intensive industries are pushing to remove the emissions trading system and other green policies," said Steffen. "In cases where those policies are having a serious impact on carbon-emitting industries, governments should offer temporary waivers during this temporary crisis, instead of overhauling the regulatory structure."

However, the ETH Zürich researcher said any temptation to impose environmental conditions on bail-outs for carbon-intensive industries should be resisted. "While it is tempting to push a green agenda in the relief packages, tying short-term environmental conditions to bail-outs is impractical, given the uncertainty in how long this crisis will last," he said. "It is better to include provisions that will give more control over future decisions to decarbonize industries, such as the government taking equity shares in companies."

Steffen shared with pv magazine readers an article published in Joule which can be accessed here, and which articulates his arguments about how Covid-19 could affect the energy transition.

Covid-19 in the U.K.

The electricity system in the U.K. is also being affected by Covid-19, according to Jack Kelly, founder of London-based, not-for-profit, greenhouse gas emission reduction research laboratory Open Climate Fix.

"The crisis has reduced overall electricity use in the U.K.," said Kelly. "Residential use has increased but this has not offset reductions in commercial and industrial loads."

Steve Wallace, a power system manager at British electricity system operator National Grid ESO, recently told U.K. broadcaster the BBC that electricity demand has fallen 15-20% across the U.K. The National Grid ESO blog has stated that the fall-off makes managing grid functions, such as voltage regulation, more challenging.

Open Climate Fix's Kelly noted that even events such as a nationally coordinated round of applause for key workers were followed by a dramatic surge in demand, stating: "On April 16, the National Grid saw a nearly 1 GW spike in electricity demand over 10 minutes after everyone finished clapping for healthcare workers and went about the rest of their evenings."

Read pv magazine's coverage of Covid-19, and tell us how it is affecting your solar and energy storage operations. Email editors@pv-magazine.com to share your experiences.

Climate Change AI workshop panelists also discussed the impact machine learning could have on improving electricity planning in Africa. The Electricity Growth and Use in Developing Economies (e-Guide) initiative, funded by the Rockefeller Foundation, a philanthropic organization built on fossil fuel wealth, aims to use data to improve the planning and operation of electricity systems in developing countries.

E-Guide members Nathan Williams, an assistant professor at the Rochester Institute of Technology (RIT) in New York state, and Simone Fobi, a PhD student at Columbia University in NYC, spoke about their work at the Climate Change AI workshop, which closed on Thursday. Williams emphasized the importance of demand prediction, saying: "Uncertainty around current and future electricity consumption leads to inefficient planning. The weak link for energy planning tools is the poor quality of demand data."

Fobi said: "We are trying to use machine learning to make use of lower-quality data and still be able to make strong predictions."

The market maturity of individual solar home systems and PV mini-grids in Africa means more complex electrification plan modeling is required.

Modeling

"When we are doing [electricity] access planning, we are trying to figure out where the demand will be and how much demand will exist so we can propose the right technology," added Fobi. "This makes demand estimation crucial to efficient planning."

Unlike many traditional modeling approaches, machine learning is scalable and transferable. Rochester's Williams has been using data from nations such as Kenya, which are more advanced in their electrification efforts, to train machine learning models to make predictions to guide electrification efforts in countries which are not as far down the track.

Williams also discussed work being undertaken by e-Guide members at the Colorado School of Mines, which uses nighttime satellite imagery and machine learning to assess the reliability of grid infrastructure in India.

Rural power

Another e-Guide project, led by Jay Taneja at the University of Massachusetts, Amherst, and co-funded by the Energy and Economic Growth program on development spending, based at Berkeley, uses satellite imagery to identify productive uses of electricity in rural areas by detecting pollution signals from diesel irrigation pumps.

Though good quality data is often not readily available for Africa, Williams added, it does exist.

"We have spent years developing trusting relationships with utilities," said the RIT academic. "Once our partners realize the value proposition we can offer, they are enthusiastic about sharing their data ... We can't do machine learning without high-quality data, and this requires that organizations can effectively collect, organize, store and work with data. Data can transform the electricity sector, but capacity building is crucial."

By Dustin Zubke

Read more from the original source:

Tackling climate change with machine learning: Covid-19 and the energy transition - pv magazine International

Comments Off on Tackling climate change with machine learning: Covid-19 and the energy transition – pv magazine International

SoftSmile Reveals New Aligner Software Solution Using Biomechanics and Advanced Machine Learning Algorithms at Annual Meeting of American Association…

Posted: at 11:09 pm

"We have long been perfecting our technology to provide orthodontists with the tools they need to fully utilize their skills, lower the cost of the treatment for the patients and usher in the next revolution in orthodontics," said Khamzat Asabaev, Co-Founder and CEO. "We are confident that the proprietary technology and algorithms integrated in our software will help to transform orthodontists' practices and such problems as uncertainty in the result, high cost and lack of accessible treatment will vanish. We believe that SoftSmile is the solution orthodontists and patients have been waiting for."

The SoftSmile aligner software helps to create a 3-D model of the orthodontic treatment plan, including the revolutionary visualization of teeth roots and movement of the lower jaw during the treatment. It models optimized teeth movement and suggests, along with the knowledge and skill from the orthodontist, the exact number of aligners required for achieving the optimized result. The software is enhanced by AI, which generates key aspects of a treatment plan. Machine learning algorithms automate the time-consuming processes and facilitate the final setup of aligned teeth based on orthodontic measurements, allowing for the industry's most precise software for modelling a digitally optimized treatment plan.

Following the creation of the treatment plan, SoftSmile's software automates 3-D printing for orthodontists, making the entire manufacturing process smooth, intuitive and straightforward. SoftSmile is currently working with the pioneering 3D printing company Carbon to integrate software and manufacturing so that distributed dental printing will become accessible to a wider group of orthodontists.

"I was very impressed when I first started using the SoftSmile technology in my practice that specializes in the application of lingual braces," said Roberto Stradi, DDS MSc in Caserta, Italy. "The learning curve for this technology was fairly minimal and the software was completely customizable, allowing me to remain in control and produce a successful result for my patients."

The SoftSmile team started developing their aligner software technology in mid-2019. Their digital platform is based on years of work and research by SoftSmile co-founders, experienced orthodontists and engineers, including pioneers who first printed lingual braces with a 3-D printer and who devoted years to building the best solutions for orthodontists. Over 100 of the most acclaimed orthodontists from around the world have tested the technology developed by the SoftSmile team, treating over 1,000 people outside of the U.S. In the United States, SoftSmile is backed by the Columbia Start Up Lab and cooperates with Columbia University in general.

To date, the SoftSmile team has been granted 12 U.S. patents for their innovative technologies (over 10 additional U.S. patents are pending). Within the upcoming months, SoftSmile will be submitting to the U.S. Food and Drug Administration a Section 510(k) premarket notification of intent to market its software. Ahead of the notification, orthodontists can request to use the SoftSmile software in their practice on a trial basis.

About SoftSmile, Inc.

SoftSmile is a New York-based innovation company that helps orthodontists to deliver custom, high-quality and affordable treatment to their patients. Established in 2019, SoftSmile designs and develops an advanced, AI-driven orthodontic software package that applies cutting-edge machine learning technology with sound biomechanical and mathematical principles in a user-friendly interface, giving orthodontists the ability to create aligners or lingual braces treatment plans in-house. Precise and high-quality aligners and braces are created using robust 3-D printing technologies that can be produced on-site at an orthodontist's office or provided by SoftSmile. These products give orthodontists unparalleled control and precision of the treatment they deliver to their patients.

For additional information about SoftSmile, Inc. please visit https://www.softsmile.com/.

SOURCE SoftSmile

https://www.softsmile.com

View original post here:

SoftSmile Reveals New Aligner Software Solution Using Biomechanics and Advanced Machine Learning Algorithms at Annual Meeting of American Association...

Comments Off on SoftSmile Reveals New Aligner Software Solution Using Biomechanics and Advanced Machine Learning Algorithms at Annual Meeting of American Association…

Microsoft: This is how to protect your machine-learning applications – TechRepublic

Posted: at 11:09 pm

Understanding failures and attacks can help us build safer AI applications.

Modern machine learning (ML) has become an important tool in a very short time. We're using ML models across our organisations, either rolling our own in R and Python, using tools like TensorFlow to learn and explore our data, or building on cloud- and container-hosted services like Azure's Cognitive Services. It's a technology that helps predict maintenance schedules, spots fraud and damaged parts, and parses our speech, responding in a flexible way.

SEE: Prescriptive analytics: An insider's guide (free PDF) (TechRepublic)

The models that drive our ML applications are incredibly complex, training neural networks on large data sets. But there's a big problem: they're hard to explain or understand. Why does a model parse a red blob with white text as a stop sign and not a soft drink advert? It's that complexity which hides the underlying risks that are baked into our models, and the possible attacks that can severely disrupt the business processes and services we're building using those very models.

It's easy to imagine an attack on a self-driving car that could make it ignore stop signs, simply by changing a few details on the sign, or a facial recognition system that would detect a pixelated bandanna as Brad Pitt. These adversarial attacks take advantage of the ML models, guiding them to respond in a way that's not how they're intended to operate, distorting the input data by changing the physical inputs.

Microsoft is thinking a lot about how to protect machine learning systems. They're key to its future -- from tools being built into Office, to its Azure cloud-scale services, and managing its own and your networks, even delivering security services through ML-powered tools like Azure Sentinel. With so much investment riding on its machine-learning services, it's no wonder that many of Microsoft's presentations at the RSA security conference focused on understanding the security issues with ML and on how to protect machine-learning systems.

Attacks on machine-learning systems need access to the models used, so you need to keep your models private. That goes for small models that might be helping run your production lines as much as the massive models that drive the likes of Google, Bing and Facebook. If I get access to your model, I can work out how to affect it, either looking for the right data to feed it that will poison the results, or finding a way past the model to get the results I want.

Much of this work has been published in a paper in conjunction with the Berkman Klein Center, on failure modes in machine learning. As the paper points out, a lot of work has been done in finding ways to attack machine learning, but not much on how to defend it. We need to build a credible set of defences around machine learning's neural networks, in much the same way as we protect our physical and virtual network infrastructures.

Attacks on ML systems are failures of the underlying models. They are responding in unexpected, and possibly detrimental ways. We need to understand what the failure modes of machine-learning systems are, and then understand how we can respond to those failures. The paper talks about two failure modes: intentional failures, where an attacker deliberately subverts a system, and unintentional failures, where there's an unsafe element in the ML model being used that appears correct but delivers bad outcomes.

By understanding the failure modes we can build threat models and apply them to our ML-based applications and services, and then respond to those threats and defend our new applications.

The paper suggests 11 different attack classifications, many of which get around our standard defence models. It's possible to compromise a machine-learning system without needing access to the underlying software and hardware, so standard authorisation techniques can't protect ML-based systems and we need to consider alternative approaches.

What are these attacks? The first, perturbation attacks, modify queries to change the response to one the attackers desire. That's matched by poisoning attacks, which achieve the same result by contaminating the training data. Machine-learning models often include important intellectual property, and some attacks like model inversion aim to extract that data. Similarly, a membership inference attack will try to determine whether specific data was in the initial training set. Closely related is the concept of model stealing, using queries to extract the model.

SEE: 5G: What it means for IoT (free PDF)

Other attacks include reprogramming the system around the ML model, so that either results or inputs are changed. Closely related are adversarial attacks that change physical objects, adding duct tape to signs to confuse navigation or using specially printed bandanas to disrupt facial-recognition systems. Some attacks depend on the provider: a malicious provider can extract training data from customer systems. They can add backdoors to systems, or compromise models as they're downloaded.
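To make the perturbation and adversarial-example categories concrete, here is a minimal sketch of a gradient-based perturbation in the spirit of the fast gradient sign method, written against a hypothetical PyTorch classifier. The model, inputs, and epsilon value are placeholders, not anything taken from Microsoft's paper.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor, eps: float = 0.03):
    """Craft an adversarial example with a fast-gradient-sign-style step.

    `model`, `x` (a batch of inputs scaled to [0, 1]), and `label` are
    placeholders; `eps` bounds how far each input value may move, which is
    what keeps the change small enough to be hard to notice.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step each input element in the direction that increases the loss
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage:
# x_adv = fgsm_perturb(classifier, images, labels)
# assert (x_adv - images).abs().max() <= 0.03 + 1e-6
```

The defensive takeaway is the same one the article draws: if queries and inputs are not controlled, small, targeted changes can steer a model's output, so access to models and their inputs needs to be treated as a security boundary.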

While many of these attacks are new and targeted specifically at machine-learning systems, they are still computer systems and applications, and are vulnerable to existing exploits and techniques, allowing attackers to use familiar approaches to disrupt ML applications.

It's a long list of attack types, but understanding what's possible allows us to think about the threats our applications face. More importantly, they provide an opportunity to think about defences and how we protect machine-learning systems: building better, more secure training sets, locking down ML platforms, and controlling access to inputs and outputs, working with trusted applications and services.

Attacks are not the only risk: we must be aware of unintended failures -- problems that come from the algorithms we use or from how we've designed and tested our ML systems. We need to understand how reinforcement learning systems behave, how systems respond in different environments, if there are natural adversarial effects, or how changing inputs can change results.

If we're to defend machine-learning applications, we need to ensure that they have been tested as fully as possible, in as many conditions as possible. The apocryphal stories of early machine-learning systems that identified trees instead of tanks, because all the training images were of tanks under trees, are a sign that these aren't new problems, and that we need to be careful about how we train, test, and deploy machine learning. We can only defend against intentional attacks if we know that we've protected ourselves and our systems from mistakes we've made. The old adage "test, test, and test again" is key to building secure and safe machine learning -- even when we're using pre-built models and service APIs.

See more here:

Microsoft: This is how to protect your machine-learning applications - TechRepublic

Comments Off on Microsoft: This is how to protect your machine-learning applications – TechRepublic

Machine Learning Engineers Will Not Exist In 10 Years – Machine Learning Times – machine learning & data science news – The Predictive Analytics…

Posted: at 11:09 pm

Originally published in Medium, April 28, 2020

The landscape is evolving quickly. Machine Learning will transition to a commonplace part of every Software Engineer's toolkit.

In every field we get specialized roles in the early days, replaced by the commonplace role over time. It seems like this is another case of just that.

Let's unpack.

Machine Learning Engineer as a role is a consequence of the massive hype fueling buzzwords like AI and Data Science in the enterprise. In the early days of Machine Learning, it was a very necessary role. And it commanded a nice little pay bump for many! But Machine Learning Engineer has taken on many different personalities depending on who you ask.

The purists among us say a Machine Learning Engineer is someone who takes models out of the lab and into production. They scale Machine Learning systems, turn reference implementations into production-ready software, and oftentimes cross over into Data Engineering. They're typically strong programmers who also have some fundamental knowledge of the models they work with.

But this sounds a lot like a normal software engineer.

Ask some of the top tech companies what Machine Learning Engineer means to them and you might get 10 different answers from 10 survey participants. This should be unsurprising. This is a relatively young role, and the folks posting these jobs are managers, oftentimes with many decades of experience, who don't have the time (or will) to understand the space.

To continue reading this article, click here.

See the original post here:

Machine Learning Engineers Will Not Exist In 10 Years - Machine Learning Times - machine learning & data science news - The Predictive Analytics...

Comments Off on Machine Learning Engineers Will Not Exist In 10 Years – Machine Learning Times – machine learning & data science news – The Predictive Analytics…