Daily Archives: January 12, 2020

The 4 Hottest Trends in Data Science for 2020 – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Posted: January 12, 2020 at 11:46 pm

Originally published in Towards Data Science, January 8, 2020

2019 was a big year for all of Data Science.

Companies all over the world across a wide variety of industries have been going through what people are calling a digital transformation. That is, businesses are taking traditional business processes such as hiring, marketing, pricing, and strategy, and using digital technologies to make them 10 times better.

Data Science has become an integral part of those transformations. With Data Science, organizations no longer have to make their important decisions based on hunches, best guesses, or small surveys. Instead, they're analyzing large amounts of real data to base their decisions on real, data-driven facts. That's really what Data Science is all about: creating value through data.

This trend of integrating data into core business processes has grown significantly, with interest more than quadrupling over the past five years according to Google Search Trends. Data is giving companies a sharp advantage over their competitors. With more data and better Data Scientists to use it, companies can acquire information about the market that their competitors might not even know exists. It's become a game of data or perish.

Google search popularity of Data Science over the past 5 years. Generated by Google Trends.

In today's ever-evolving digital world, staying ahead of the competition requires constant innovation. Patents have gone out of style, while Agile methodology and catching new trends quickly are very much in.

Organizations can no longer rely on their rock-solid methods of old. If a new trend like Data Science, Artificial Intelligence, or Blockchain comes along, it needs to be anticipated and adopted quickly.

The following are the 4 hottest Data Science trends for the year 2020. These are trends which have gathered increasing interest this year and will continue to grow in 2020.

(1) Automated Data Science

Even in today's digital age, Data Science still requires a lot of manual work: storing data, cleaning data, visualizing and exploring data, and finally, modeling data to get actual results. That manual work is just begging for automation, and hence the rise of automated Data Science and Machine Learning.

Nearly every step of the Data Science pipeline has been or is in the process of becoming automated.

Auto-Data Cleaning has been heavily researched over the past few years. Cleaning big data often takes up most of a Data Scientist's expensive time. Both startups and large companies such as IBM offer automation and tooling for data cleaning.

Another large part of Data Science known as feature engineering has undergone significant disruption. Featuretools offers a solution for automatic feature engineering. On top of that, modern Deep Learning techniques such as Convolutional and Recurrent Neural Networks learn their own features without the need for manual feature design.
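
Featuretools' headline capability is Deep Feature Synthesis, which automatically stacks aggregation and transform primitives across related tables. Here is a minimal sketch using the demo dataset bundled with the library; note that the "target_entity" argument reflects the featuretools releases of this period (later versions renamed it "target_dataframe_name"):

```python
import featuretools as ft

# Load the small demo EntitySet (customers, sessions, transactions)
# that ships with featuretools
es = ft.demo.load_mock_customer(return_entityset=True)

# Deep Feature Synthesis: automatically stacks aggregation and transform
# primitives (MEAN, COUNT, MONTH, ...) across the related tables
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_entity="customers",  # one engineered feature row per customer
    max_depth=2,
)

print(feature_matrix.head())
print(f"{len(feature_defs)} features generated automatically")
```

The point is that hundreds of candidate features such as MEAN(transactions.amount) per customer fall out of the table relationships with no manual feature design.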

Perhaps the most significant automation is occurring in the Machine Learning space. Both DataRobot and H2O have established themselves in the industry by offering end-to-end Machine Learning platforms, giving Data Scientists a very easy handle on data management and model building. AutoML, a method for automatic model design and training, also boomed over 2019 as these automated models surpassed the state of the art. Google, in particular, is investing heavily in Cloud AutoML.
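
To make the idea concrete, here is a hedged sketch of what an AutoML run looks like with H2O's Python client; the file name train.csv and the target column label are placeholders, not anything from the article:

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Any tabular file with a target column works; path and column name
# here are placeholders
train = h2o.import_file("train.csv")
train["label"] = train["label"].asfactor()  # classification target
features = [c for c in train.columns if c != "label"]

# AutoML trains and cross-validates GBMs, random forests, deep nets,
# GLMs, and stacked ensembles, then ranks them on a leaderboard
aml = H2OAutoML(max_models=20, max_runtime_secs=600, seed=1)
aml.train(x=features, y="label", training_frame=train)

print(aml.leaderboard.head())
preds = aml.leader.predict(train)  # predictions from the best model
```

Model selection, hyperparameter tuning, and ensembling, the most labor-intensive parts of model building, all happen inside that single train() call.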

In general, companies are investing heavily in building and buying tools and services for automated Data Science: anything to make the process cheaper and easier. At the same time, this automation also caters to smaller and less technical organizations that can leverage these tools and services to access Data Science without building out their own team.

(2) Data Privacy and Security

Privacy and security are always sensitive topics in technology. All companies want to move fast and innovate, but losing the trust of their customers over privacy or security issues can be fatal. So, they're forced to make it a priority, at least to the bare minimum of not leaking private data.

Data privacy and security have become incredibly hot topics over the past year as the issues were magnified by enormous public hacks. Just recently, on November 22, 2019, an exposed server with no security was discovered on Google Cloud. The server contained the personal information of 1.2 billion unique people, including names, email addresses, phone numbers, and LinkedIn and Facebook profile information. Even the FBI came in to investigate. It's one of the largest data exposures of all time.

Going Beyond Machine Learning To Machine Reasoning – Forbes

Posted: at 11:46 pm

From Machine Learning to Machine Reasoning

The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning, conversational interfaces, autonomous agents, and other aspects of data science, math, and implementation. However, the history and evolution of AI is more than just a technology story. The story of AI is also inextricably linked with waves of innovation and research breakthroughs that run headfirst into economic and technology roadblocks. There seems to be a continuous pattern of discovery, innovation, interest, investment, cautious optimism, boundless enthusiasm, realization of limitations, technological roadblocks, withdrawal of interest, and retreat of AI research back to academic settings. These waves of advance and retreat seem to be as consistent as the back and forth of sea waves on the shore.

This pattern of interest, investment, hype, then decline, and rinse-and-repeat is particularly vexing to technologists and investors because it doesn't follow the usual technology adoption lifecycle. Popularized by Geoffrey Moore in his book "Crossing the Chasm", technology adoption usually follows a well-defined path. Technology is developed and finds early interest by innovators, and then early adopters, and if the technology can make the leap across the "chasm", it gets adopted by the early majority market and then it's off to the races with demand by the late majority and finally technology laggards. If the technology can't cross the chasm, then it ends up in the dustbin of history. However, what makes AI distinct is that it doesn't fit the technology adoption lifecycle pattern.

But AI isn't a discrete technology. Rather, it's a series of technologies, concepts, and approaches all aligned toward the quest for the intelligent machine. This quest inspires academicians and researchers to come up with theories of how the brain and intelligence work, and their concepts of how to mimic these aspects with technology. AI is a generator of technologies, which individually go through the technology lifecycle. Investors aren't investing in "AI," but rather they're investing in the output of AI research and technologies that can help achieve the goals of AI. As researchers discover new insights that help them surmount previous challenges, or as technology infrastructure finally catches up with concepts that were previously infeasible, new technology implementations are spawned and the cycle of investment renews.

The Need for Understanding

It's clear that intelligence is like an onion (or a parfait): many layers. Once we understand one layer, we find that it only explains a limited amount of what intelligence is about. We discover there's another layer that's not quite understood, and back to our research institutions we go to figure out how it works. In Cognilytica's exploration of the intelligence of voice assistants, the benchmark aims to tease at one of those next layers: understanding. That is, knowing what something is (recognizing an image among a category of trained concepts, converting audio waveforms into words, identifying patterns among a collection of data, or even playing games at advanced levels) is different from actually understanding what those things are. This lack of understanding is why users get hilarious responses to voice assistant questions, and is also why we can't truly get autonomous machine capabilities in a wide range of situations. Without understanding, there's no common sense. Without common sense and understanding, machine learning is just a bunch of learned patterns that can't adapt to the constantly evolving changes of the real world.

One of the visual concepts that's helpful for understanding these layers of increasing value is the "DIKUW Pyramid":

DIKUW Pyramid

While the Wikipedia entry above conveniently skips the Understanding step, we believe that understanding is the next logical threshold of AI capability. And like all previous layers of this AI onion, tackling this layer will require new research breakthroughs, dramatic increases in compute capabilities, and volumes of data. What? Don't we have almost limitless data and boundless computing power? Not quite. Read on.

The Quest for Common Sense: Machine Reasoning

Early in the development of artificial intelligence, researchers realized that for machines to successfully navigate the real world, they would have to gain an understanding of how the world works and how various things are related to each other. In 1984, the world's longest-lived AI project started. The Cyc project is focused on generating a comprehensive "ontology" and knowledge base of common sense: basic concepts and "rules of thumb" about how the world works. The Cyc ontology uses a knowledge graph to structure how different concepts are related to each other, and an inference engine that allows systems to reason about facts.
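
A toy sketch in Python conveys the shape of the idea, though nothing like Cyc's actual machinery: facts stored as subject-relation-object triples, and a forward-chaining loop that applies a single rule of thumb (is_a is transitive) until nothing new can be derived:

```python
# Facts as (subject, relation, object) triples; the "inference engine"
# is a forward-chaining loop applying one rule: is_a is transitive.
facts = {
    ("rain", "is_a", "weather"),
    ("weather", "is_a", "phenomenon"),
    ("thirst", "is_a", "bodily_state"),
    ("rain", "causes", "wet_ground"),
}

def infer(known):
    derived = set(known)
    while True:
        new = {
            (a, "is_a", c)
            for (a, r1, b) in derived if r1 == "is_a"
            for (b2, r2, c) in derived if r2 == "is_a" and b2 == b
        } - derived
        if not new:          # fixpoint: no rule produces anything further
            return derived
        derived |= new

print(infer(facts) - facts)  # {('rain', 'is_a', 'phenomenon')}
```

Cyc's challenge, as the article goes on to explain, is that doing this at world scale means encoding millions or billions of such entities and relationships.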

The main idea behind Cyc and other understanding-building knowledge encodings is the realization that systems can't be truly intelligent if they don't understand what the underlying things they are recognizing or classifying are. This means we have to dig deeper than machine learning for intelligence. We need to peel this onion one level deeper, scoop out another tasty parfait layer. We need more than machine learning: we need machine reasoning.

Machine reasoning is the concept of giving machines the power to make connections between facts, observations, and all the magical things that we can train machines to do with machine learning. Machine learning has enabled a wide range of capabilities and functionality, and opened up a world of possibility that was not possible without the ability to train machines to identify and recognize patterns in data. However, this power is crippled by the fact that these systems are not really able to functionally use that information for higher ends, or apply learning from one domain to another without human involvement. Even transfer learning is limited in application.

Indeed, we're rapidly facing the reality that we're going to soon hit the wall on the current edge of capabilities with machine learning-focused AI. To get to that next level we need to break through this wall and shift from machine learning-centric AI to machine reasoning-centric AI. However, that's going to require some breakthroughs in research that we haven't realized yet.

The fact that the Cyc project has the distinction of being the longest-lived AI project is a bit of a backhanded compliment. The Cyc project is long lived because, after all these decades, the quest for common sense knowledge is proving elusive. Codifying common sense into a machine-processable form is a tremendous challenge. Not only do you need to encode the entities themselves in a way that a machine knows what you're talking about, but also all the inter-relationships between those entities. There are millions, if not billions, of "things" that a machine needs to know. Some of these things are tangible, like "rain", but others are intangible, such as "thirst". The work of encoding these relationships is being partially automated, but it still requires humans to verify the accuracy of the connections... because, after all, if machines could do this we would have solved the machine recognition challenge. It's a bit of a chicken-and-egg problem. You can't solve machine recognition without having some way to codify the relationships between information. But you can't scalably codify all the relationships that machines would need to know without some form of automation.

Are we still limited by data and compute power?

Machine learning has proven to be very data-hungry and compute-intensive. Over the past decade, many iterative enhancements have lessened the compute load and made data use more efficient. GPUs, TPUs, and emerging FPGAs are helping to provide the raw compute horsepower needed. Yet, despite these advancements, complicated machine learning models with lots of dimensions and parameters still require intense amounts of compute and data. Machine reasoning is easily an order of magnitude or more in complexity beyond machine learning. Accomplishing the task of reasoning out the complicated relationships between things and truly understanding them might be beyond today's compute and data resources.

The current wave of interest and investment in AI doesn't show any signs of slowing or stopping any time soon, but it's inevitable it will slow at some point for one simple reason: we still don't understand intelligence and how it works. Despite the amazing work of researchers and technologists, we're still guessing in the dark about the mysterious nature of cognition, intelligence, and consciousness. At some point we will be faced with the limitations of our assumptions and implementations and we'll work to peel the onion one more layer and tackle the next set of challenges. Machine reasoning is quickly approaching as the next challenge we must surmount on the quest for artificial intelligence. If we can apply our research and investment talent to tackling this next layer, we can keep the momentum going with AI research and investment. If not, the pattern of AI will repeat itself, and the current wave will crest. It might not be now or even within the next few years, but the ebb and flow of AI is as inevitable as the waves upon the shore.

The Problem with Hiring Algorithms – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Posted: at 11:46 pm

Originally published in EthicalSystems.org, December 1, 2019

In 2004, when a webcam was relatively unheard-of tech, Mark Newman knew that it would be the future of hiring. One of the first things the 20-year-old did, after getting his degree in international business, was to co-found HireVue, a company offering a digital interviewing platform. Business trickled in. While Newman lived at his parents' house in Salt Lake City, the company, in its first five years, made just $100,000 in revenue. HireVue later received some outside capital, expanded and, in 2012, boasted some 200 clients, including Nike, Starbucks, and Walmart, which would pay HireVue, depending on project volume, between $5,000 and $1 million. Recently, HireVue, which was bought earlier this year by the Carlyle Group, has become the source of some alarm, or at least trepidation, for its foray into the application of artificial intelligence in the hiring process. No longer does the company merely offer clients an asynchronous interviewing service, a way for hiring managers to screen thousands of applicants quickly by reviewing their video interviews. HireVue can now give companies the option of letting machine-learning algorithms choose the best candidates for them, based on, among other things, applicants' tone, facial expressions, and sentence construction.

If that gives you the creeps, you're not alone. A 2017 Pew Research Center report found few Americans to be enthused, and many worried, by the prospect of companies using hiring algorithms. More recently, around a dozen interviewees assessed by HireVue's AI told the Washington Post that it felt alienating and dehumanizing to have to wow a computer before being deemed worthy of a company's time. They also wondered how their recording might be used without their knowledge. Several applicants mentioned passing on the opportunity because thinking about the AI interview, as one of them told the paper, "made my skin crawl." Had these applicants sat for a standard 30-minute interview composed of a half-dozen questions, the AI could have analyzed up to 500,000 data points. Nathan Mondragon, HireVue's chief industrial-organizational psychologist, told the Washington Post that each of those points becomes an ingredient in the person's calculated score, between 1 and 100, on which hiring decisions can depend. New scores are ranked against a store of traits, mostly having to do with language use and verbal skills, from previous candidates for a similar position who went on to thrive on the job.

HireVue wants you to believe that this is a good thing. After all, their pitch goes, humans are biased. If something like hunger can affect a hiring manager's decision, let alone classism, sexism, lookism, and other isms, then why not rely on the less capricious, more objective decisions of machine-learning algorithms? No doubt some job seekers agree with the sentiment Loren Larsen, HireVue's Chief Technology Officer, shared recently with the Telegraph: "I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day." Of course, the appeal of AI hiring isn't just about doing right by the applicants. As a 2019 white paper from the Society for Industrial and Organizational Psychology notes, "AI applied to assessing and selecting talent offers some exciting promises for making hiring decisions less costly and more accurate for organizations while also being less burdensome and (potentially) fairer for job seekers."

Do HireVue's algorithms treat potential employees fairly? Some researchers in machine learning and human-computer interaction doubt it. Luke Stark, a postdoc at Microsoft Research Montreal who studies how AI, ethics, and emotion interact, told the Washington Post that HireVue's claims, that its automated software can glean a worker's personality and predict their performance from such things as tone, should make us skeptical:

Systems like HireVue, he said, have become quite skilled at spitting out data points that seem convincing, even when they're not backed by science. And he finds this "charisma of numbers" really troubling because of the overconfidence employers might lend them while seeking to decide the path of applicants' careers.

The best AI systems today, he said, are notoriously prone to misunderstanding meaning and intent. But he worried that even their perceived success at divining a person's true worth could help perpetuate a homogenous corporate monoculture of automatons, each new hire modeled after the last.

Eric Siegel, an expert in machine learning and author of Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, echoed Stark's remarks. In an email, Siegel told me, "Companies that buy into HireVue are inevitably, to a great degree, falling for that feeling of wonderment and speculation that a kid has when playing with a Magic Eight Ball." That, in itself, doesn't mean HireVue's algorithms are completely unhelpful. "Driving decisions with data has the potential to overcome human bias in some situations, but also, if not managed correctly, could easily instill, perpetuate, magnify, and automate human biases," he said.

AI and machine learning trends to look toward in 2020 – Healthcare IT News

Posted: at 11:46 pm

Artificial intelligence and machine learning will play an even bigger role in healthcare in 2020 than they did in 2019, helping medical professionals with everything from oncology screenings to note-taking.

On top of actual deployments, increased investment activity is also expected this year, and with deeper deployments of AI and ML technology, a broader base of test cases will be available to collect valuable best practices information.

"As AI is implemented more widely in real-world clinical practice, there will be more academic reports on the clinical benefits that have arisen from the real-world use," said Pete Durlach, senior vice president for healthcare strategy and new business development at Nuance.

"With healthy clinical evidence, we'll see AI become more mainstream in various clinical settings, creating a positive feedback loop of more evidence-based research and use in the field," he explained. "Soon, it will be hard to imagine a doctor's visit, or a hospital stay that doesn't incorporate AI in numerous ways."

In addition, AI and ambient sensing technology will help re-humanize medicine by allowing doctors to focus less on paperwork and administrative functions, and more on patient care.

"As AI becomes more commonplace in the exam room, everything will be voice enabled, people will get used to talking to everything, and doctors will be able to spend 100% of their time focused on the patient, rather than entering data into machines," Durlach predicted. "We will see the exam room of the future where clinical documentation writes itself."

The adoption of AI for robotic process automation ("RPA") for common and high-value administrative functions, such as the revenue cycle, supply chain and patient scheduling, also has the potential to increase rapidly as AI helps automate or partially automate components of these functions, driving significantly enhanced financial outcomes for provider organizations.

Durlach also noted the fear that AI will replace doctors and clinicians has dissipated, and the goal now is to figure out how to incorporate AI as another tool to help physicians make the best care decisions possible, effectively augmenting the intelligence of the clinician.

"However, we will still need to protect against phenomenon like alert fatigue, which occurs when users who are faced with many low-level alerts, ignore alerts of all levels, thereby missing crucial ones that can affect the health and safety of patients," he cautioned.

In the next few years, he predicts, the market will see technology that strikes a balance between being unobtrusive and supporting doctors in making the best decisions for their patients, as they learn to trust the AI-powered suggestions and recommendations.

"So many technologies claim they have an AI component, but often there's a blurred line in which the term AI is used in a broad sense, when the technology that's being described is actually basic analytics or machine learning," Kuldeep Singh Rajput, CEO and founder of Boston-based Biofourmis, told Healthcare IT News. "Health system leaders looking to make investments in AI should ask for real-world examples of how the technology is creating ROI for other organizations."

For example, he pointed to a study of Brigham & Women's Home Hospital program, recently published in Annals of Internal Medicine, which employed AI-driven continuous monitoring combined with advanced physiology analytics and related clinical care as a substitute for usual hospital care.

The study found that the program, which included an investment in AI-driven predictive analytics as a key component, reduced costs, decreased healthcare use, and lowered readmissions while increasing physical activity compared with usual hospital care.

"Those types of outcomes could be replicated by other healthcare organizations, which makes a strong clinical and financial case to invest in that type of AI," Rajput said.

Nathan Eddy is a healthcare and technology freelancer based in Berlin. Email the writer: nathaneddy@gmail.com. Twitter: @dropdeaded209

Machine Learning and Artificial Intelligence Are Poised to Revolutionize Asthma Care – Pulmonology Advisor

Posted: at 11:46 pm

The advent of large data sets from many sources (big data), machine learning, and artificial intelligence (AI) are poised to revolutionize asthma care on both the investigative and clinical levels, according to an article published in the Journal of Allergy and Clinical Immunology.

According to the researchers, a patient with asthma endures approximately 2190 hours of experiencing and treating, or not treating, their asthma symptoms. During 15-minute clinic visits, only a short amount of time is spent understanding and treating what is a complex disease, and only a fraction of the necessary data is captured in the electronic health record.

"Our patients and the pace of data growth are compelling us to incorporate insights from Big Data to inform care," the researchers posit. "Predictive analytics, using machine learning and artificial intelligence, has revolutionized many industries, including the healthcare industry."

When used effectively, big data, in conjunction with electronic health record data, can transform the patient's healthcare experience. This is especially important as healthcare continues to embrace both e-health and telehealth practices. The data resulting from these thoughtful digital health innovations can result in personalized asthma management, improve timeliness of care, and capture objective measures of treatment response.

According to the researchers, the use of machine learning algorithms and AI to predict asthma exacerbations and patterns of healthcare utilization are within both technical and clinical reach. The ability to predict who is likely to experience an asthma attack, as well as when that attack may occur, will ultimately optimize healthcare resources and personalize patient management.

The use of longitudinal birth cohort studies and multicenter collaborations like the Severe Asthma Research Program have given clinical investigators a broader understanding of the pathophysiology, natural history, phenotypes, seasonality, genetics, epigenetics, and biomarkers of the disease. Machine learning and data-driven methods have utilized this data, often in the form of large datasets, to cluster patients into genetic, molecular, and immune phenotypes. These clusters have led to work in the genomics and pharmacogenomics fields that should ultimately lead to high-fidelity exacerbation predictions and the advent of true precision medicine.
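
As a rough illustration of that clustering step, here is a minimal sketch; the patient features and values below are invented for the example, not drawn from any of the studies cited:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-patient features: eosinophil count, FEV1 % predicted,
# IgE level, exacerbations per year (illustrative numbers only)
X = np.array([
    [300, 65, 450, 4],
    [120, 88, 90, 1],
    [550, 55, 700, 6],
    [100, 92, 60, 0],
    [420, 60, 520, 5],
    [150, 85, 120, 1],
])

# Standardize so no single lab value dominates the distance metric
X_scaled = StandardScaler().fit_transform(X)

# Cluster patients into candidate phenotypes, as the cohort studies do
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)
print(kmeans.labels_)  # e.g. an eosinophilic-like group vs. a milder group
```

Real phenotyping studies apply the same idea to far richer genomic, molecular, and clinical feature sets.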

This work, the researchers noted, if translated into clinical practice, can potentially link genetic traits to phenotypes that can, for example, predict rapid response or non-response to medications like albuterol and steroids, or identify an individual's risk for cortisol suppression.

As with any innovation, though, challenges abound. One in particular is the siloed nature of the clinical and scientific insights about asthma that have come to light in recent years. Although data are now being generated and interpreted across various domains, researchers must still contend with a lack of data standards and disease definitions, data interoperability and sharing difficulties, and concerns about data quality and fidelity.

Machine learning and AI present their own challenges; namely, those who utilize these technologies must consider the issues of fairness, bias, privacy, and medical bioethics. Legal accountability and medical responsibility issues must also be considered as algorithms are adopted into routine practice.

"We must, as clinicians and researchers, constructively transform the concern and lack of understanding many clinicians have about digital health, [machine learning], and [artificial intelligence] into educated and critical engagement," the researchers concluded. "Our job is to use [machine learning and artificial intelligence] tools to understand and predict how asthma affects patients and help us make decisions at the patient and population levels to treat it better."

Reference

Messinger AI, Luo G, Deterding RR. The doctor will see you now: How machine learning and artificial intelligence can extend our understanding and treatment of asthma [published online December 25, 2019]. J Allergy Clin Immunol. doi: 10.1016/j.jaci.2019.12.898

Chemists are training machine learning algorithms used by Facebook and Google to find new molecules – News@Northeastern

Posted: at 11:46 pm

For more than a decade, Facebook and Google algorithms have been learning as much as they can about you. It's how they refine their systems to deliver the news you read, those puppy videos you love, and the political ads you engage with.

These same kinds of algorithms can be used to find billions of molecules and catalyze important chemical reactions that are currently induced with expensive and toxic metals, says Steven A. Lopez, an assistant professor of chemistry and chemical biology at Northeastern.

Lopez is working with a team of researchers to train machine learning algorithms to spot the molecular patterns that could help find new molecules in bulk, and fast. It's a much smarter approach than scanning through billions and billions of molecules without a streamlined process.

"We're teaching the machines to learn the chemistry knowledge that we have," Lopez says. "Why should I just have the chemical intuition for myself?"

The alternative to using expensive metals is organic molecules, particularly plastics, which are everywhere, Lopez says. Depending on their molecular structure and ability to absorb light, these plastics can be converted with chemistry to produce better materials for today's most important problems.

Lopez says the goal is to find molecules with the right properties and structures similar to those of metal catalysts. But to attain that goal, Lopez will need to explore an enormous number of molecules.

Thus far, scientists have been able to synthesize only about a million molecules. But a conservative estimate of the number of possible molecules that could be analyzed is a quintillion: 10 raised to the power of 18, or a one followed by 18 zeros.

Lopez thinks of this enormous number of possibilities as a vast ocean made up of billions of unexplored molecules. Such an immense molecular space is practically impossible to navigate, even if scientists were to combine experiments with supercomputer analysis.

Lopez says all of the calculations that have ever been done by computers add up to about a billion, or 10 to the ninth power. That's about a billion times fewer than the possible molecules.

"Forget it, there's no chance," he says. "We just have to use a smarter search technique."

That's why Lopez is leading a team, supported by a grant from the National Science Foundation, that includes researchers from Tufts University, Washington University in St. Louis, Drexel University, and the Colorado School of Mines. The team is using an open-access database of organic molecules called VERDE materials DB, which Lopez and colleagues recently published, to improve their algorithms and find more useful molecules.

The database will also register newly found molecules and can serve as a data hub of information for researchers across several different domains, Lopez says. That's because it can launch researchers toward finding different molecules with many new properties and applications.

In tandem with the database, the algorithms will allow scientists to use computational resources more efficiently. After molecules of interest are found, researchers will recalibrate the algorithm to find more similar groups of molecules.

The active-search algorithm, developed by Roman Garnett at Washington University in St. Louis, uses a process similar to the classic board game Battleship, in which two players guess hidden locations on a grid to target and destroy vessels within a naval fleet.

In that grid, players place vessels as far apart as possible to make opponents miss targets. Once a ship is hit, players can readjust their strategy and redirect their attacks to the coordinates surrounding that hit.

That's exactly how Lopez thinks of the concept of exploring a vast ocean of molecules.

"We are looking for regions within this ocean," he says. "We are starting to set up the coordinates of all the possible molecules."
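
To make the Battleship analogy concrete, here is a minimal greedy sketch in the spirit of active search: refit a model on everything labelled so far, then query the most promising unlabelled candidate. The real algorithm maximizes expected discoveries over the whole query budget, and the molecular descriptors and "oracle" below are synthetic stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical descriptor vectors for a large candidate pool, plus an
# expensive "oracle" (a computation or experiment) we want to call rarely
pool = rng.random((10_000, 8))

def oracle(X):
    # Stand-in for "is this molecule useful?"; unknown to the searcher
    return (X[:, 0] * X[:, 3] > 0.25).astype(int)

queried = list(rng.choice(len(pool), size=20, replace=False))  # seed set
labels = list(oracle(pool[queried]))

model = RandomForestClassifier(n_estimators=100, random_state=0)
for _ in range(30):                      # limited query budget
    model.fit(pool[queried], labels)
    scores = model.predict_proba(pool)[:, 1]
    scores[queried] = -1.0               # never re-query a known molecule
    nxt = int(np.argmax(scores))         # greedy: most promising candidate
    queried.append(nxt)
    labels.append(int(oracle(pool[[nxt]])[0]))

print(f"hits: {sum(labels)} of {len(labels)} oracle calls")
```

Each "hit" redirects the search to the surrounding region of molecular space, exactly like firing at the coordinates around a struck ship.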

Hitting the right candidate molecules might also expand the understanding that chemists have of this unexplored chemical space.

"Maybe we'll find out through this analysis that we have something really at the edge of what we call the ocean, and that we can expand this ocean out a bit more in that region," Lopez says. "Those are things that we wouldn't [be able to find by searching] with a brute-force, trial-and-error kind of approach."

For media inquiries, please contact Jessica Hair at j.hair@northeastern.edu or 617-373-5718.

Finally, a good use for AI: Machine-learning tool guesstimates how well your code will run on a CPU core – The Register

Posted: at 11:45 pm

MIT boffins have devised a software-based tool for predicting how processors will perform when executing code for specific applications.

In three papers released over the past seven months, ten computer scientists describe Ithemal (Instruction THroughput Estimator using MAchine Learning), a tool for predicting the number of processor clock cycles necessary to execute an instruction sequence when looped in steady state, and include a supporting benchmark and algorithm.

Throughput stats matter to compiler designers and performance engineers, but it isn't practical to make such measurements on-demand, according to MIT computer scientists Saman Amarasinghe, Eric Atkinson, Ajay Brahmakshatriya, Michael Carbin, Yishen Chen, Charith Mendis, Yewen Pu, Alex Renda, Ondrej Sykora, and Cambridge Yang.

So most systems rely on analytical models for their predictions. LLVM offers a command-line tool called llvm-mca that presents a model for throughput estimation, and Intel offers a closed-source machine code analyzer called IACA (Intel Architecture Code Analyzer), which takes advantage of the company's internal knowledge about its processors.

Michael Carbin, a co-author of the research and an assistant professor and AI researcher at MIT, told the MIT News Service on Monday that performance model design is something of a black art, made more difficult by Intel's omission of certain proprietary details from its processor documentation.

The Ithemal paper [PDF], presented in June at the International Conference on Machine Learning, explains that these hand-crafted models tend to be an order of magnitude faster than measuring the throughput of basic blocks (sequences of instructions without branches or jumps). But building these models is a tedious, manual process that's prone to errors, particularly when processor details aren't entirely disclosed.

Using a neural network, Ithemal can learn to predict throughput using a set of labelled data. It relies on what the researchers describe as "a hierarchical multiscale recurrent neural network" to create its prediction model.
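
As a rough sketch of the approach, the following PyTorch snippet trains a drastically simplified, single-level regressor over tokenized instructions. Ithemal's actual model is hierarchical (one RNN over each instruction's tokens, another over the instructions of the block), and the data here is random stand-in data:

```python
import torch
import torch.nn as nn

class ThroughputEstimator(nn.Module):
    """Embed instruction tokens, run an RNN over the block, regress cycles."""
    def __init__(self, vocab_size=512, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 1)   # predicted steady-state cycle count

    def forward(self, token_ids):       # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        _, (h, _) = self.rnn(x)         # final hidden state summarises the block
        return self.head(h[-1]).squeeze(-1)

model = ThroughputEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random stand-in data; the real model trains on basic blocks labelled
# with measured throughput
tokens = torch.randint(0, 512, (32, 20))   # 32 blocks, 20 tokens each
measured = torch.rand(32) * 100            # measured cycle counts

opt.zero_grad()
loss = nn.functional.mse_loss(model(tokens), measured)
loss.backward()
opt.step()
print(loss.item())
```

The appeal over hand-crafted analytical models is that nothing processor-specific is written down by hand; the mapping from instructions to cycles is learned entirely from measurements.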

"We show that Ithemals learned model is significantly more accurate than the analytical models, dropping the mean absolute percent error by more than 50 per cent across all benchmarks, while still delivering fast estimation speeds," the paper explains.

A second paper, presented in November at the IEEE International Symposium on Workload Characterization, "BHive: A Benchmark Suite and Measurement Framework for Validating x86-64 Basic Block Performance Models," describes the BHive benchmark for evaluating Ithemal and competing models: IACA, llvm-mca, and OSACA (Open Source Architecture Code Analyzer). It found Ithemal outperformed the other models except on vectorized basic blocks.

And in December at the NeurIPS conference, the boffins presented a third paper, titled "Compiler Auto-Vectorization with Imitation Learning," that describes a way to automatically generate compiler optimizations in a way that outperforms LLVM's SLP vectorizer.

The academics argue that their work shows the value of machine learning in the context of performance analysis.

"Ithemal demonstrates that future compilation and performance engineering tools can be augmented with datadriven approaches to improve their performance and portability, while minimizing developer effort," the paper concludes.

Tiny Machine Learning On The Attiny85 – Hackaday

Posted: at 11:45 pm

We tend to think that the lowest point of entry for machine learning (ML) is on a Raspberry Pi, which it definitely is not. [EloquentArduino] has been pushing the limits to the low end of the scale, and managed to get a basic classification model running on the ATtiny85.

Using his experience of running ML models on an old Arduino Nano, he had created a generator that can export C code from a scikit-learn model. He tried using this generator to compile a support-vector colour classifier for the ATtiny85, but ran into a problem with the Arduino ATtiny85 compiler not supporting a variadic function used by the generator. Fortunately, he had already experimented with an alternative approach that uses a non-variadic function, so he was able to dust that off and get it working. The classifier accepts inputs from an RGB sensor to identify a set of objects by colour. The model ended up easily fitting into the capabilities of the diminutive ATtiny85, using only 41% of the available flash and 4% of the available RAM.
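
The training side of that workflow looks roughly like the following sketch, using [EloquentArduino]'s micromlgen exporter; the RGB readings and class labels here are made up for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from micromlgen import port  # pip install micromlgen

# Hypothetical RGB sensor readings (0-255) labelled by object colour
X = np.array([[200, 30, 30], [220, 40, 35], [30, 180, 40],
              [25, 200, 50], [40, 40, 210], [35, 30, 190]])
y = np.array([0, 0, 1, 1, 2, 2])  # 0=red, 1=green, 2=blue objects

clf = SVC(kernel="linear", gamma="auto").fit(X, y)

# port() emits a plain-C classifier small enough for a microcontroller;
# paste the output into the Arduino sketch and call its predict()
# on raw sensor reads
print(port(clf))
```

All the heavy lifting happens on the desktop; the microcontroller only ever runs the exported C decision function.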

It's important to note what [EloquentArduino] isn't doing here: running an artificial neural network. They're just too inefficient in terms of memory and computation time to fit on an ATtiny. But neural nets aren't the only game in town, and if your task is classifying something based on a few inputs, like reading a gesture from accelerometer data, or naming a color from a color sensor, the approach here will serve you well. We wonder if this wouldn't be a good solution to the pesky problem of identifying bats by their calls.

We really like how approachable machine learning has become, and if you're keen to give ML a go, have a look at the rest of the EloquentArduino blog; it's a small goldmine.

We're getting more and more machine-learning-related hacks, like basic ML on an Arduino Uno, and Lego sorting using ML on a Raspberry Pi.

Here’s why digital marketing is as lucrative a career as data science and machine learning – Business Insider India

Posted: at 11:45 pm

In an interview with Business Insider, Mayank Kumar, Founder & MD of upGrad, described how digital literacy is becoming a buzzword in the ecosystem. The requirement for experienced marketers is being replaced by the demand for data-driven marketers.

In fact, Kumar says that professionals with 10+ years of experience in traditional marketing or sales are feeling the palpable need to upskill, and to do so really fast.

As per LinkedIn, digital marketing specialist is one of the top 15 emerging job roles in India, with Mumbai, Bangalore and Delhi attracting the most talent. However, the role is no longer confined to traditional aspects of social media or content marketing. Marketers now have to acquire skills in Google Ads, Social Media Optimization, Google Analytics and Search Engine Optimization (SEO).

Nearly doubled salaries

They earn as much as data scientists and other techies who work in full-stack development, which is one of the best-paying software roles.

"The top 20% of the transitioned learners graduated with an average hike of 177%, which is way above any industry benchmark. Those who were previously in profiles like software testing, software development, traditional marketing, sales and operations are now working with leading companies like HDFC Life, Facebook, IBM, Uber, Zomato, and Microsoft," upGrad said in a statement.

upGrad provides an industry connect for professionals who want to transition from their existing job roles.

"We started our in-house placement support team, which provides holistic placement services like resume building, interview preparation support and salary negotiation tips. As of today, we have over 300 corporates hiring from upGrad's talent pool and we plan to add 50 new companies every quarter."

Data scientists with 3 years' experience can earn 20 lakhs per annum.

How Will Your Hotel Property Use Machine Learning in 2020 and Beyond? | – Hotel Technology News

Posted: at 11:45 pm

Every hotel should ask the same question. How will our property use machine learning? It's not just a matter of gaining a competitive advantage; it's imperative in order to stay in business. By Jason G. Bryant, Founder and CEO, Nor1 - 1.9.2020

Artificial intelligence (AI) implementation has grown 270% over the past four years and 37% in the past year alone, according to Gartner's 2019 CIO Survey of more than 3,000 executives. About the ubiquity of AI and machine learning (ML), Gartner VP Chris Howard notes, "If you are a CIO and your organization doesn't use AI, chances are high that your competitors do and this should be a concern" (VentureBeat). Hotels may not have CIOs, but any business not seriously considering the implications of ML throughout the organization will find itself in multiple binds, from the inability to offer next-level guest service to operational inefficiencies.

Amazon is the poster child for a sophisticated company that is committed to machine learning, both in offers (personalized commerce) as well as behind the scenes in its facilities. Amazon Founder & CEO Jeff Bezos attributes much of Amazon's ongoing financial success and competitive dominance to machine learning. Further, he has suggested that the entire future of the company rests on how well it uses AI. However, as Forbes contributor Kathleen Walsh notes, "There is no single AI group at Amazon. Rather, every team is responsible for finding ways to utilize AI and ML in their work." It is common knowledge that all senior executives at Amazon plan, write, and adhere to a six-page business plan. A piece of every business plan for every business function is devoted to answering the question: How will you utilize machine learning this year?

Every hotel should ask the same question. How will our property use machine learning? It's not just a matter of gaining a competitive advantage; it's imperative in order to stay in business. In the 2017 Deloitte State of Cognitive Survey, which canvassed 1,500 mostly C-level executives, not a single survey respondent believed that cognitive technologies would not drive substantive change. Put more simply: every executive in every industry knows that AI is fundamentally changing the way we do business, both in services and products as well as operations. Further, 94% reported that artificial intelligence would substantially transform their companies within five years, most believing the transformation would occur by 2020.

Playing catch-up with this technology can be competitively dangerous as there is significant time between outward-facing results (when you realize your competition is outperforming you) and how long it will take you to achieve similar results and employ a productive, successful strategy. Certainly, revenue management and pricing will be optimized by ML, but operations, guest service, maintenance, loyalty, development, energy usage, and almost every single aspect of the hospitality enterprise will be impacted as well. Any facility where the speed and precision of tactical decision making can be improved will be positively impacted.

Hotels are quick to think that ML means robotic housekeepers and facial recognition kiosks. While these are possibilities, ML can do so much more. Here are just a few of the ways hotels are using AI to save money, improve service, and become more efficient.

Hilton's Energy Program

The LightStay program at Hilton predicts energy, water, and waste usage and costs. The company can track actual consumption against predictive models, which allows it to manage year-over-year performance as well as performance against competitors. Further, some hotel brands can link in-room energy to the PMS so that when a room is empty, the air conditioner automatically turns off. The future of sustainability in the hospitality industry relies on ML to shave every bit off of energy usage and budgets. For brands with hundreds or thousands of properties, every dollar saved on energy can affect the bottom line in a big way.
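
Hilton has not published LightStay's internals, but the predicted-versus-actual pattern described above can be sketched with a simple regression; every driver and number below is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly drivers of a property's energy use:
# occupied room-nights, average outdoor temp (C), event-space hours
X_hist = np.array([[8200, 5, 120], [7900, 7, 90], [9100, 14, 200],
                   [9800, 22, 260], [9500, 26, 240], [8800, 18, 150]])
kwh_hist = np.array([410_000, 395_000, 450_000, 520_000, 515_000, 460_000])

model = LinearRegression().fit(X_hist, kwh_hist)

# Compare this month's metered consumption against the model's expectation
this_month = np.array([[9000, 16, 180]])
expected = model.predict(this_month)[0]
actual = 495_000
print(f"expected {expected:,.0f} kWh, actual {actual:,}; "
      f"variance {100 * (actual - expected) / expected:+.1f}%")
```

A persistent positive variance flags a property that is burning more than its occupancy and weather justify, which is exactly the kind of signal a portfolio-wide program can act on.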

IHG & Human Resources

IHG employs 400,000 people across 5,723 hotels. Holding fast to the idea that the ideal guest experience begins with staff, IHG implemented AI strategies to "find the right team member who would best align and fit with each of the distinct brand personalities," notes Hazel Hogben, Head of HR, Hotel Operations, IHG Europe. To create brand personas and algorithms, IHG assessed its top customer-facing senior managers across brands using cognitive, emotional, and personality assessments. It then correlated this with KPI and customer data. Finally, this was cross-referenced with values at the different brands. The algorithms are used to create assessments that test candidates for hire against the personas using gamification-based tools, according to The People Space. Hogben notes that in addition to improving the candidate experience (they like the gamification of the experience), it has also helped eliminate personal or preconceived bias among recruiters. Regarding ML uses for hiring, Harvard Business Review says that in addition to combatting human bias by automatically flagging biased language in job descriptions, ML also identifies highly qualified candidates who might have been overlooked because they didn't fit traditional expectations.

Accor Hotels Upgrades

A 2018 study showed that 70% of hotels say they never or only sometimes promote upgrades or upsells at check-in (PhocusWire). In an effort to maximize the value of premium inventory and increase guest satisfaction, Accor Hotels partnered with Nor1 to implement eStandby Upgrade. With the ML-powered technology, Accor Hotels offers guests personalized upgrades, based on previous guest behavior, at a price the guest has shown a demonstrated willingness to pay, at booking and during the pre-arrival period up to 24 hours before check-in. This allows the brand to monetize and leverage room features that can't otherwise be captured by standard room category definitions, and to optimize the allocation of inventory available on the day of arrival. ML technology can create offers at any point along the guest pathway, including the front desk. Rather than replacing agents, as some hotels fear, it helps them make better, quicker decisions about what to offer guests.

Understanding Travel Reviews

The luxury Dorchester Collection wanted to understand what makes its high-end guests tick. Instead of using traditional secret shopper methods, which don't tell hotels everything they need to know about their experience, Dorchester Collection opted to analyze traveler feedback from across major review sites using ML. Much to their surprise, they discovered Dorchester's guests care a great deal more about breakfast than they thought. They also learned that guests want to customize breakfast, so they removed the breakfast menu and allowed guests to order whatever they like. As it turns out, guests love this.

In his May 2019 Google I/O address, Google CEO Sundar Pichai said that, thanks to advances in AI, Google is moving beyond its core mission of organizing the world's information: "We are moving from a company that helps you find answers to a company that helps you get things done" (ZDNet). Pichai has long held that we no longer live in a mobile-first world; we now inhabit an AI-first world. Businesses must necessarily pivot with this shift, evolving processes and products, sometimes evolving the business model, as in Google's case.

Hotels that embrace ML across operations will find that the technologies improve processes in substantive ways. ML improves the guest experience and increases revenue with precision decisioning and analysis across finance, human resources, marketing, pricing and merchandising, and guest services. Though the Hiltons, Marriotts, and IHGs of the hotel world are at the forefront of adoption, ML technologies are accessible, both in price and implementation, for the full range of properties. The time has come to ask every hotel department: How will you use AI this year?

For more about machine learning and its impact on the hotel industry, download Nor1's ebook The Hospitality Executive's Guide to Machine Learning: Will You Be a Leader, Follower, or Dinosaur?

Jason G. Bryant, Nor1 Founder and CEO, oversees day-to-day operations and provides visionary leadership and strategic direction for the upsell technology company. With Jason at the helm, Nor1 has matured into the technology leader in upsell solutions. Headquartered in Silicon Valley, Nor1 provides innovative revenue enhancement solutions to the hospitality industry that focus on the intersection of machine learning, guest engagement and operational efficiency. A seasoned entrepreneur, Jason has over 25 years' experience building and leading international software development and operations organizations.
