Biotechnology Industry: Does Cara Therapeutics Inc (CARA) Stock Beat its Rivals? – InvestorsObserver

The 75 rating InvestorsObserver gives to Cara Therapeutics Inc (CARA) stock puts it near the top of the Biotechnology industry. In addition to scoring higher than 87 percent of stocks in the Biotechnology industry, CARA's 75 overall rating means the stock scores better than 75 percent of all stocks.

Trying to find the best stocks can be a daunting task. There are a wide variety of ways to analyze stocks in order to determine which ones are performing the strongest. InvestorsObserver makes the entire process easier by using percentile rankings that allow you to easily find the stocks with the strongest evaluations from analysts.

This ranking system incorporates numerous factors used by analysts to compare stocks in greater detail. This allows you to find the best stocks available in any industry with relative ease. These percentile-ranked scores using both fundamental and technical analysis give investors an easy way to view the attractiveness of specific stocks. Stocks with the highest scores have the best evaluations by analysts working on Wall Street.

Cara Therapeutics Inc (CARA) stock is trading at $15.81 as of 11:14 AM on Wednesday, Apr 22, a rise of $0.39, or 2.53%, from the previous closing price of $15.42. The stock has traded between $15.29 and $16.44 so far today. Volume today is below average: so far 328,715 shares have traded, compared to an average volume of 541,813 shares.
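
The quoted move is simple arithmetic against the previous close. A minimal Python sketch using the figures above (the variable names are illustrative):

    prev_close = 15.42
    last_price = 15.81
    change = last_price - prev_close        # 0.39
    pct = change / prev_close * 100         # about 2.53
    print(f"change ${change:.2f} ({pct:.2f}%)")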

To see InvestorsObserver's Sentiment Score for Cara Therapeutics Inc click here.

Read the original:
Biotechnology Industry: Does Cara Therapeutics Inc (CARA) Stock Beat its Rivals? - InvestorsObserver

Where Does Intercept Pharmaceuticals Inc (ICPT) Stock Fall in the Biotechnology Field? – InvestorsObserver

Intercept Pharmaceuticals Inc (ICPT) is near the top in its industry group according to InvestorsObserver. ICPT gets an overall rating of 72. That means it scores higher than 72 percent of stocks. Intercept Pharmaceuticals Inc gets an 83 rank in the Biotechnology industry. Biotechnology is number 8 out of 148 industries.

Analyzing stocks can be hard. There are tons of numbers and ratios, and it can be difficult to remember what they all mean and what counts as good for a given value. InvestorsObserver ranks stocks on eight different metrics. We percentile-rank most of our scores to make it easy for investors to understand. A score of 72 means the stock is more attractive than 72 percent of stocks.

These rankings allow you to easily compare stocks and see the strengths and weaknesses of a given company. This lets you find the stocks with the best short- and long-term growth prospects in a matter of seconds. The combined score incorporates technical and fundamental analysis in order to give a comprehensive overview of a stock's performance. Investors who then want to focus on analyst rankings or valuations are able to see the separate scores for each section.

Intercept Pharmaceuticals Inc (ICPT) stock is trading at $81.86 as of 10:41 AM on Tuesday, Apr 21, a loss of $0.75, or 0.91%, from the previous closing price of $82.61. The stock has traded between $80.53 and $83.30 so far today. Volume today is less active than usual: so far 80,863 shares have traded, compared to an average volume of 667,682 shares.

To see InvestorsObserver's Sentiment Score for Intercept Pharmaceuticals Inc click here.

View post:
Where Does Intercept Pharmaceuticals Inc (ICPT) Stock Fall in the Biotechnology Field? - InvestorsObserver

Check Your Health: How RX Match can help find the right medication for you – KUTV 2News


RX Match is a test that analyzes and interprets each patient's unique genetic makeup to help doctors determine which medication and dosage might work best with the patient's genes.

"Rx Match is a molecular test that we use to look at a person's DNA to learn how the metabolize certain drugs," said Jason Gillman, Cancer Genomics Director, Intermountain Healthcare.

Variations in a person's DNA can affect how they metabolize and respond to different drugs. RX Match analyzes those variants using pharmacogenomics, a cutting-edge field of precision medicine that studies how genes relate to a patient's response to medication.

The test is a cheek swab that is then sent to a lab; the results are returned to the doctor about 7 to 14 days later.

The RX Match report includes a response score for antidepressants, opioids, statins, immunosuppressants, antidiabetics and many other drug classes.

By incorporating genomic information, doctors can pinpoint which medications are most likely to work for their patients. This may help reduce repeat visits for drug side effects.

"There are options; there are different ways that we can be more precise in the ways that we prescribe. If a drug is not working, or you feel like you might not be on the right drug, speak with your physician to see if RX Match is a good match for you," said Gillman.

Joan Eggert has a history of hypertension.

"I've had high blood pressure since age 22. I've been on all kinds of medications up to three or more at once with not really great results," said Joan.

She was placed on a new blood pressure drug shortly before she went on vacation. While she was scuba diving in Fiji, she started to experience shortness of breath.

"Heard the gurgles in my lungs and knew I had Immersion Pulmonary Edema because I am a hyperbaric doctor," said Joan. "Finally we got to shore and after hours on oxygen, I was ok.

Joan was determined to figure out what went wrong. Eventually, she had an RX Match test done.

The test told her she didn't respond well to the medication she was on. Doctors switched her medication and now she is doing better.

"I finally got the answer to my question; the medication I was switched to, I don't metabolize properly," said Joan.

Talk with your doctor if you are interested in learning more about RX Match.

For more information, click here.

Follow Check Your Health on Facebook, Twitter, and YouTube.

View post:
Check Your Health: How RX Match can help find the right medication for you - KUTV 2News

Global Respiratory Infection Diagnostic Markets by Technology, PLEX, Place and Region – COVID-19 Impact & Forecasting/Analysis with Executive…

The "Respiratory Infection Diagnostic Markets by Technology, PLEX, Place and by Region with COVID-19 Impact & Forecasting/Analysis, and Executive and Consultant Guides 2020-2024" report has been added to ResearchAndMarkets.com's offering.

COVID-19 has broken open the market for point-of-care testing of respiratory infections. Now the competition for market share begins in earnest. Large new markets are opening up in health facilities, clinics, physicians' offices and elsewhere. And let's not forget the screening market, not just for COVID-19 but for the rest of the 20-odd respiratory pathogens as well. Multiplex or single-plex?

Explore the rapidly changing market as competitors jockey for position in new markets that are not yet well understood.

New technology is forever changing the diagnosis of respiratory infections. Shrinking time to result is opening up markets multiple times the size of current microbiology based practice. Diagnosis has already moved into the Emergency Room. It is now moving to the Physician's Office Lab. Could the Home be next?

The multiplex factor is creating market confusion while lowering costs and improving care, but important factors are holding back progress. The widespread nature of respiratory infections (young people can get eight colds a year) means that potential market sizes are enormous. Respiratory, already the largest infectious disease category, could multiply in size. This is a growth opportunity for all diagnostic companies. Understand the opportunity and the risk with this in-depth report.

Key Topics Covered

i. Respiratory Infections Diagnostic Market - Strategic Situation Analysis and Impact of COVID-19

ii. Guide for Executives, Marketing, Sales and Business Development Staff

iii. Guide for Management Consultants and Investment Advisors

1. Introduction and Market Definition

1.1 What are Respiratory Infections?

1.2 The Role of Diagnosis & Treatment

1.3 Market Definition

1.3.1 Revenue Market Size

1.4 Methodology

1.4.1 Authors

1.4.2 Sources

1.5 A Spending Perspective on Clinical Laboratory Testing

1.5.1 An Historical Look at Clinical Testing

2. Market Overview

2.1 Players in a Dynamic Market

2.1.1 Academic Research Lab

2.1.2 Diagnostic Test Developer

2.1.3 Instrumentation Supplier

2.1.4 Distributor and Reagent Supplier

2.1.5 Independent Testing Lab

2.1.6 Public National/regional lab

2.1.7 Hospital lab

2.1.8 Physician Office Labs

2.1.9 Audit Body

2.1.10 Certification Body

2.2 Respiratory Infections

2.2.1 Upper vs. Lower - Marketing Implications

2.2.2 Understanding the Role of Pneumonia

2.2.3 Bacterial Infections

2.2.3.1 Streptococcal Infections

2.2.3.2 Acute Otitis Media

2.2.3.3 Bacterial Rhinosinusitis

2.2.3.4 Diphtheria

2.2.3.5 Pneumococcal Pneumonia

2.2.3.6 Haemophilus Pneumonia

2.2.3.7 Mycoplasma Pneumonia (Walking Pneumonia)

2.2.3.8 Chlamydial Pneumonias and Psittacosis

2.2.3.9 Health Care-Associated Pneumonia

2.2.3.10 Pseudomonas Pneumonia

2.2.3.11 Pertussis (Whooping Cough)

2.2.3.12 Legionnaires Disease

2.2.4 Tuberculosis - A Special Case

2.2.5 Viral Infections

2.2.5.1 The Common Cold

2.2.5.2 Influenza

2.2.5.3 Viral Pneumonia

2.2.5.4 SARS and MERS

2.2.5.5 Measles (Rubeola)

2.2.5.6 Rubella (German Measles)

2.2.5.7 Chickenpox and Shingles

2.2.6 Fungal and Other Pathogens

2.2.6.1 Histoplasmosis

2.2.6.2 Coccidioidomycosis

2.2.6.3 Blastomycosis

2.2.6.4 Mucormycosis

2.2.6.5 Aspergillosis

2.2.6.6 Pneumocystis Pneumonia

2.2.6.7 Cryptococcosis

2.3 Diagnostics - A Changing Role

2.3.1 Historical Practice

2.3.2 Current Diagnostics

2.3.3 The Multiplex Vector

2.3.4 Future Diagnostics - The Question of When and Where

2.3.5 Respiratory Infection Diagnostics - The Destination

2.3.6 Diagnostics as Defensive Weapons

3. Market Trends

3.1 Factors Driving Growth

3.1.1. Syndromic Multiplexing

3.1.2 T.A.T

3.1.3 Antimicrobial Resistance Movement

3.1.4 Pandemic Mitigation

3.1.5 An Aging at Risk Population

3.2 Factors Limiting Growth

3.2.1 The Cost Curve

3.2.2 Regulation and coverage

3.2.3 Laissez Faire

3.3 Instrumentation and Automation

3.3.1 The Shrinking Multiplexing Machine

3.3.2 Bioinformatics Networking and Anonymous Reporting

3.4 Diagnostic Technology Development

3.4.1 The Key Role of Time to Result

3.4.2 Single Cell Genomics Changes the Picture

3.4.3 Pharmacogenomics Blurs Diagnosis and Treatment

3.4.4 Pathogen Identification - A Projected Timetable of the Future


4. Respiratory Infection Diagnostics Recent Developments

4.1 Recent Developments - Importance and How to Use This Section

4.1.1 Importance of These Developments

4.1.2 How to Use This Section

5. Profiles of Key Players

6. The Global Market for Respiratory Infection Diagnostics

6.1 Global Market Overview by Country

6.1.1 Table - Global Market by Country

6.1.2 Chart - Global Market by Country

See more here:
Global Respiratory Infection Diagnostic Markets by Technology, PLEX, Place and Region - COVID-19 Impact & Forecasting/Analysis with Executive...

Interpace Biosciences to Host Conference Call and Webcast to Discuss Fourth Quarter, Full Year 2019 and Preliminary First Quarter 2020 Financial…

PARSIPPANY, NJ, April 22, 2020 (GLOBE NEWSWIRE) -- Interpace (IDXG) announced today that it will release its financial results for the fourth quarter and full year 2019 this afternoon. Company management will host a conference call and webcast to discuss its financial results and provide a general business update today, Wednesday, April 22, 2020 at 5:00 p.m. Eastern time.

The conference call can be accessed as follows:

Date and Time: Wednesday, April 22, 2020 at 5:00 p.m. ET
Dial-in Number (Domestic): +1 (877) 407-9716
Dial-in Number (International): +1 (201) 493-6779
Confirmation Number: 137025801
Webcast Access: http://public.viavid.com/index.php?id=139473

Following the conclusion of the conference call, a replay will be available through May 6, 2020. The live, listen-only webcast of the conference call may also be accessed by visiting the Investors section of the Company's website at http://www.interpacediagnostics.com. A replay of the webcast will be available following the conclusion of the call and will be archived on the Company's website for 90 days.

About Interpace Biosciences

Interpace Biosciences is a leader in enabling personalized medicine, offering specialized services along the therapeutic value chain from early diagnosis and prognostic planning to targeted therapeutic applications.

Clinical services, through the Interpace Diagnostics division, provide clinically useful molecular diagnostic tests, bioinformatics and pathology services for evaluating risk of cancer by leveraging the latest technology in personalized medicine for improved patient diagnosis and management. Interpace has four commercialized molecular tests and one test in a clinical evaluation process (CEP): PancraGEN for the diagnosis and prognosis of pancreatic cancer from pancreatic cysts; ThyGeNEXT for the diagnosis of thyroid cancer from thyroid nodules utilizing a next-generation sequencing assay; ThyraMIR for the diagnosis of thyroid cancer from thyroid nodules utilizing a proprietary gene expression assay; and RespriDX, which differentiates lung cancer of primary versus metastatic origin. In addition, BarreGEN, for Barrett's Esophagus, is currently in a clinical evaluation program whereby we gather information from physicians using BarreGEN to assist in positioning the product for full launch, partnering and potentially supporting reimbursement with payers.

Pharma services, through the Pharma Solutions Division, provide pharmacogenomics testing, genotyping, biorepository and other customized services to the pharmaceutical and biotech industries. Pharma services also advance personalized medicine by partnering with pharmaceutical, academic, and technology leaders to effectively integrate pharmacogenomics into their drug development and clinical trial programs, with the goal of delivering safer, more effective drugs to market more quickly and improving patient care.

For more information, please visit Interpace Biosciences' website at www.interpace.com.

Forward-looking Statements

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934 and the Private Securities Litigation Reform Act of 1995, relating to the Company's future financial and operating performance. The Company has attempted to identify forward-looking statements by terminology including "believes," "estimates," "anticipates," "expects," "plans," "projects," "intends," "potential," "may," "could," "might," "will," "should," "approximately" or other words that convey uncertainty of future events or outcomes. These statements are based on current expectations, assumptions and uncertainties involving judgments about, among other things, future economic, competitive and market conditions and future business decisions, all of which are difficult or impossible to predict accurately and many of which are beyond the Company's control. These statements also involve known and unknown risks, uncertainties and other factors that may cause the Company's actual results to be materially different from those expressed or implied by any forward-looking statement, including the potential adverse impact of the Coronavirus (COVID-19) pandemic and our history of operating losses and the limited revenue generated by our clinical and pharma solutions customers. Additionally, all forward-looking statements are subject to the Risk Factors detailed from time to time in the Company's most recent Annual Report on Form 10-K filed on April 22, 2020. Because of these and other risks, uncertainties and assumptions, undue reliance should not be placed on these forward-looking statements. In addition, these statements speak only as of the date of this press release and, except as may be required by law, the Company undertakes no obligation to revise or update publicly any forward-looking statements for any reason.

Contacts:
Investor Relations
Edison Group
Joseph Green
(646) 653-7030
jgreen@edisongroup.com

Continued here:
Interpace Biosciences to Host Conference Call and Webcast to Discuss Fourth Quarter, Full Year 2019 and Preliminary First Quarter 2020 Financial...

Microsoft Office 365: How these Azure machine-learning services will make you more productive and efficient – TechRepublic

Office can now suggest better phrases in Word or entire replies in Outlook, design your PowerPoint slides, and coach you on presenting them. Microsoft built those features with Azure Machine Learning and big models - while keeping your Office 365 data private.

The Microsoft Office clients have been getting smarter for several years: the first version of Editor arrived in Word in 2016, based on Bing's machine learning, and it's now been extended to include the promised Ideas feature with extra capabilities. More and more of the new Office features in the various Microsoft 365 subscriptions are underpinned by machine learning.

You get the basic spelling and grammar checking in any version of Word. But if you have a subscription, Word, Outlook and a new Microsoft Editor browser extension will be able to warn you if you're phrasing something badly, using gendered idioms so common that you may not notice who they exclude, hewing so closely to the way your research sources phrased something that you need to either write it in your own words or enter a citation, or just not sticking to your chosen punctuation rules.

SEE: Choosing your Windows 7 exit strategy: Four options (TechRepublic Premium)

Word can use the real-world number comparisons that Bing has had for a while to make large numbers more comprehensible. It can also translate the acronyms you use inside your organization -- and distinguish them from what someone in another industry would mean by them. It can even recognise that those few words in bold are a heading and ask if you want to switch to a heading style so they show up in the table of contents.

Outlook on iOS uses machine learning to turn the timestamp on an email into a friendlier 'half an hour ago' when you have it read out your messages. Mobile and web Outlook use machine learning and natural-language processing to suggest three quick replies for some messages, which might include scheduling a meeting.
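
As a rough illustration of the 'friendlier timestamp' idea (not Outlook's actual implementation, which the article attributes to machine learning), a simple rule-based Python sketch might look like this:

    from datetime import datetime, timezone

    def humanize(ts, now):
        # Render an absolute timestamp as a friendlier relative phrase.
        minutes = int((now - ts).total_seconds() // 60)
        if minutes < 1:
            return "just now"
        if minutes == 30:
            return "half an hour ago"
        if minutes < 60:
            return f"{minutes} minutes ago"
        hours = minutes // 60
        if hours < 24:
            return f"about {hours} hour{'s' if hours > 1 else ''} ago"
        return ts.strftime("%b %d")

    now = datetime(2020, 4, 22, 11, 0, tzinfo=timezone.utc)
    then = datetime(2020, 4, 22, 10, 30, tzinfo=timezone.utc)
    print(humanize(then, now))  # half an hour ago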

Excel has the same natural-language queries for spreadsheets as Power BI, letting you ask questions about your data. PowerPoint Designer can automatically crop pictures, put them in the right place on the slide and suggest a layout and design; it uses machine learning for text and slide structure analysis, image categorisation, recommending content to include and ranking the layout suggestions it makes. The Presenter Coach tells you if you're slouching, talking in a monotone or staring down at your screen all the time while you're talking, using machine learning to analyse your voice and posture from your webcam.

How PowerPoint Designer uses AML (Azure Machine Learning).

Image: Microsoft

Many of these features are built using the Azure Machine Learning service, Erez Barak, partner group program manager for AI Platform Management, told TechRepublic. At the other extreme, some call the pre-built Azure Cognitive Services APIs for things like speech recognition in the presentation coach, as well as captioning PowerPoint presentations in real-time and live translation into 60-plus languages (and those APIs are themselves built using AML).

Other features are based on customising pre-trained models like Turing Neural Language Generation, a seventeen-billion parameter deep-learning language model that can answer questions, complete sentences and summarize text -- useful for suggesting alternative phrases in Editor or email replies in Outlook. "We use those models in Office after applying some transfer learning to customise them," Barak explained. "We leverage a lot of data, not directly but by the transfer learning we do; that's based on big data to give us a strong natural-language understanding base. Everything we do in Office requires that context; we try to leverage the data we have from big models -- from the Turing model especially given its size and its leadership position in the market -- in order to solve for specific Office problems."
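
To make the transfer-learning idea concrete: a common pattern is to freeze a large pretrained model's weights and train only a small task-specific head on domain data. The PyTorch sketch below is a generic illustration under that assumption; the Turing models themselves are not public, so the encoder here is a stand-in:

    import torch
    import torch.nn as nn

    # Stand-in "pretrained" encoder; a Turing-scale model is not publicly available.
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
        num_layers=6,
    )
    for p in encoder.parameters():
        p.requires_grad = False  # freeze the base model

    head = nn.Linear(512, 2)  # small task head, e.g. rewrite-suggestion quality
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)

    x = torch.randn(16, 32, 512)            # batch of 16 sequences, length 32
    features = encoder(x)                   # frozen forward pass
    logits = head(features.mean(dim=1))     # pool over tokens, then classify
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (16,)))
    loss.backward()                         # gradients flow only into the head
    optimizer.step()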

AML is a machine-learning platform for both Microsoft product teams and customers to build intelligent features that can plug into business processes. It provides automated pipelines that take large amounts of data stored in Azure Data Lake, merge and pre-process the raw data, and feed it into distributed training running in parallel across multiple VMs and GPUs. The machine-learning version of the automated deployment common in DevOps is known as MLOps. Office machine-learning models are often built using frameworks like PyTorch or TensorFlow; the PowerPoint team uses a lot of Python and Jupyter notebooks.

The Office data scientists experiment with multiple different models and variations; the best model then gets stored back into Azure Data Lake and downloaded into AML using the ONNX runtime (open-sourced by Microsoft and Facebook) to run in production without having to be rebuilt. "Packaging the models in the ONNX runtime, especially for PowerPoint Designer, helps us to normalise the models, which is great for MLOps; as you tie these into pipelines, the more normalised assets you have, the easier, simpler and more productive that process becomes," said Barak.
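
The export-and-serve flow Barak describes can be sketched with the public tooling: train in PyTorch, export once to ONNX, then score with the ONNX Runtime and no PyTorch dependency. The toy model and file names below are illustrative:

    import numpy as np
    import torch
    import onnxruntime as ort

    # Define (or train) a model in PyTorch, then export it to ONNX once.
    model = torch.nn.Sequential(
        torch.nn.Linear(8, 4), torch.nn.ReLU(), torch.nn.Linear(4, 1)
    )
    dummy = torch.randn(1, 8)
    torch.onnx.export(
        model, dummy, "model.onnx",
        input_names=["features"], output_names=["score"],
        dynamic_axes={"features": {0: "batch"}, "score": {0: "batch"}},
    )

    # Score in production with the ONNX Runtime, without rebuilding the model.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    batch = np.random.randn(32, 8).astype(np.float32)
    (scores,) = session.run(["score"], {"features": batch})
    print(scores.shape)  # (32, 1)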

ONNX also helps with performance when it comes to running the models in Office, especially for Designer. "If you think about the number of inference calls or scoring calls happening, performance is key: every small percentage and sub-percentage point matters," Barak pointed out.

A tool like Designer that's suggesting background images and videos to use as content needs a lot of compute and GPU to be fast enough. Some of the Turing models are so large that they run on the FPGA-powered Brainwave hardware inside Azure because otherwise they'd be too slow for workloads like answering questions in Bing searches. Office uses the AML compute layer for training and production which, Barak said, "provides normalised access to different types of compute, different types of machines, and also provides a normalised view into the performance of those machines".

"Office's training needs are pretty much bleeding edge: think long-running, GPU-powered, high-bandwidth training jobs that could run for days, sometimes for weeks, across multiple cores, and require a high level of visibility into the end process as well as a high level of reliability," Barak explained. "We leverage a lot of high-performing GPUs for both training the base models and transfer learning." Although the size of training data varies between the scenarios, Barak estimates that fine-tuning the Turing base model with six months of data would use 30-50TB of data (on top of the data used to train the original model).

Acronyms accesses your Office 365 data, because it needs to know which acronyms your organisation uses.

Image: Mary Branscombe/TechRepublic

The data used to train Editor's rewrite suggestions includes documents written by people with dyslexia, and many of the Office AI features use anonymised usage data from Office 365. Acronyms is one of the few features that specifically uses your own Office 365 data, because it needs to find out which acronyms your organisation uses, but that isn't shared with any other Office users. Microsoft also uses public data for many features rather than trying to mine it from private Office documents. The similarity checker uses Bing data, and Editor's sentence rewrite uses public data like Wikipedia as well as public news data to train on.

As the home of so many documents, Office 365 has a wealth of data, but it also has strong compliance policies and processes that Microsoft's data scientists must follow. Those policies change over time as laws change or Office gets accredited to new standards -- "think of it as a moving target of policies and commitments Office has made in the past and will continue to make," Barak suggested. "In order for us to leverage a subset of the Office data in machine learning, naturally, we adhere to all those compliance promises."

LEARN MORE: Office 365 Consumer pricing and features

But models like those used in Presentation Designer need frequent retraining (at least every month) to deal with new data, such as which of the millions of slide designs it suggests get accepted and are retained in presentations. That data is anonymised before it's used for training, and the training is automated with AML pipelines. But it's important to score retrained models consistently with existing models so you can tell when there's an improvement, or if an experiment didn't pan out, so data scientists need repeated access to data.

"People continuously use that, so we continuously have new data around people's preferences and choices, and we want to continuously retrain. We can't have a system that needs to be adjusted over and over again, especially in the world of compliance. We need to have a system that's automatable. That's reproducible -- and frankly, easy enough for those users to use," Barak said.

"They're using AML Data Sets, which allow them to access this data while using the right policies and guard rails, so they're not creating copies of the data -- which is a key piece of keeping the compliance and trust promise we make to customers. Think of them as pointers and views into subsets of the data that data scientists want to use for machine learning."It's not just about access; it's about repeatable access, when the data scientists say 'let's bring in that bigger model, let's do some transfer learning using the data'. It's very dynamic: there's new data because there's more activity or more people [using it]. Then the big models get refreshed on a regular basis. We don't just have one version of the Turing model and then we're done with it; we have continuous versions of that model which we want to put in the hands of data scientists with an end-to-end lifecycle."

Those data sets can be shared without the risk of losing track of the data, which means other data scientists can run experiments on the same data sets. This makes it easier for them to get started developing a new machine-learning model.
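
For readers unfamiliar with AML Data Sets, the "pointer, not copy" pattern looks roughly like this with the Azure ML Python SDK; the workspace configuration and dataset name here are hypothetical:

    from azureml.core import Workspace, Dataset

    # Connect to an existing workspace (reads a local config.json).
    ws = Workspace.from_config()

    # A registered dataset is a reference into the data lake, not a copy.
    usage = Dataset.get_by_name(ws, name="designer-usage-anonymized")

    # Narrow the view first, then materialize only the slice an experiment needs.
    sample = usage.keep_columns(["slide_layout", "accepted"]).take(10000)
    df = sample.to_pandas_dataframe()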

Getting AML right for Microsoft product teams also helps enterprises who want to use AML for their own systems. "If we nail the likes and complexities of Office, we enable them to use machine learning in multiple business processes," Barak said. "And at the same time we learn a lot about automation and requirements around compliance that also very much applies to a lot of our third-party customers."


Read more:
Microsoft Office 365: How these Azure machine-learning services will make you more productive and efficient - TechRepublic

Apple is on a hiring freeze … except for its Hardware, Machine Learning and AI teams – Thinknum Media

Word in the tech community is that Apple ($NASDAQ:AAPL) employees are beginning to report hiring freezes for certain groups within the company. But other reports say that hiring is continuing at the Cupertino tech giant. In fact, we've reported on the former.

It turns out that both reports are correct. For some divisions, like Marketing and Corporate Functions, openings have been reduced. But for others, like Hardware and Machine Learning, openings and subsequent hiring appear to be as brisk as ever.

To be clear, overall, job listings at Apple have been cut back.

As recently as mid-March, Apple job listings were nearing the 6,000 mark, which would have been the company's most prolific hiring spree in history. But in late March, it became clear that no one would be going into the office any time soon, and openings quickly began disappearing from Apple's recruitment site. As of this week, openings at Apple are down to 5,240, signaling a decrease in hiring of about 13%.

But not all divisions are stalling their job listings. Neither Apple's "Hardware" nor "Machine Learning and AI" group shows a notable decline in job listings.

Hardware openings are flat at worst: today's 1,570 openings aren't significantly different from a high of 1,600 in March.

Apple's "Machine Learning and AI" group remains as healthy as ever when it comes to new listings being posted to the company's careers sites. As of this week, the team has 334 openings. Last month, that number was 300, an 11% increase in hiring activity.

However, other groups at Apple have seen significant decreases in job listings, including "Software and Services", "Marketing", and "Corporate Functions".

Apple's "Software and Services" team saw a siginificant drop in openings, particularly on April 10, when around 110 openings were cut from the company's recruiting website overnight. Since mid-March, openings on the team have fallen by about 12%.

Between April 14 and April 23, the number of listings for Apple's "Marketing" team dropped by 84. In late March, Apple was seeking 311 people for its Marketing team. Since then, openings have fallen by 36% for the team.

"Corporate Functions" jobs at Apple, which include everything from HR to Finance and Legal, have also seen a steep decline in recent weeks. In late March, Apple listed more than 300 openings for the team. As of this week, it has just around 200 openings, a roughly 1/3 hiring freeze.

So is Apple in the middle of a hiring freeze? Some parts of the company appear frozen. Others appear as hot as ever. Given the in-person nature of Marketing and Corporate Functions jobs, it's not surprising that the company would tap the brakes on interviewing for such positions. On the other hand, engineers working on hardware and machine learning can be interviewed remotely and onboarded with equipment delivery.

So, yes, and yes. Apple is, and is not, in the middle of a hiring freeze.

Thinknum tracks companies using the information they post online - jobs, social and web traffic, product sales and app ratings - and creates data sets that measure factors like hiring, revenue and foot traffic. Data sets may not be fully comprehensive (they only account for what is available on the web), but they can be used to gauge performance factors like staffing and sales.

See original here:
Apple is on a hiring freeze ... except for its Hardware, Machine Learning and AI teams - Thinknum Media

IBM’s The Weather Channel app using machine learning to forecast allergy hotspots – TechRepublic

The Weather Channel is now using artificial intelligence and weather data to help people make better decisions about going outdoors based on the likelihood of suffering from allergy symptoms.

Amid the COVID-19 pandemic, most people are taking precautionary measures in an effort to ward off coronavirus, which is highly communicable and dangerous. It's no surprise that we gasp at every sneeze, cough, or even sniffle from others and ourselves. Allergy sufferers may find themselves apologizing awkwardly, quickly indicating they don't have COVID-19 but have allergies, which are often treated with sleep-inducing antihistamines that cloud critical thinking.

The most common culprits and indicators used to predict symptoms (ragweed, grass, and tree pollen readings) are often inconsistently tracked across the country. But artificial intelligence (AI) innovation from IBM's The Weather Channel is coming to the rescue of the roughly 50 million Americans who suffer from allergies.

The Weather Channel's new tool shows a 15-day allergy forecast based on ML.

Image: Teena Maddox/TechRepublic

IBM's The Weather Channel is now using machine learning (ML) to forecast allergy symptoms. IBM data scientists developed a new tool on The Weather Channel app and weather.com, "Allergy Insights with Watson," to predict your risk of allergy symptoms.

Weather can also drive allergy behaviors. "As we began building this allergy model, machine learning helped us teach our models to use weather data to predict symptoms," said Misha Sulpovar, product leader, consumer AI and ML, IBM Watson media and weather. Sulpovar's role is focused on using machine learning and blockchain to develop innovative and intuitive new experiences for users of The Weather Channel's digital properties, specifically weather.com and The Weather Channel smartphone apps.

SEE: IBM's The Weather Channel launches coronavirus map and app to track COVID-19 infections (TechRepublic)

Any allergy sufferer will tell you it can be absolutely miserable. "If you're an allergy sufferer, you understand that knowing in advance when your symptom risk might change can help anyone plan ahead and take action before symptoms may flare up," Sulpovar said. "This allergy risk prediction model is much more predictive around users' symptoms than other allergy trackers you are used to, which mostly depend on pollen, an imperfect factor."

Sulpovar said the project has been in development for about a year. "We included the tool within The Weather Channel app and weather.com because digital users come to us for local weather-related information," and not only to check weather forecasts, "but also for details on lifestyle impacts of weather on things like running, flu, and allergy."

He added, "Knowing how patients feel helps improve the model. IBM MarketScan (research database) is anonymized data from doctor visits of 100 million patients."

Daily pollen counts are also available on The Weather Channel app.

Image: Teena Maddox/TechRepublic

"A lot of what drives allergies are environmental factors like humidity, wind, and thunderstorms, as well as when specific plants in specific areas create pollen," Sulpovar said. "Plants have predictable behaviorfor example, the birch tree requires high humidity for birch pollen to burst and create allergens. To know when that will happen in different locations for all different species of trees, grasses, and weeds is huge, and machine learning is a huge help to pull it together and predict the underlying conditions that cause allergens and symptoms. The model will select the best indicators for your ZIP code and be a better determinant of atmospheric behavior."

"Allergy Insights with Watson" anticipates allergy symptoms up to 15 days in advance. AI, Watson, and its open multi-cloud platform help predict and shape future outcomes, automate complex processes, and optimize workers' time. IBM's The Weather Channel and weather.com are using this machine learning Watson to alleviate some of the problems wrought by allergens.

Sulpovar said, "Watson is IBM's suite of enterprise-ready AI services, applications, and tooling. Watson helps unlock value from data in new ways, at scale."

Data scientists have discovered a more accurate representation of allergy conditions. "IBM Watson machine learning trained the model to combine multiple weather attributes with environmental data and anonymized health data to assess when the allergy symptom risk is high," Sulpovar explained. "The model more accurately reflects the impact of allergens on people across the country in their day-to-day lives."

The model is challenged by changing conditions and the impact of climate change, but there has been a 25% to 50% increase in better decision-making based on allergy symptoms.

It may surprise long-time allergy sufferers, who often cite pollen as the cause of their allergies: "We found pollen is not a good predictor of allergy risk alone, and that pollen sources are unreliable and spotty and cover only a small subset of species," Sulpovar explained. "Pollen levels are measured by humans in specific locations, but sometimes those measurements are few and far between, or not updated often. Our team found that using AI and weather data instead of just pollen data resulted in a 25-50% increase in making better decisions based on allergy symptoms."

Available on The Weather Channel app for iOS and Android, the tool can also be found online at www.weather.com. Users will be given an accurate forecast, be alerted to flare-ups, and be provided with practical tips to reduce seasonal allergies.

This story was updated on April 23, 2020 to correct the spelling of Misha Sulpovar's name.



Read the original here:
IBM's The Weather Channel app using machine learning to forecast allergy hotspots - TechRepublic

The industries that can’t rely on machine learning – The Urban Twist

Ever since we started relying on machines and automation, people have been worried about the future of work and, specifically, whether robots will take over their jobs. And it seems this worry is becoming increasingly justified, as an estimated 40% of jobs could be replaced by robots performing automated tasks by 2035. There is even a website dedicated to workers worried about whether they could eventually be replaced by robots.

While machines and artificial intelligence are becoming more complex and, therefore, more able to replace humans for menial tasks, that doesn't necessarily apply to a wide number of industries. Here, we'll go through the sectors that continue to require the human touch.

Despite scientists' best efforts, the language and translation industry cannot be replaced by machines. Currently, automatic translation programmes are being developed with deep learning, a form of artificial intelligence which allows the computer to identify and correct its own mistakes through prolonged use and understanding. However, this still isn't enough to guarantee a correct translation, as deep learning requires external factors, like language itself, to remain the same over time. As we know, language is constantly developing, often with changes so subtle you can't tell it's happening. For a machine to be able to accurately translate texts or speech, it would need to be constantly updated with every new modification, across all languages.

Machines are also less able to pick up on the nuances found in speech or text. Things like sarcasm, jokes, or pop culture references are not easily translated, as the new audience may not understand them. Translating idioms is a particularly common example of this, as these phrases are generally unique to their dialect. In the UK, for example, the phrase 'it's raining cats and dogs' means it's raining heavily. You would not want this translated on a literal level. As London Translations states in an article on the importance of using professionals for financial text translation, literal translations are technically correct but read awkwardly and can be difficult to comprehend due to poor knowledge of the source language. Needless to say, these issues would be totally unacceptable in a document as important as a financial report.

Translating with accuracy not only requires fluency in both languages, but also a complete understanding of cultural differences and how they compare. Machines are simply not able to make these connections naturally without having the information already inputted by a person.

Finding the perfect candidate for a role can be stressful, especially if you have a pool of excellent potential employees to choose from. However, there are now algorithms that recruiters can use to help speed the process up and, theoretically, pick the most suitable person for the job. The technology is being praised for its ability to remove discrimination, as it simply examines raw data and thus omits any sense of natural prejudice. It can also speed up the hiring process, as a computer can quickly sift through applicants and present the most relevant ones, saving someone the job of manually reading through every application before making a decision.

However, in practice, it's not that simple. Recruiting the right candidate should be based on more than qualifications and experience. Personality, attitude, and cultural fit should also be considered when recruiters are evaluating a candidate, none of which can be picked up on by machines.

One way of minimising this risk could be to introduce the algorithm at an earlier stage, through targeted ads or to help sift through initial applications. This allows recruiters to look at relevant candidates, rather than those that wouldn't have passed the initial screening anyway. However, this could conversely introduce bias to the recruitment process. The Harvard Business Review found that the algorithm effectively shapes the pool of candidates, giving a selection of applications that are all similar, fitting the mould the computer is looking for. The study found that targeted ads on social media for a cashier role were shown to an audience that was 85% women, while cab driver ads were shown to an audience that was around 75% black. This happened because the algorithm reproduced bias from the real world, without human intervention. Having people physically check the applications can prevent this bias, introducing a more conscious effort to carefully screen each candidate on their own merits.

More people than ever before are meeting their partners online, according to a study published by Stanford University. And while a matchmaking algorithm sounds like a dream for singletons, it doesn't mean these apps are able to effectively set you up with your life partner. As these algorithms are actually the intellectual property of each app, Dr Samantha Joel, assistant professor at Western University in London, Canada, created her own app with colleagues. Volunteers were asked to complete a questionnaire about themselves and their ideal partners, much like on typical dating websites. After answering over 100 questions, the data was analysed and volunteers were set up on four-minute speed dates with potential candidates. Joel then asked the volunteers about their feelings towards each of their dates.

These results identified the three things needed to predict romantic interest: actor desire (how much people liked their dates), partner desire (how much people were liked by their dates), and attractiveness. The researchers were able to subtract attractiveness from the scores of romantic interest, giving a measure of compatibility. However, while the algorithm could accurately predict actor and partner desire, it failed on compatibility. Instead, it may be worth sticking to the second most common way of meeting a partner: through a mutual friend. Your friends will be able to make educated decisions about relationships, as they have a deeper understanding of preferences and compatibility in a way that a machine simply can't replicate.
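
One simple way to read "subtracting attractiveness" is as taking the residual of romantic interest after regressing out attractiveness. A hypothetical sketch with synthetic data (not Joel's actual method):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    attractiveness = rng.normal(size=(200, 1))
    interest = 0.6 * attractiveness[:, 0] + rng.normal(scale=0.5, size=200)

    fit = LinearRegression().fit(attractiveness, interest)
    # The residual is the interest not explained by looks alone: "compatibility".
    compatibility = interest - fit.predict(attractiveness)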

Author Bio: Syna Smith is a chief editor of Business usa today. She also has experience in digital marketing.

More:
The industries that can't rely on machine learning - The Urban Twist

Artificial Intelligence & Advanced Machine learning Market is expected to grow at a CAGR of 37.95% from 2020-2026 – Latest Herald

According to BlueWeave Consulting, the global Artificial Intelligence & Advanced Machine Learning market reached USD 29.8 billion in 2019, is projected to reach USD 281.24 billion by 2026, and is anticipated to grow at a CAGR of 37.95% during the forecast period from 2020-2026, owing to increasing overall global investment in Artificial Intelligence technology.
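
As a quick sanity check, the implied compound annual growth rate over the seven years from 2019 to 2026 can be computed directly; the small gap from the reported 37.95% comes down to rounding and period conventions:

    start, end, years = 29.8, 281.24, 7
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.2%}")  # ~37.81%, broadly consistent with the reported 37.95%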

Request to get the report sample pages at: https://www.blueweaveconsulting.com/artificial-intelligence-and-advanced-machine-learning-market-bwc19415/report-sample

Artificial Intelligence (AI) is a computer science, algorithm- and analytics-driven approach to replicating human intelligence in a machine, and Machine Learning (ML) is an application of artificial intelligence which allows software applications to predict results accurately. The development of powerful and affordable cloud computing infrastructure is having a substantial impact on the growth potential of the artificial intelligence and advanced machine learning market. In addition, diversifying application areas of the technology, as well as a growing level of customer satisfaction among users of AI & ML services and products, is another factor currently driving the market. Moreover, in the coming years, applications of machine learning in various industry verticals are expected to rise exponentially. Proliferating data generation is another major driver for the AI & Advanced ML market. As natural learning develops, artificial intelligence and advanced machine learning technology are paving the way for effective marketing, content creation, and consumer interactions.

In the organization-size segment, the large enterprises segment is estimated to have the largest market share, while the SMEs segment is estimated to grow at the highest CAGR over the forecast period to 2026. Rapidly developing and highly active SMEs have raised the adoption of artificial intelligence and machine learning solutions globally, as a result of increasing digitization and rising cyber risks to critical business information and data. Large enterprises have been heavily adopting artificial intelligence and machine learning to extract the required information from large amounts of data and forecast the outcome of various problems.

Predictive analytics and machine learning are rapidly being used in retail, finance, and healthcare. The trend is estimated to continue as major technology companies invest resources in the development of AI and ML. Due to the large cost savings, effort savings, and reliability benefits of AI automation, machine learning is anticipated to drive the global artificial intelligence and advanced machine learning market through the forecast period to 2026.

Digitalization has become a vital driver of the artificial intelligence and advanced machine learning market across regions. Digitalization is increasingly propelling everything from hotel bookings and transport to healthcare in many economies around the globe, and has led to a rise in the volume of data generated by business processes. Moreover, business developers and key executives are opting for solutions that let them act as data modelers and provide them with an adaptive semantic model. With the help of artificial intelligence and advanced machine learning, business users are able to modify dashboards and reports, as well as filter or develop reports based on their key indicators.

Geographically, the global Artificial Intelligence & Advanced Machine Learning market is segmented into North America, Asia Pacific, Europe, the Middle East, Africa & Latin America. North America dominates the market: owing to the developed economies of the US and Canada, there is a high focus on innovations obtained from R&D, and the region is among the most competitive markets in the world. The Asia-Pacific region is estimated to be the fastest-growing region in the global AI & Advanced ML market. Rising awareness of business productivity, supplemented with competently designed machine learning solutions offered by vendors present in the region, has made Asia-Pacific a highly potential market.

Request to get the report description pages at: https://www.blueweaveconsulting.com/artificial-intelligence-and-advanced-machine-learning-market-bwc19415/

Artificial Intelligence & Advanced Machine Learning Market: Competitive Landscape

The major market players in the Artificial Intelligence & Advanced Machine Learning market are ICarbonX, TIBCO Software Inc., SAP SE, Fractal Analytics Inc., Next IT, Iflexion, Icreon, Prisma Labs, AIBrain, Oracle Corporation, Quadratyx, NVIDIA, Inbenta, Numenta, Intel, Domino Data Lab, Inc., Neoteric, UruIT, Waverley Software, and other prominent players, which are expanding their presence in the market by implementing various innovations and technologies.

Read more here:
Artificial Intelligence & Advanced Machine learning Market is expected to grow at a CAGR of 37.95% from 2020-2026 - Latest Herald

Learning to Trust AI in Troubled Times – AiThority

As budgets tighten amidst a global crisis, marketers are scrambling to find better sources of truth. Whether it's good prospecting performance, campaign management, or audience optimisation, there are many areas of success when it comes to the programmatic landscape. To meet this need, programmatic advertising is increasingly being driven by machine learning. So why would anyone doubt machine learning?

Machine learning models are, in many ways, expert liars. Machine learning optimises by any means necessary, and if blurring the truth or taking into account irrelevant information helps to optimise, then that is what occurs. It's scary to think how much an unchecked model could get away with in the fast-paced world of programmatic, where seconds count.

In fact, Artificial Intelligence (AI) researcher Sandra Wachter calls machine learning algorithms "black boxes", saying: "There is a lack of transparency around machine learning decisions and they're not necessarily justified or legitimate."

So, how can anyone ensure a machine learning model is telling the truth? The best way is to treat the model like a job interview candidate; that is, any statements made should be treated with the due amount of scepticism, and facts must always be checked.

When it comes to performance, everyone wants more of it. However, while a model might offer better performance on face value, it's important to ask exactly how that is measured.

Machine learning technology can be time-consuming and expensive, and it's remarkably easy to waste money on a bad algorithm. Having good, solid proof that a model works is a great way to avoid wasting budget. Fact-checking and asking for more evidence is vital if unsure of results, and if the model vendor can't offer access to an analyst who can back up the numbers with the work, move on.

Just because all data is accessible doesn't mean it should be used, or that each point of data is as important as another.

Is knowing whether someone has bought a product before as important as the colour of their socks? If all data in the machine learning model is being used, marketers must ask how and why. Why is all of the data used? Why is all this important? What tests were run to prove it? Is the model even allowed to use all the data?
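
One standard way to test whether a field actually matters is permutation importance: shuffle one column and see how much accuracy drops. A hypothetical scikit-learn sketch with synthetic data, echoing the purchase-history-versus-sock-colour example:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(2)
    bought_before = rng.integers(0, 2, 1000)
    sock_colour = rng.integers(0, 8, 1000)  # irrelevant by construction
    # Toy target driven almost entirely by purchase history.
    y = (bought_before & (rng.random(1000) > 0.2).astype(int)).astype(int)

    X = np.column_stack([bought_before, sock_colour])
    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, imp in zip(["bought_before", "sock_colour"], result.importances_mean):
        print(f"{name}: {imp:.3f}")  # sock_colour should score near zero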

Everyone's familiar with the concept and purpose of GDPR and similar global legislation. So you must ask how data is being used, or run the risk of severe fines.

Brands have clear metrics to hit, and it's the job of client services, together with data engineering, to ensure the machine learning optimises towards those KPIs. However, the beauty of machine learning is that it frees up the client services team to do more than just achieve the brand's KPIs; it can help brands achieve business goals, too.

With thousands of successful campaigns under their belts, client services teams know what works and what doesn't. Users should expect to be able to contact a specialist at any time to make sure the model is doing what the client wants.

When talking about purchasing machine learning with a vendor who can't (or won't) answer your questions, it's time to bail. Marketers must feel empowered to ask any and all questions of vendors, and, just like in a job interview, if the answer isn't a good fit then neither is the candidate.

Not knowing about or not understanding machine learning is accepted. However, what's not acceptable is not being allowed to question machine learning because it 'just does it'. In order to innovate, especially in volatile environments, everyone needs to better understand machine learning, and to achieve this, a two-way conversation is vital.

Silverbullet is the new breed of data-smart marketing services, designed to empower businesses to achieve through a unique hybrid of data services, insight-informed content and programmatic: a blend of artificial intelligence and human experience.

More about Silver Bullet: http://www.wearesivlerbullet.com


The rest is here:
Learning to Trust AI in Troubled Times - AiThority

AI used to predict Covid-19 patients’ decline before proven to work – STAT

Dozens of hospitals across the country are using an artificial intelligence system created by Epic, the big electronic health record vendor, to predict which Covid-19 patients will become critically ill, even as many are struggling to validate the tool's effectiveness on those with the new disease.

The rapid uptake of Epic's deterioration index is a sign of the challenges imposed by the pandemic: normally, hospitals would take time to test the tool on hundreds of patients, refine the algorithm underlying it, and then adjust care practices to implement it in their clinics.

Covid-19 is not giving them that luxury. They need to be able to intervene to prevent patients from going downhill, or at least make sure a ventilator is available when they do. Because it is a new illness, doctors don't have enough experience to determine who is at highest risk, so they are turning to AI for help, in some cases cramming a validation process that often takes months or years into a couple of weeks.


"Nobody has amassed the numbers to do a statistically valid test of the AI," said Mark Pierce, a physician and chief medical informatics officer at Parkview Health, a nine-hospital health system in Indiana and Ohio that is using Epic's tool. "But in times like this that are unprecedented in U.S. health care, you really do the best you can with the numbers you have, and err on the side of patient care."

Epic's index uses machine learning, a type of artificial intelligence, to give clinicians a snapshot of the risks facing each patient. But hospitals are reaching different conclusions about how to apply the tool, which crunches data on patients' vital signs, lab results, and nursing assessments to assign a 0 to 100 score, with a higher score indicating an elevated risk of deterioration. It was already used by hundreds of hospitals before the outbreak to monitor hospitalized patients, and is now being applied to those with Covid-19.


At Parkview, doctors analyzed data on nearly 100 cases and found that 75% of hospitalized patients who received a score in a middle zone between 38 and 55 were eventually transferred to the intensive care unit. In the absence of a more precise measure, clinicians are using that zone to help determine who needs closer monitoring and whether a patient in an outlying facility needs to be transferred to a larger hospital with an ICU.
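
To make the thresholds concrete, here is a hypothetical sketch of score-based triage using the Parkview middle zone described above; the actions are illustrative, not Epic's or Parkview's actual protocol:

    def triage(score: int) -> str:
        # The deterioration index is a 0-100 score; higher means more risk.
        if not 0 <= score <= 100:
            raise ValueError("deterioration index must be 0-100")
        if score < 38:
            return "routine monitoring"
        if score <= 55:
            return "closer monitoring; consider transfer to a hospital with an ICU"
        return "escalate: elevated risk of deterioration"

    print(triage(47))  # falls in the 38-55 middle zone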

Meanwhile, the University of Michigan, which has seen a larger volume of patients due to a cluster of cases in that state, found in an evaluation of 200 patients that the deterioration index is most helpful for those who scored on the margins of the scale.

For about 9% of patients whose scores remained on the low end during the first 48 hours of hospitalization, the health system determined they were unlikely to experience a life-threatening event and that physicians could consider moving them to a field hospital for lower-risk patients. On the opposite end of the spectrum, it found 10% to 12% of patients who scored on the higher end of the scale were much more likely to need ICU care and should be closely monitored. More precise data on the results will be published in coming days, although they have not yet been peer-reviewed.

Clinicians in the Michigan health system have been using the score thresholds established by the research to monitor the condition of patients during rounds and in a command center designed to help manage their care. But clinicians are also considering other factors, such as physical exams, to determine how they should be treated.

"This is not going to replace clinical judgment," said Karandeep Singh, a physician and health informaticist at the University of Michigan who participated in the evaluation of Epic's AI tool. "But it's the best thing we've got right now to help make decisions."

Stanford University has also been testing the deterioration index on Covid-19 patients, but a physician in charge of the work said the health system has not seen enough patients to fully evaluate its performance. "If we do experience a future surge, we hope that the foundation we have built with this work can be quickly adapted," said Ron Li, a clinical informaticist at Stanford.

Executives at Epic said the AI tool, which has been rolled out to monitor hospitalized patients over the past two years, is already being used to support care of Covid-19 patients in dozens of hospitals across the United States. They include Parkview, Confluence Health in Washington state, and ProMedica, a health system that operates in Ohio and Michigan.

"Our approach as Covid was ramping up over the last eight weeks has been to evaluate: does it look very similar to (other respiratory illnesses) from a machine learning perspective, and can we pick up that rapid deterioration?" said Seth Hain, a data scientist and senior vice president of research and development at Epic. "What we found is yes, and the result has been that organizations are rapidly using this model in that context."

Some hospitals that had already adopted the index are simply applying it to Covid-19 patients, while others are seeking to validate its ability to accurately assess patients with the new disease. It remains unclear how the use of the tool is affecting patient outcomes, or whether its scores accurately predict how Covid-19 patients are faring in hospitals. The AI system was initially designed to predict deterioration of hospitalized patients facing a wide array of illnesses. Epic trained and tested the index on more than 100,000 patient encounters at three hospital systems between 2012 and 2016, and found that it could accurately characterize the risks facing patients.

When the coronavirus began spreading in the United States, health systems raced to repurpose existing AI models to help keep tabs on patients and manage the supply of beds, ventilators and other equipment in their hospitals. Researchers have tried to develop AI models from scratch to focus on the unique effects of Covid-19, but many of those tools have struggled with bias and accuracy issues, according to a review published in the BMJ.

The biggest question hospitals face in implementing predictive AI tools, whether to help manage Covid-19 or advanced kidney disease, is how to act on the risk scores they provide. Can clinicians take actions that will prevent the deterioration from happening? If not, do the scores give them enough warning to respond effectively?

In the case of Covid-19, the latter question is the most relevant, because researchers have not yet identified any effective treatments to counteract the effects of the illness. Instead, they are left to deliver supportive care, including mechanical ventilation if patients are no longer able to breathe on their own.

Knowing ahead of time whether mechanical ventilation might be necessary is helpful, because doctors can ensure that an ICU bed and a ventilator or other breathing assistance is available.

Singh, the informaticist at the University of Michigan, said the most difficult part about making predictions based on Epic's system, which calculates a score every 15 minutes, is that patients' ratings tend to bounce up and down in a sawtooth pattern. A change in heart rate could cause the score to suddenly rise or fall. He said his research team found that it was often difficult to detect, or act on, trends in the data.

"Because the score fluctuates from 70 to 30 to 40, we felt like it's hard to use it that way," he said. "A patient who's high risk right now might be low risk in 15 minutes."

In some cases, he said, patients bounced around in the middle zone for days but then suddenly needed to go to the ICU. In others, a patient with a similar trajectory of scores could be managed effectively without need for intensive care.
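
One generic way to tame a sawtooth signal like the one Singh describes is to smooth the 15-minute samples before acting on them, for example with a trailing median. This is a standard signal-processing idea offered as illustration, not the approach the Michigan team reported using.

```python
from statistics import median

def smooth(scores, window=5):
    """Trailing-median smoothing over 15-minute score samples."""
    return [median(scores[max(0, i - window + 1):i + 1])
            for i in range(len(scores))]

raw = [70, 30, 40, 65, 35, 55, 60, 58, 62, 61]  # bouncy raw scores
print(smooth(raw))  # damped series that is easier to read for trends
```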

But Singh said that in about 20% of patients it was possible to identify threshold scores that could indicate whether a patient was likely to decline or recover. In the case of patients likely to decline, the researchers found that the system could give them up to 40 hours of warning before a life-threatening event would occur.

"That's significant lead time to help intervene for a very small percentage of patients," he said. As to whether the system is saving lives, or improving care in comparison to standard nursing practices, Singh said the answers will have to wait for another day. "You would need a trial to validate that question," he said. "The question of whether this is saving lives is unanswerable right now."

See the original post here:
AI used to predict Covid-19 patients' decline before proven to work - STAT

Rashed Ali Almansoori emphasizes on how Artificial Intelligence and Machine Learning will turn out to be game-changers – IBG NEWS

To stay in the race, it is important to evolve with time. Technology is booming, and the new-age era has seen many changes. Since the world met the internet, things have changed dramatically. From cellphones to smartphones, and computers to portable laptops, things have seamlessly changed, with social media taking over everyone. Earlier, Facebook was considered only for chatting; now it has become a medium to make money by creating content. Besides this, there are many other platforms, like YouTube, TikTok, and Instagram, on which to earn in millions. One of the key social media players, Rashed Ali Almansoori is a digital genius with years of experience.

He is a tech blogger who believes in keeping up with the latest trends. Being a digital creator, Rashed loves to create meaningful yet informative content about technology. "Authenticity is the key to establishing your target audience over the web," says the blogger. His other expertise includes web development, web designing, SEO building, and promoting brands over the digital domain. Rashed states that many businesses have taken the digital route considering the popularity social media has gained in the last decade. The coming decade will see many other innovations, of which Artificial Intelligence will be the main highlight.

The digital expert is currently learning the fundamentals of Artificial Intelligence (AI) and Machine Learning (ML). It would not be a surprise if machines perform tasks more effectively than humans in the coming time. "Upgrading yourself to stay in the game is the only solution," said Rashed. By learning the courses, he aims to integrate them into his work. Bringing novelty to his work is what the blogger is doing, and it will benefit him in the future. Over the past year, the 29-year-old techie has built a strong image of himself on social media, and his website is garnering millions of visitors from the Middle East and other countries.

See more here:
Rashed Ali Almansoori emphasizes on how Artificial Intelligence and Machine Learning will turn out to be game-changers - IBG NEWS

Global Machine Learning in Education Market 2020 Study of Growing Trends, Future Scope, New investment, Regional Analysis, Upcoming Business…

The report, entitled "Machine Learning in Education Market: Global Industry Analysis 2020-2026," is a comprehensive research study presenting significant data, by Reportspedia.com.

The Global Machine Learning in Education Market 2020 Industry Research Report offers market size, industry growth, share, investment plans and strategies, development trends, business ideas, and forecasts to 2026. The report highlights an exhaustive study of the major market, along with present and forecast market scenarios, to support useful business decisions.

The Machine Learning in Education business report includes primary research together with an all-inclusive investigation of qualitative as well as quantitative perspectives by industry specialists and key opinion leaders, to gain a deeper understanding of industry performance. [Request The COVID 19 Impact On This Market]. The report gives a sensible picture of the current industrial situation, incorporating historical and anticipated market size in terms of value and volume, technological advancement, and macroeconomic and governing factors in the market.

Top Key Manufacturers in the Machine Learning in Education Industry Report:

IBM, Microsoft, Google, Amazon, Cognizant, Pearson, Bridge-U, DreamBox Learning, Fishtree, Jellynote, Quantum Adaptive Learning

For Better Insights Go With This Free Sample Report Enabled With Respective Tables and Figures: https://www.reportspedia.com/report/technology-and-media/global-machine-learning-in-education-market-2019-by-company,-regions,-type-and-application,-forecast-to-2024/30939 #request_sample

Machine Learning in Education Market segment by Type:

Cloud-Based, On-Premise

Machine Learning in Education Market segment by Application:

Intelligent Tutoring Systems, Virtual Facilitators, Content Delivery Systems, Interactive Websites, Others

(***Our FREE SAMPLE COPY of the report gives a brief introduction to the research report outlook, the TOC, a list of tables and figures, and an overview of key players and key regions of the market.***)

The report offers a multi-step view of the Global Machine Learning in Education Market. The first section focuses on an overview of the market. This passage includes several definitions and arrangements, the industry's chain assembly as a whole, and various segmentations on the basis of solution, product, and region, along with the different geographic regions for the global market. This part of the section also integrates an all-inclusive analysis of the different government strategies and development plans that influence the market, its cost structures, and its industrial processes. The second subdivision of the report includes analytics on the Global Machine Learning in Education Market based on its revenue size in terms of value and volume.

Machine Learning in Education Market Segmentation Analysis:-

Global Machine Learning in Education market segmentation, by solution: Biological, Chemical, Mechanical
Global Machine Learning in Education market segmentation, by product: Stress Protection, Scarification, Pest Protection

Machine Learning in Education Market Regional Analysis: North America (United States, Canada), Europe (Germany, Spain, France, UK, Russia, and Italy), Asia-Pacific (China, Japan, India, Australia, and South Korea), Latin America (Brazil, Mexico, etc.), and the Middle East and Africa (GCC and South Africa).

We have designed the Machine Learning in Education report with a group of graphical representations, tables, and figures which portray a detailed picture of the Machine Learning in Education industry. In addition, the report has a clear objective of identifying probable stakeholders of the company. Highlighting the business chain framework explicitly offers an executive summary of market evolution, making it easier to figure out the obstacles and the uplifting profit statistics. In accordance with a competitive prospect, this report dispenses a broad array of features essential for measuring current market performance, along with technological advancements, business abstracts, strengths and weaknesses of market positions, and hurdles crossed by the leading market players to gain a leading position.

For more actionable insights into the competitive landscape of global Machine Learning in Education market, get a customized report here: https://www.reportspedia.com/report/technology-and-media/global-machine-learning-in-education-market-2019-by-company,-regions,-type-and-application,-forecast-to-2024/30939 #inquiry_before_buying

Some Notable Report Offerings:

The report's table of contents gives an exact idea of the international Machine Learning in Education market report:

Chapter 1 describes the report's key market inspection, product cost structure and analysis, and the market size and scope forecast from 2017 to 2026, as well as market momentum, factors affecting the expansion of the business, and a deep study of emerging and existing market holders.

Chapter 2 displays the top manufacturers of the Machine Learning in Education market, with their sales, revenue, and market share. Furthermore, the report analyzes the industry's import and export scenario, demand and supply ratio, labor cost, raw material supply, production cost, marketing sources, and downstream consumers of the market.

Chapters 3, 4, and 5 analyze the report's competitive landscape based on product type, region-wise consumption and import/export analysis, the compound annual growth rate of the market, and the forecast study from 2017 to 2026.

Chapter 6 gives an in-depth study of Machine Learning in Education business channels, market sponsors, vendors, distributors, merchants, market opportunities, and risk.

Chapter 7 gives the Machine Learning in Education market research findings and conclusion.

Chapter 8 gives the Machine Learning in Education appendix.

To Analyze Details Of Table Of Content (TOC) of Machine Learning in Education Market Report, Visit Here: https://www.reportspedia.com/report/technology-and-media/global-machine-learning-in-education-market-2019-by-company,-regions,-type-and-application,-forecast-to-2024/30939 #table_of_contents

Continue reading here:
Global Machine Learning in Education Market 2020 Study of Growing Trends, Future Scope, New investment, Regional Analysis, Upcoming Business...

One Supercomputer's HPC And AI Battle Against The Coronavirus – The Next Platform

Normally, supercomputers installed at academic and national laboratories get configured once, acquired as quickly as possible before the money runs out, installed and tested, qualified for use, and put to work for a four- or five-year, or possibly longer, tour of duty. It is a rare machine that is upgraded even once, much less a few times.

But that is not the case with the Corona system at Lawrence Livermore National Laboratory, which was commissioned in 2017, when North America had a total solar eclipse, hence its nickname. While this machine, procured under the Commodity Technology Systems (CTS-1) contract not only to do useful work but to assess the CPU and GPU architectures provided by AMD, was not named after the coronavirus pandemic that is now spreading around the Earth, it is being upgraded one more time to be put into service as a weapon against the SARS-CoV-2 virus, which causes the COVID-19 illness that has infected at least 2.75 million people (confirmed by test, with the number very likely being higher) and killed at least 193,000 people worldwide.

The Corona system was built by Penguin Computing, which has a long-standing relationship with Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories (the so-called Tri-Labs), which are part of the US Department of Energy and coordinate on their supercomputer procurements. The initial Corona machine installed in 2018 had 164 compute nodes, each equipped with a pair of Naples Epyc 7401 processors, which have 24 cores each running at 2 GHz with an all-core turbo boost of 2.8 GHz. The Penguin Tundra Extreme servers that comprise this cluster have 256 GB of main memory and 1.6 TB of PCI-Express flash. When the machine was installed in November 2018, half of the nodes were equipped with four of AMD's Radeon Instinct MI25 GPU accelerators, which had 16 GB of HBM2 memory each, along with 768 gigaflops of FP64 performance, 12.29 teraflops of FP32 performance, and 24.6 teraflops of FP16 performance. The 7,872 CPU cores in the system delivered 126 teraflops at FP64 double precision all by themselves, and the Radeon Instinct MI25 GPU accelerators added another 251.9 teraflops at FP64 double precision. The single-precision performance of the machine was obviously much higher, at 4.28 petaflops across both the CPUs and GPUs. Interestingly, this machine was equipped with 200 Gb/sec HDR InfiniBand switching from Mellanox Technologies, which was obviously one of the earliest installations of this switching speed.

In November last year, just before the coronavirus outbreak (or at least we think that was before the outbreak; that may turn out not to be the case), AMD and Penguin worked out a deal to install four of the much more powerful Radeon Instinct MI60 GPU accelerators, based on the 7 nanometer Vega GPUs, in the 82 nodes in the system that didn't already have GPU accelerators in them. The Radeon Instinct MI60 has 32 GB of HBM2 memory, and delivers 6.6 teraflops of FP64 performance, 13.3 teraflops of FP32 performance, and 26.5 teraflops of FP16 performance. Now the machine has 8.9 petaflops of FP32 performance and 2.54 petaflops of FP64 performance, which is a much more balanced 64-bit to 32-bit ratio, and it makes these nodes more useful for certain kinds of HPC and AI workloads. That turns out to be very important to Lawrence Livermore in its fight against the COVID-19 disease.
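
The quoted FP64 figures check out on the back of an envelope, using only the node counts and per-device numbers given above:

```python
# Node counts and per-device FP64 figures quoted in the article.
gpus_per_node = 4
mi25_nodes, mi25_fp64_tf = 82, 0.768   # 768 gigaflops per MI25
mi60_nodes, mi60_fp64_tf = 82, 6.6     # per MI60 after the upgrade
cpu_fp64_tf = 126.0                    # all 7,872 Epyc cores combined

gpu_fp64_tf = (mi25_nodes * gpus_per_node * mi25_fp64_tf
               + mi60_nodes * gpus_per_node * mi60_fp64_tf)
total_pf = (cpu_fp64_tf + gpu_fp64_tf) / 1000.0
print(f"GPU FP64: {gpu_fp64_tf:.1f} TF; system total: {total_pf:.2f} PF")
# -> GPU FP64: 2416.7 TF; system total: 2.54 PF, matching the article
```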

To find out more about how the Corona system and others are being deployed in the fight against COVID-19, and how HPC and AI workloads are being intertwined in that fight, we talked to Jim Brase, deputy associate director for data science at Lawrence Livermore.

Timothy Prickett Morgan: It is kind of weird that this machine was called Corona. Foreshadowing is how you tell the good literature from the cheap stuff. The doubling of performance that just happened late last year for this machine could not have come at a better time.

Jim Brase: It pretty much doubles the overall floating point performance of the machine, which is great because what we are mainly running on Corona is both the molecular dynamics calculations of various viral and human protein components and then machine learning algorithms for both predictive models and design optimization.

TPM: That's a lot more oomph. So what specifically are you doing with it in the fight against COVID-19?

Jim Brase: There are two basic things we're doing as part of the COVID-19 response, and this machine is almost entirely dedicated to this, although several of our other clusters at Lawrence Livermore are involved as well.

We have teams that are doing both antibody and vaccine design. They are mainly focused on therapeutic antibodies right now. They are basically designing proteins that will interact with the virus or with the way the virus interacts with human cells. That involves hypothesizing different protein structures and computing what those structures actually look like in detail, then computing, using molecular dynamics, the interaction between those protein structures and the viral proteins or the viral and human cell interactions.

With this machine, we do this iteratively to basically design a set of proteins. We have a bunch of metrics that we try to optimize (binding strength, the stability of the binding, stuff like that) and then we do detailed molecular dynamics calculations to figure out the effective energy of those binding events. These metrics determine the quality of the potential antibody or vaccine that we design.

TPM: To wildly oversimplify, this SARS-CoV-2 virus is a ball of fat with some spikes on it that wreaks havoc as it replicates using our cells as raw material. This is a fairly complicated molecule at some level. What are we trying to do? Stick goo to it to try to keep it from replicating or tear it apart or dissolve it?

Jim Brase: In the case of antibodies, which is what we're mostly focusing on right now, we are actually designing a protein that will bind to some part of the virus, and because of that the virus then changes its shape, and the change in shape means it will not be able to function. These are little molecular machines that depend on their shape to do things.

TPM: There's not something that will physically go in and tear it apart like a white blood cell eats stuff.

Jim Brase: No. That's generally done by biology, which comes in after this and cleans up. What we are trying to do is what we call neutralizing antibodies. They go in and bind, and then the virus can't do its job anymore.

TPM: And just for a reference, what is the difference between a vaccine and an antibody?

Jim Brase: In some sense, they are the opposite of each other. With a vaccine, we are putting in a protein that actually looks like the virus but it doesn't make you sick. It stimulates the human immune system to create its own antibodies to combat that virus. And those antibodies produced by the body do exactly the same thing we were just talking about. Producing antibodies directly is faster, but the effect doesn't last. So it is more of a medical treatment for somebody who is already sick.

TPM: I was alarmed to learn that for certain coronaviruses, immunity doesn't really last very long. With the common cold, the reason we get them is not just because they change every year, but because if you didn't have a bad version of it, you don't generate a lot of antibodies and therefore you are susceptible. If you have a very severe cold, you generate antibodies and they last for a year or two. But then you're done and your body stops looking for that fight.

Jim Brase: The immune system is very complicated, and for some things it creates antibodies that remember them for a long time. For others, it's much shorter. It's sort of a combination of what we call the antigen (the thing, the virus or whatever, that triggers it) and the immune system's memory function together that causes the immunity not to last as long. It's not well understood at this point.

TPM: What are the programs you're using to do the antibody and protein synthesis?

Jim Brase: We are using a variety of programs. We use GROMACS, we use NAMD, we use OpenMM stuff. And then we have some specialized homegrown codes that we use as well that operate on the data coming from these programs. But it's mostly the general, open source molecular mechanics and molecular dynamics codes.

TPM: Let's contrast this COVID-19 effort with something like the SARS outbreak in 2003. Say you had the same problem. Could you have even done the things you are doing today with SARS-CoV-2 back then with SARS? Was it even possible to design proteins and do enough of them to actually have an impact, to get the antibody therapy or develop the vaccine?

Jim Brase: A decade ago, we could do single calculations. We could do them one, two, three. But what we couldn't do was iterate it as a design optimization. Now we can run enough of these fast enough that we can make this part of an actual design process, where we are computing these metrics, then adjusting the molecules. And we have machine learning approaches now that we didn't have ten years ago that allow us to hypothesize new molecules, and then we run the detailed physics calculations against this, and we do that over and over and over.

TPM: So not only do you have a specialized homegrown code that takes the output of these molecular dynamics programs, but you are using machine learning as a front end as well.

Jim Brase: We use machine learning in two places. Even with these machines (and we are using our whole spectrum of systems on this effort) we still can't do enough molecular dynamics calculations, particularly the detailed molecular dynamics that we are talking about here. What does the new hardware allow us to do? It basically allows us to do a higher percentage of detailed molecular dynamics calculations, which give us better answers, as opposed to more approximate calculations. So you can decrease the granularity size, and we can compute whole molecular dynamics trajectories as opposed to approximate free energy calculations. It allows us to go deeper on the calculations, and do more of those. So ultimately, we get better answers.

But even with these new machines, we still can't do enough. If you think about the design space on, say, a protein that is a few hundred amino acids in length, and at each of those positions you can put in 20 different amino acids, you are looking at on the order of 20^200 possible proteins you could evaluate by brute force. You can't do that.

So we try to be smart about how we select where those simulations are done in that space, based on what we are seeing. And then we use the molecular dynamics to generate datasets that we then train machine learning models on so that we are basically doing very smart interpolation in those datasets. We are combining the best of both worlds and using the physics-based molecular dynamics to generate data that we use to train these machine learning algorithms, which allows us to then fill in a lot of the rest of the space because those can run very, very fast.
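
The workflow Brase describes, expensive physics calculations labeling a small sample and a fast learned model interpolating the rest of the space, can be sketched in a few lines. The energy function and model choice here are stand-ins for illustration, not Lawrence Livermore's actual codes.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def expensive_md_energy(x):
    """Stand-in for a costly molecular dynamics binding-energy run."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Label a small sample of design points with the "physics code"...
train_x = rng.uniform(-1, 1, size=(200, 2))
train_y = expensive_md_energy(train_x)

# ...train a fast surrogate, then screen a huge pool of candidates cheaply.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(train_x, train_y)

candidates = rng.uniform(-1, 1, size=(100_000, 2))
scores = surrogate.predict(candidates)
print(candidates[np.argsort(scores)[:5]])  # lowest predicted energies
```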

TPM: You couldn't do all of that stuff ten years ago? And SARS did not create the same level of outbreak that SARS-CoV-2 has done.

Jim Brase: No, these are all fairly new ideas.

TPM: So, in a sense, we are lucky. We have the resources at a time when we need them most. Did you have the code all ready to go for this? Were you already working on this kind of stuff and then COVID-19 happened or did you guys just whip up these programs?

Jim Brase: No, no, no, no. We've been working on this kind of stuff for a few years.

TPM: Well, thank you. I'd like to personally thank you.

Jim Brase: It has been an interesting development, in both the biology space and the physics space, and those two groups have set up a feedback loop back and forth. I have been running a consortium called Advanced Therapeutic Opportunities in Medicine, or ATOM for short, to do just this kind of stuff for the last four years. It started up as part of the Cancer Moonshot in 2016 and focused on accelerating cancer therapeutics using the same kinds of ideas, where we are using machine learning models to predict properties, using mechanistic simulations like molecular dynamics combined with data, but then also using it the other way around. We also use machine learning to actually hypothesize new molecules: given a set of molecules that we have right now, whose computed properties aren't quite what we want, how do we tweak those molecules a little bit to adjust their properties in the directions that we want?

The problem with this approach is scale. Molecules are atoms that are bonded with each other. You could just take out an atom, add another atom, change a bond type, or something. The problem with that is that every time you do that randomly, you almost always get an illegal molecule. So we train these machine learning algorithms (these are generative models) to actually be able to generate legal molecules that are close to a set of molecules that we have, but a little bit different and with properties that are probably a little bit closer to what we want. And so that allows us to smoothly adjust the molecular designs to move towards the optimization targets that we want. If you think about optimization, what you want are things with smooth derivatives. And if you do this in sort of the discrete atom-bond space, you don't have smooth derivatives. But if you do it in what we call learned latent spaces that we get from generative models, then you can actually have a smooth response in terms of the molecular properties. And that's what we want for optimization.
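
The latent-space idea can be illustrated with a toy example: optimize by taking small steps in a continuous latent vector and decoding each step, rather than mutating discrete atoms and bonds. The linear decoder and property function below are invented for the sketch; a real system would use a trained generative model such as a variational autoencoder.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3))  # linear "decoder": 3-d latent -> 8-d design

def decode(z):
    return W @ z  # a trained generative model would emit a valid molecule

def property_score(design):
    return -np.sum((design - 0.5) ** 2)  # invented target property

# Hill-climb with small latent steps: every step decodes to a nearby,
# valid design, which is the smoothness benefit Brase describes.
z = rng.normal(size=3)
for _ in range(500):
    candidate = z + 0.05 * rng.normal(size=3)
    if property_score(decode(candidate)) > property_score(decode(z)):
        z = candidate

print(round(property_score(decode(z)), 3))
```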

The other part of the machine learning story here is these new types of generative models: variational autoencoders, generative adversarial models, the things you hear about that generate fake data and so on. We're actually using those very productively to imagine new types of molecules with the kinds of properties that we want for this. And so that's something we were absolutely doing before COVID-19 hit. We have taken projects like the ATOM cancer project and other work we've been doing with DARPA and other places focused on different diseases, and refocused those on COVID-19.

One other thing I wanted to mention is that we haven't just been applying this to biology. A lot of these ideas are coming out of physics applications. One of our big things at Lawrence Livermore is laser fusion. We have 192 huge lasers at the National Ignition Facility to try to create fusion in a small hydrogen-deuterium target. There are a lot of design parameters that go into that. The targets are really complex. We are using the same approach. We're running mechanistic simulations of the performance of those targets, and we are then improving those with real data using machine learning. So we now have a hybrid model that has physics in it and machine learning data models, and we are using that to optimize the designs of the laser fusion target. So that's led us to a whole new set of approaches to fusion energy.

Those same methods are actually the things we're also applying to molecular design for medicines. And the two actually go back and forth and sort of feed on each other and support each other. In the last few weeks, some of the teams that have been working on the physics applications have actually jumped over onto the biology side and are using some of the same sort of complex workflows, which they developed for physics on these big parallel machines, on some of the biology applications, helping to speed up the applications on this new hardware that's coming in. So it is a really nice synergy going back and forth.

TPM: I realize that machine learning software uses the GPUs for training and inference, but is the molecular dynamics software using the GPUs, too?

Jim Brase: All of the molecular dynamics software has been set up to use GPUs. The code actually maps pretty naturally onto the GPU.

TPM: Are you using the CUDA variants of the molecular dynamics software, and I presume that it is using the Radeon Open Compute, or ROCm, stack from AMD to translate that code so it can run on the Radeon Instinct accelerators?

Jim Brase: There has been some work to do, but it works. It's getting to be pretty solid now. That's one of the reasons we wanted to jump into the AMD technology pretty early, because, you know, any time you do first-in-kind machines it's not always completely smooth sailing all the way.

TPM: It's not like Lawrence Livermore has a history of using novel designs for supercomputers. [Laughter]

Jim Brase: We seldom work with machines that are not Serial 00001 or Serial 00002.

TPM: What's the machine learning stack you use? I presume it is TensorFlow.

Jim Brase: We use TensorFlow extensively. We use PyTorch extensively. We work with the DeepChem group at Stanford University that does an open chemistry package built on TensorFlow as well.

TPM: If you could fire up an exascale machine today, how much would it help in the fight against COVID-19?

Jim Brase: It would help a lot. There's so much to do.

I think we need to show the benefits of computing for drug design, and we are concretely doing that now. Four years ago, when we started up ATOM, everybody thought this was nuts: the general idea that we could lead with computing rather than experiment, and do the experiments to focus on validating the computational models rather than the other way around. Everybody thought we were nuts. As you know, with the growth of data, the growth of machine learning capabilities, more accessibility to sophisticated molecular dynamics, and so on, it's much more accepted that computing is a big part of this. But we still have a long way to go on this.

The fact is, machine learning is not magic. It's a fancy interpolator. You don't get anything new out of it. With the physics codes, you actually get something new out of it. So the physics codes are really the foundation of this. You supplement them with experimental data, because they're not necessarily right, either. And then you use the machine learning on top of all that to fill in the gaps, because you haven't been able to sample that huge chemical and protein space adequately to really understand everything at either the data level or the mechanistic level.

So that's how I think of it. Data is truth, sort of, and what you also learn about data is that it is not always the same as you go through this. But data is the foundation. Mechanistic modeling allows us to fill in where we just can't measure enough data: it is too expensive, it takes too long, and so on. We fill in with mechanistic modeling, and then above that we fill in with machine learning. We have this stack of experimental truth, mechanistic simulation that incorporates all the physics and chemistry we can, and then machine learning to interpolate in those spaces to support the design operation.

For COVID-19, there are a lot of groups doing vaccine designs. Some of them are using traditional experimental approaches, and they are making progress. Some of them are doing computational designs, and that includes the national labs. We've got 35 designs done and we are experimentally validating those now, and seeing where we are with them. It will generally take two to three iterations of design, then experiment, and then adjusting the designs back and forth. And we're in the first round of that right now.

One thing we're all doing, at least on the public side of this, is putting all this data out there openly. So the molecular designs that we've proposed are openly released. Then the validation data that we are getting on those will be openly released. This is so our group, working with other lab groups, university groups, and some of the companies doing this COVID-19 research, can contribute. We are hoping that by being able to look at all the data that all these groups are generating, we can learn faster how to narrow in on the vaccine designs and the antibody designs that will ultimately work.

Continued here:
One Supercomputers HPC And AI Battle Against The Coronavirus - The Next Platform

‘Err On The Side Of Patient Care’: Doctors Turn To Untested Machine Learning To Monitor Virus – Kaiser Health News

Physicians are prematurely relying on Epic's deterioration index, saying they're unable to wait for a validation process that can take months to years. The artificial intelligence gives them a snapshot of a patient's illness and helps them determine who needs more careful monitoring. News on technology is from Verily, Google, MIT, Livongo and more, as well.

Stat: AI Used To Predict Covid-19 Patients' Decline Before Proven To Work. Dozens of hospitals across the country are using an artificial intelligence system created by Epic, the big electronic health record vendor, to predict which Covid-19 patients will become critically ill, even as many are struggling to validate the tool's effectiveness on those with the new disease. The rapid uptake of Epic's deterioration index is a sign of the challenges imposed by the pandemic: Normally hospitals would take time to test the tool on hundreds of patients, refine the algorithm underlying it, and then adjust care practices to implement it in their clinics. Covid-19 is not giving them that luxury. (Ross, 4/24)

Modern Healthcare: Verily, Google Cloud Develop COVID-19 Chatbot For Hospitals. Google's sister company Verily Life Sciences has joined the mix of companies offering COVID-19 screening tools that hospitals can add to their websites. The screener, called the COVID-19 Pathfinder, takes the form of a chatbot or voicebot, essentially personified computer programs that can instant-message or speak to human users in plain English. (Cohen, 4/23)

Boston Globe: Tech From MIT May Allow Caregivers To Monitor Coronavirus Patients From A Distance. A product developed at the Massachusetts Institute of Technology is being used to remotely monitor patients with COVID-19, using wireless signals to detect breathing patterns of people who do not require hospitalization but who must be watched closely to ensure their conditions remain stable. The device, developed at MIT's Computer Science and Artificial Intelligence Laboratory by professor Dina Katabi and her colleagues, could in some situations lower the risk of caregivers becoming infected while treating patients with the coronavirus. (Rosen, 4/23)

Stat: A Gulf Emerges In Health Tech: Some Companies Surge, Others Have Layoffs. You might expect them to be pandemic-proof: They're the companies offering glimpses of the future in which you don't have to go to the doctor's office, ones that would seem to be insulated from a crisis in which people aren't leaving their homes. Yet there's a stark divide emerging among the companies providing high-demand virtual health care, triage, and testing services. While some are hiring up and seeing their stock prices soar, others are furloughing and laying off their workers. (Robbins and Brodwin, 4/24)

Go here to read the rest:
'Err On The Side Of Patient Care': Doctors Turn To Untested Machine Learning To Monitor Virus - Kaiser Health News

Automation, AI, and ML: The Heroes in the World of Payment Fraud Detection – EnterpriseTalk

How are organizations leveraging AI to track fraudulent activity (for example, in the financial industry), and what tools are available to enterprises right now?

Machine Learning is not new in the world of payment fraud. In fact, one of the pioneers of Machine Learning is Professor Leon Cooper, Director of Brown University's Centre for Neural Science. The Centre was founded in 1973 to study animal nervous systems and the human brain. However, if you follow his career, Dr. Cooper's machine learning technology was adapted for spotting fraud on credit cards and is still used today for identifying payment fraud within many financial institutions around the world.

Machine learning technologies improve when they are presented with more and more data. Since there is a lot of payment data around today, payment fraud prevention has become an excellent use case for AI. To date, machine learning technologies have been used mainly by banks. Still, today more and more merchants are taking advantage of this technology to help automate fraud detection, including many retailers and telecommunications companies.

What are the interesting developments in this space for enterprises?

There is a lot of information on how machine learning is helping to understand human behavior and, more specifically, false-positive detection. However, it is our view that there is not enough focus on how automation could benefit the whole end-to-end process, particularly within day-to-day fraud management business processes.

Until now, the fraud detection industry has focused on detecting fraud reactively, but it has not focused on proactively evaluating the impact of automation on the whole end-to-end fraud management process. Clearly, the interdependencies between these two activity streams are significant, so the question remains why fraud prevention suppliers aren't considering both.

Fraud is increasing, so at what point do we recognize that the approach of throwing budget at the problem and increasing the number of analysts in our teams is not working, and that we need to consider automating more of the process? Machines don't steal data, so why are the manual processes and interventions not attracting more attention?

It isnt a stretch to imagine most of the fraud risk strategy process becoming automated. Instead of the expanding teams of today performing the same manual task continually, those same staff members could be used to spot enhancements in customer insight. This would enable analysts to thoroughly investigate complex fraud patterns, which a machine has not identified, or to assist in other tasks outside of risk management, which provide added business value.

Process automation is continuing to innovate and provide increased efficiency and profit gains in the places it's implemented. The automation revolution isn't coming; it's here.

What are the major concerns?

One major concern is the lack of urgency in adopting new ways of working: there is a need to be more agile and innovative to stop the fraudsters from continuing to win. We need to act fast and innovate, but many organizations are struggling to keep up, and the fraudsters are winning.

The use cases for machine learning and AI are well defined, with big data sets and so on, but machine learning alone will not fix poor data management processes. Machines don't steal data. People do.

With the number of digital payments being made across the globe increasing dramatically, how can organizations ensure maximum sales conversion and payment acceptance, whilst mitigating any risk exposure?

Strategy alignment for taking digital payments is critical. The more organizations can operate holistically and not get caught out by silos and operational gaps, the better. Put simply: if key stakeholders in both the sales and marketing and risk teams are working to the same set of key performance indicators (KPIs), then mistakes will be mitigated. Many issues arise due to operational gaps, and those gaps will be exploited by the highly sophisticated and technically advanced modern-day fraudster.

The reality is that technology is accelerating the convergence of business activities. Managing that convergence, and adapting your organization to ensure it remains competitive, becomes more and more important. Successful organizations with a competitive future will continue to ensure maximum sales conversions and payment acceptance, whilst mitigating any risk exposure, by exploiting best-of-breed technology as much as possible.

Excerpt from:
Automation, AI, and ML The Heroes in the World of Payment Fraud Detection - EnterpriseTalk

The Dell EMC PowerEdge R7525 Saved Time During Machine Learning Preparation Tasks and Achieved Faster Image Processing Than an HPE ProLiant DL380…

Principled Technologies (PT) ran analytics and synthetic, containerized workloads on a ~$40K Dell EMC PowerEdge R7525 and a similarly priced HPE ProLiant DL380 Gen10 to gauge performance and performance/cost ratio.

To explore the performance on certain machine learning tasks of a ~$40K Dell EMC PowerEdge R7525 server powered by AMD EPYC 7502 processors, the experts at PT set up two testbeds and compared its performance results to those of a similarly priced HPE ProLiant DL380 Gen10 powered by Intel Xeon Gold 6240 processors.

The first study, "Finish machine learning preparation tasks on Kubernetes containers in less time with the Dell EMC PowerEdge R7525," utilizes a workload that emulates simple image processing tasks that a company might run in the preparation phase of machine learning.

According to the first study, we found that the Dell EMC server:
- Processed 3.3 million images in 55.8% less time
- Processed 2.26x the images each second
- Had 2.32x the value in terms of image processing rate vs. hardware cost

The second study, "Get better k-means analytics workload performance for your money with the Dell EMC PowerEdge R7525," utilizes a learning algorithm used to mimic data mining that a company might use to improve the customer experience or prevent fraud.

According to the second study, we found that the Dell EMC solution:
- Completed a k-means clustering workload in 40 percent less time
- Processed 67 percent more data per second
- Carried a 74 percent better performance/cost ratio in terms of data processing performance vs. hardware price
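
For readers who want to see how a performance-per-dollar claim of this kind is computed, here is a quick sketch. The throughput ratio comes from the study's 67 percent figure; the dollar amounts are placeholders near the ~$40K mark, not the exact tested configurations.

```python
dell_throughput, hpe_throughput = 1.67, 1.00  # 67% more data per second
dell_price, hpe_price = 38_500.0, 40_000.0    # assumed prices near $40K

advantage = ((dell_throughput / dell_price) /
             (hpe_throughput / hpe_price) - 1) * 100
print(f"performance/cost advantage: {advantage:.0f}%")  # ~74% with these inputs
```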

To explore the results PT found when comparing the two current-gen ~$40K server solutions, read the Kubernetes study here facts.pt/rfcwex2 and the k-means study here facts.pt/0jyo64h.

About Principled Technologies, Inc.
Principled Technologies, Inc. is the leading provider of technology marketing and learning & development services.

Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit http://www.principledtechnologies.com.

Company Contact
Principled Technologies, Inc.
1007 Slater Road, Suite #300
Durham, NC 27703
press@principledtechnologies.com

See the article here:
The Dell EMC PowerEdge R7525 Saved Time During Machine Learning Preparation Tasks and Achieved Faster Image Processing Than a HPE ProLiant DL380...

The NBA has a chemistry problem – SB Nation

The NBA is a league of change. Recent roster turnover has been spurred by a number of factors, from an inflated salary cap and shorter, more exorbitant contracts, to restless owners, to star players progressively embracing their own power. Teams have been forced, at breakneck speed, to become comfortable with being uncomfortable.

Before the NBA went on hiatus due to the coronavirus pandemic, chaos was its new normal: compelling, delightful and anxiety-inducing. But the constant shuffle also sparked an existential question among hundreds of affected players, coaches and front office executives: How can chemistry be fostered in an increasingly erratic era of impatience, load management, reduced practice time and youthful inexperience?

"Every year you have six new teammates," Houston Rockets guard Austin Rivers said in an interview with SB Nation. "It's like gaw-lly! In some ways you wish that would stop."

"It's a new NBA, man. Guys are playing on a new team every year now, and it has nothing to do with how good of a player you are, it's just how the NBA is. I have teammates who've played on eight, nine teams. I mean, that's fucking nuts. I don't ever want to go through something like that." (Rivers is 27 years old, and on his fourth team in seven NBA seasons.)

Over the past six months, dozens of players, coaches and executives across the NBA spoke with SB Nation about the state of the league's chemistry, and why creating cohesiveness now is more difficult and demanding than ever before. Their responses sketched a blurry future for the league.

"It's amazing how fast players change in today's NBA," Indiana Pacers general manager Chad Buchanan said. "From when I got over here two years ago, Myles [Turner] is the only player who's still here."

Last summer, nearly half of the league's talent pool swapped jerseys. Seemingly every roster in the league was forced to learn complicated new personality quirks and on-court tendencies. Honed locker room dynamics and hierarchies changed dramatically.

Chicago Bulls forward Thaddeus Young has played for four teams in the last six years after spending his first seven with the Philadelphia 76ers. Few people know better just how precious chemistry can be.

"With how the salary cap is going, teams are not locking themselves into long-term deals anymore, where they have four to six guys on four-year deals," Young said. "It's definitely tough because you don't know each other. The communication is gonna be off. The teams that you came from before, you might be driving the basketball and you might be used to a guy being in the corner, and that guy might not be in the corner."

The NBA may be long past being able to reverse the course of roster turnover, but teams are doing their best to mitigate any downsides. The teams that have done the best job tend to think of chemistry in two buckets: personal and performance. The former contains how players interact away from the game, and the latter contains what happens on the court. However, personal chemistry often informs performance, and vice versa. Once players are comfortable off the court, their on-court relationship improves.

Take Los Angeles Lakers forward Jared Dudley, for example.

"When I got here I'd turn the ball over throwing to our centers because they expected a lob," Dudley said. "I don't really throw lobs, I'm more of a bounce passer."

Dudley solved his problem by initiating conversations with LA's big men, verbalizing his own in-game habits so that everyone could get on the same page. Not all NBA players feel so comfortable expressing themselves, however, especially when an on-court situation is more complex than what to do on basic pick-and-rolls.

"When the personal chemistry exists, the performance chemistry is often very easy, because the performance chemistry is sometimes a function of hard conversations," one Western Conference GM said. "The personal chemistry allows a guy to say, 'Hey man, in the second quarter last night there were like four straight defensive possessions where four of us were back in transition and you weren't. You really put a ton of pressure on us to cover a five-on-four when you were lobbying the officials for a call. It took you forever to get off the deck. Come on, man.'"

Over the past 20 years, no organization has been more conscious of team chemistry than San Antonio. The Spurs are also far and away the modern era's most influential organization: Nearly one third of the league's rival head coaches and front office executives can be traced back to head coach Gregg Popovich.

Drafting multiple Hall of Famers undoubtedly factored into the Spurs' success, but their efforts to maintain an open atmosphere for stars and role players alike, one that obsessed over values of tolerance, respect and empathy, also separated them from everyone else.

However, San Antonio's year-to-year continuity is also becoming progressively rarer, if not extinct. When Chicago Bulls head coach Jim Boylen was a Spurs assistant during the 2014-15 season, they had brought back 14 members of the previous year's 15-man roster.

"And of those guys, a bunch of them had been together for years," Boylen said. "Now there's approximately 6.5 new guys per team. That's unheard of."

To ignore San Antonio's sprawling influence would be like praising observational comedy and never once mentioning Jerry Seinfeld. But even the Spurs are vulnerable in a league where turnover is the status quo. Prior to the 2018-19 season, they traded Kawhi Leonard and Danny Green, and lost in the first round of the playoffs for the second year in a row. Prior to this season's hiatus, they were on track for their first losing record in 21 years.

"The Spurs don't have an advantage anymore," Dudley said. "We all have a disadvantage. Now it's who has the most talent. Talent is gonna win out. Talent and vets."

In 2012, James Tarlow was an economics student at the University of Oregon when he presented a paper titled "Experience and Winning in the National Basketball Association" at the MIT Sloan Sports Analytics Conference. Tarlow wanted to know how roster continuity relates to team chemistry, so he pulled data from 804 NBA seasons played by 30 franchises between 1979 and 2008. He defined chemistry as the number of years the five players playing the most minutes during the regular season have been on their current team with one another.

"I got an actual measurement of how important [chemistry] is," Tarlow said in a conversation with SB Nation in March. "And it's pretty dang important. If you keep your team together it's like a third of a win for a year, which, people don't appreciate that and it doesn't seem like much, but if you have a team that stuck together for several years, that turns into another game or two. That's going to get you into another round in the playoffs."
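
Tarlow's measure is simple enough to compute directly. Here is a minimal sketch under one reasonable reading of his definition, in which the continuity of the top-five-minutes lineup is bounded by its newest member's tenure. The data structures and numbers are invented.

```python
def chemistry_years(minutes, tenure):
    """Seasons the five highest-minute players have shared the roster."""
    top_five = sorted(minutes, key=minutes.get, reverse=True)[:5]
    return min(tenure[p] for p in top_five)  # bounded by the newest arrival

minutes = {"A": 2900, "B": 2700, "C": 2500, "D": 2300, "E": 2100, "F": 900}
tenure = {"A": 6, "B": 4, "C": 3, "D": 3, "E": 2, "F": 1}
print(chemistry_years(minutes, tenure))  # -> 2: as old as its newest member
```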

Bill Russell once wrote, "There is no time in basketball to think: 'This has happened; this is what I must do next.' In the amount of time it takes to think through that semicolon, it is already too late."

It's an intuitive idea: The longer we're around people, the better we know how they're going to behave under certain circumstances. Just think of the short-hand forms of communication you've established with your closest friends, family members and coworkers. Those subtle gestures and glances are especially helpful in sports, where a split-second miscommunication can be the difference between winning and losing.

"You go back to those San Antonio days, the winks and blinks and the nuances [where Tim] Duncan would find Tony [Parker] on a backdoor, or Manu [Ginobili] would find Timmy on a lob, that evolved over time, and lots of times [time] isn't afforded," said Philadelphia 76ers head coach Brett Brown, who spent nine years as a Spurs assistant. "You need time to have that sophisticated camaraderie, gut feel, instances on a court that require split-the-moment decisions."

Continuity by itself doesn't lead to winning, of course. In some cases, it might only extend mediocrity. And if winning a championship is an organization's goal, then it should first pursue star power. But continuity is a boon to coaches, who can implement more complex strategies if they're able to retain a core group of players year after year.

"Right now with the influx of new players, you're having to really keep your playbook and your schemes at a basic level because you are teaching more," Charlotte Hornets head coach James Borrego said. "You're just starting almost at a ground level every single year in a lot of different ways, where the teams that have had success for years and years, they're building on every single year."

The lack of in-season practice hours only compounds coaches' frustrations. With shorter timeframes to fold new players into their systems and cultures, coaches around the league feel they need to adapt quicker than they ever have before.

As Boston Celtics head coach Brad Stevens joked, "We get three weeks to get ready for a season, then we never practice again."

Orlando Magic head coach Steve Clifford believes that the league office did the right thing by limiting back-to-backs across every team's schedule. During his first two years as head coach of the Charlotte Hornets, they played 22 back-to-backs each season, while only 11 were on the docket this season for Orlando. But the schedule shift has hurt in unforeseen ways.

"When you play a back-to-back you usually get two days off, most times, right?" Clifford said. "So you give them a day off and then when you come in you can practice. They're rested, you bring referees in. You practice. You actually practice.

"You never actually practice anymore like when I first got in the league, everybody had a million plays, and you had to know the plays and stuff, and now if you do it's an advantage but people don't look at it like that."

A side effect of so much player movement may be a simplification of the product. NBA teams have evolved to emphasize an up-and-down, free-flowing style of play that is largely the byproduct of an analytical revolution to prioritize threes, layups and free throws. Modern basketball is filled with pick-and-roll heavy offensive actions that dont require the same on-court intimacy as a stockpile of elaborate set plays.

"My gut tells me that roster turnover is what's causing the thinning of playbooks," one Western Conference general manager said. "And the thinning of playbooks is what's causing this standardization of playing style."

Players feel it, too. "Most teams don't do anything. Really it's just take the ball out the basket, pick-and-roll, and run," Rivers said. "The coaches are really here to guide you now. It's crazy. It's more ATOs (after time-out plays) and out of bounds, and late clock, fourth quarter, that's when coaching really comes in play. That's all we go over in shootaround. Most of our stuff involves defense because our offense is fucking ridiculous, man. We don't really do anything on offense."

Even teams that have gone out of their way to maintain continuity (like Clifford's Magic, which returned 85 and 82 percent of their minutes over the past two seasons, respectively) are not immune to change, and all its myriad effects on strategy.

"The game is what they wanted it to be when they changed the rules, and the level of execution is still high, obviously," Clifford said. "It's not nearly the gameplanning league that it was even seven or eight years ago."

Sustained success is not possible without collaboration, and collaboration becomes habit when several contributors spend thousands of minutes battling together in the same system. The Golden State Warriors had the benefit of several superstars as they won three championships, but they also had an iron collective grasp of what they wanted to accomplish on every possession.

"I think there is a level of beauty that exists with the game that is tougher to reach with the turnover," Warriors head coach Steve Kerr said. "For example, if Draymond [Green] ever caught the ball in the pocket off of a high screen with Steph [Curry], Andre [Iguodala] knew exactly where to go and Draymond knew exactly where he was going to be. We didn't even have to practice it. And that's why you saw, frequently, either the lob to Andre along the baseline or Andre spaced out."

NBA teams have been thinking about how to manufacture chemistry for years, long before this accelerated era. None of these billion-dollar corporations can ever be sure how well their efforts actually work, however. Their adjustments have always been based on in-game progression, but success also depends on other obvious factors, like talent, injuries and dumb luck.

A difference between then and now is that players are driving roster decisions to a greater extent. They have much more leverage within organizations, and the biggest stars can take their talent elsewhere if the locker room doesn't jell quickly enough. For many teams, that means they have to proactively foster strong bonds among teammates by encouraging new and returning players to stick around the practice facility during the offseason.

"The summer time is big for us," Borrego said. "We can't demand it, but we encourage it. ... Guys can really settle down and connect, really understand their teammates, understand their coaches, and it's just a much more comfortable situation that allows for chemistry to be built and grown."

The Hornets also organize team dinners when they're on the road, a practice Borrego borrowed from his time in San Antonio. When asked if those dinners are mandatory, he gave a wry smile: "They are team dinners. Then there are others that happen organically on their own, and I want our players to do that. If they're doing it on their own that's even better than me organizing something."

As one former Spur told ESPN: "To take the time to slow down and truly dine with someone in this day and age (I'm talking a two- or three-hour dinner), you naturally connect on a different level than just on the court or in the locker room. It seems like a pretty obvious way to build team chemistry, but the tricky part is getting everyone to buy in and actually want to go. You combine amazing restaurants with an interesting group of teammates from a bunch of different countries and the result is some of the best memories I have from my career."

Of course, Popovich was also good at finding players he wanted at the table.

"The more you can stay together, the more the chemistry builds. But still, chemistry is more a function of the character of the players than it is anything else," Popovich said. "I always talk about getting over yourself. And if there's somebody in your organization that hasn't gotten over himself or herself, they're a pain in the ass and they make it harder for everybody else, because they can only feel good about their success. They can't be happy for somebody else's success. It has to be about them. If you don't have that, then nothing else is gonna help you have chemistry. You can't make it happen."

Still, every coach tries to get everyone on the same page. As players digest their new surroundings, it's important that everyone (coaches, players and executives) understands their expectations for one another. Rockets head coach Mike D'Antoni boiled his own approach down simply: "Don't ask somebody to do something they can't do. If you're gonna have to change a guy, you might not want to bring him in in the first place."

It is impossible for any team to keep all 15 players in the locker room happy at the same time; they're human beings who are all going through their own real-life issues. But a haphazard onboarding process will create headaches down the road. According to Buchanan, coaches have to be able to anticipate players' questions.

"'Why am I not getting to do this?' or 'Why am I not getting to play more?' or 'Why am I not playing with that guy?' or 'Why am I not starting?'" Buchanan said. "You try to get that communicated up front so the player knows what he's stepping into, because lots of times chemistry issues evolve from a lack of communication."

When general managers and coaches are unaware of loose frustrations, they risk one player venting to another, sowing animosity that does irreparable damage to the entire team. Left unattended, a team can spiral into soap opera.

"It's not difficult to create chemistry," Atlanta Hawks head coach Lloyd Pierce said. "It's more about sustaining it through the course of 82 games with so many ups and downs. Obviously [we had] some of those moments with John [Collins] being out and Kevin [Huerter's] injury. Roles start to shift and some guys weren't ready for it, so the frustration of that kicked in."

The Hawks have done two things to stave off that seemingly unavoidable discomfort. The first is an all-in dedication to how they play, in which everything revolves around pick-and-rolls with Trae Young and a rim-rolling big. By keeping the game plan simple, they can plug in pieces as needed.

"When Trae masters it and everybody else understands it, you know, you roll a little bit harder and you shake up a little bit better, and you slash a little bit better. That's who we will become," Pierce said. "Any big that comes to our roster [knows,] 'I can play here because I know I'm gonna get the ball at the rim.'"

The Hawks also have a breakfast club that Pierce took from his time as an assistant under Brown in Philadelphia. Every time the club meets, players stand up in front of their teammates and discuss something that matters to them. Earlier this year, Collins enlightened the room with a PowerPoint presentation about what it was like growing up in a military family. Huerter talked about growing up in upstate New York and losing high-school friends in a drunk-driving accident.

Former Hawks center Alex Len gave a particularly tender presentation last season that moved his head coach.

"You're barking at Alex Len. 'This fucking guy, he doesn't compete, he doesn't appreciate this,'" Pierce said. "Well, Alex Len talked about why he couldn't go to the Ukraine for the last seven years. 'Wow, I didn't know you had such a tough time with that. I'm over here fucking yelling at you for not rolling in the pick-and-roll. I don't know you're dealing with this Ukraine thing for the last seven years and not being able to go home and see your grandparents. My bad. I've got to get to know you a little bit better.'"

Three years ago, Kings owner Vivek Ranadive asked a communications coach named Steve Shenbaum to work with his team. In 1997, Shenbaum founded a company called Game On Nation, which helps corporate executives, military personnel and government employees in addition to professional sports teams.

Within the past decade, Shenbaum has been brought in by the Lakers, Trail Blazers, Nuggets, Cavaliers, Grizzlies and Mavericks, and agent Bill Duffy has asked Shenbaum to assist several of his clients, including Carmelo Anthony, Yao Ming and Greg Oden. He has more than 100 improvisational and conceptual exercises to help clients build self-awareness, selflessness, confidence and other traits that enable character development.

A favorite is called "last letter, first letter," in which two teammates have a conversation with one rule: as they take turns speaking, each must begin with the last letter of the other person's last word, and keep the conversation going. (If one player ends with "See you at the gym," the reply has to start with a word beginning with M.) The exercise forces both parties to listen, let one another finish and focus in ways they otherwise might not.

"My hope is I am planting seeds and empowering the players and the staff to take what they've experienced and run with it and multiply it," Shenbaum said. "I want them to see each other in another light."

Shenbaum has many telltale signs of good and bad chemistry, but a big one is how well veterans are buying in. In a league that's getting younger and younger, it's imperative that older players command respect in the locker room and impose a will to succeed.

Dudley believes veterans who might not even be in a rotation can earn their money by bringing everyone together in ways a coach or GM can't. During his one year in Brooklyn, he organized dinners, trips to the movies, parties and other events away from the game that involved the whole team.

"Then, when we were in film sessions and I would call them out, they took it as love and not criticism," Dudley said. "You're developing a relationship and then you can tell them, Spencer [Dinwiddie] or a guy who thinks he should play more, 'This is why you're not playing. This is your role for this team.' Rondae [Hollis-Jefferson], he got benched: 'Hey Rondae, for you to stay in the league, this is what you gotta do. On this team you're not a starter anymore but there's gonna be times they call on you.' He stayed ready."

Almost everyone interviewed for this story agreed that chemistry can't be forced. Players on contending teams have to go through organic hardships together before they can become comfortable enough for the difficult conversations that facilitate progress. Some teams don't believe it's necessary or appropriate to ask their players to spend more time together than they already do. They believe that players should figure out issues among themselves, and that a front office's biggest role is doing good background research on everybody it brings aboard. Coaches are there to take a team's disparate pieces and position them to succeed.

"I think everybody's budget on team meals now has skyrocketed league-wide because of the San Antonio Spurs," Miami Heat head coach Erik Spoelstra laughed. "The more important thing is getting a team on the same sheet of music about your style of play and an identity on both ends of the court, understanding what's important, what the standards are, the expectations, role clarity. These things fast-track, in my mind, chemistry. It's always nice if guys like each other and they go out to eat on the road, have dinner together. But that doesn't guarantee anything."

Spoelstra knew early on that Jimmy Butler's oft-misunderstood persona would have no trouble fitting in with several players on Miami's roster because they all shared the same sense of duty.

"I've noticed with Goran and Jimmy in particular, they have such a beautiful on-court chemistry because they're in their 30s, they're only about winning right now. They don't care about anything else. If you want to define on-court chemistry, in my mind it's how willing are you to help somebody else. And how willing and able are you to enjoy somebody else's success when it happens."

The character vs. talent debate isn't new among NBA decision-makers. But going forward, we may see teams value the former more than they have.

"If you get a group of four new players who you bring onto your team and they're all team-first, unselfish, competitive, self-motivated players, there's a decent chance that the chemistry is gonna have a chance to be good," Buchanan said. "Doing a ropes course, that's great in theory and may work for a business, but professional athletes, they develop chemistry by knowing they can trust each other because they've been together through tough times on the court."

Chemistry is worth deep investment, but perhaps it can be overanalyzed, too.

"We try to make the simple complicated at times," ESPN NBA analyst Jeff Van Gundy said. "Going out to dinner is a far different chemistry than playing chemistry. You read a lot about, 'We play paintball together.' Who gives a shit, you know?

"I don't believe it's the reason why a team is either good or not good. I think you've gotta get great players, and when you have them you gotta try and keep them."

There are endless ways to cultivate chemistry, but teams can still only guess at what will work in every situation. Statistics aren't a good guide. They can't quantify personality flaws or gauge emotional intelligence. When intuition is the best way to make a decision, some front-office executives lean too hard on what they can measure instead.

"I think a lot of these GMs, they really don't take [chemistry] into consideration," Utah Jazz center Ed Davis said. "They're starting to treat the NBA like 2K and more just looking at numbers instead of, are these two players going to get along? And I think you're gonna start seeing GMs lose their jobs."

Top-tier skill, athleticism and on-court awareness are very often the bottom line for NBA teams, but those still trying to crack chemistry's mysteries have good reason to believe they aren't running a fool's errand, as vulnerable as their circumstances may make them feel.

"There's something very powerful about newness," Shenbaum said. "And if you embrace it early, whether it's a new coach, five new players, the star left, if you embrace it early you can actually create a very authentic bond."

Defining chemistry can feel like trying to catch the wind. It is omnipresent and elusive at the same time. Until the NBA's best and brightest crack the formula, they will have to deal with increasing levels of uncertainty in what was already an uncertain business.

Time will tell if NBA teams ever learn how to overcome their mounting challenges, from the shifting ways teams are built to how on-court strategy is implemented to the quality of the game itself. But there's no denying that chemistry is a force multiplier, complex and intractable. And in an era of basketball that demands urgency more than ever, that fact can be frightening.

A lot of players and teams will continue to fail in familiar ways, well-laid plans crumbling because players, coaches and executives never understood each other on or off the court. Only now, they may not realize their mistakes until they find themselves starting all over. Again.

See more here:
The NBA has a chemistry problem - SB Nation

Inflate a balloon using chemistry in this at-home experiment – WZZM13.com

GRAND RAPIDS, Mich. - The old volcano experiment is a classic: using a model volcano, combine baking soda and vinegar to get an eruption. In this experiment, we'll use the product of that eruption to inflate a balloon.

INFLATABLE BALLOON WITH BAKING SODA AND VINEGAR

*Caution! Have an adult help mix the ingredients or this could get messy!

Items you'll need:

- An empty plastic bottle
- A balloon
- Baking soda
- Vinegar
- A funnel and a spoon

Procedure

1. Use the funnel and spoon to put two or three teaspoons of baking soda into the balloon.
2. Pour about half a cup of vinegar into the bottle.
3. Stretch the neck of the balloon over the mouth of the bottle, letting the balloon droop to the side so the baking soda stays put.
4. Lift the balloon upright so the baking soda falls into the vinegar.
5. Watch the fizzing reaction inflate the balloon.

How it works

Baking soda and vinegar react because baking soda is a base while vinegar contains an acid. The reaction breaks the ingredients down, producing carbon dioxide gas and water (plus a salt called sodium acetate). The gas builds up pressure inside the bottle and inflates the balloon.
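For the curious, the underlying reaction can be written as a simple balanced equation:

NaHCO3 (baking soda) + CH3COOH (acetic acid in vinegar) -> NaCH3COO (sodium acetate) + H2O (water) + CO2 (carbon dioxide)

As a rough sense of scale, assuming about a teaspoon (4 to 5 grams) of baking soda: baking soda's molar mass is 84 g/mol, so a teaspoon is roughly 0.05 moles, and the equation above releases one molecule of carbon dioxide for every molecule of baking soda that reacts. At room temperature, 0.05 moles of gas fills a little over a liter - plenty to puff up a small balloon. Household vinegar is only about 5 percent acetic acid, so it takes around a quarter cup of vinegar to react with each teaspoon of baking soda.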

RELATED: Let it rain! Fun rainbow Earth Day experiment for kids

RELATED: Here's how to make Oobleck slime at home with the kids

RELATED: Science experiments from a pro: The Bearded Science Guy shows off a fire tornado


Read the original here:
Inflate a balloon using chemistry in this at-home experiment - WZZM13.com