ScoreSense Leverages Machine Learning to Take Its Customer Experience to the Next Level – Yahoo Finance

One Technologies Partners with Arrikto to Uniquely Tailor its ScoreSense Consumer Credit Platform to Each Individual Customer

DALLAS, Jan. 30, 2020 /PRNewswire/ -- To provide customers with the most personalized credit experience possible, One Technologies, LLC has partnered with data management innovator Arrikto Inc. (https://www.arrikto.com/) to incorporate Machine Learning (ML) into its ScoreSense credit platform.


"To truly empower consumers to take control of their financial future, we must rely on insights from real datanot on assumptions and guesswork," said Halim Kucur, Chief Product Officer at One Technologies, LLC. The innovations we have introduced provide data-driven intelligence about customers' needs and wants before they know this information themselves."

"ScoreSense delivers state-of-the-art credit information through their ongoing investment in the most cutting-edge machine learning products the industry has to offer," said Constantinos Venetsanopoulos, Founder and CEO of Arrikto Inc. "Our partnership has been a big success because One Technologies aligns seamlessly with the most forward-looking developers in the ML space and understands the tremendous value of data for serving customers better."

ScoreSense (https://www.scoresense.com) serves as a one-stop digital resource where consumers can access credit scores and reports from all three main credit bureaus (TransUnion, Equifax, and Experian) and comprehensively pinpoint the factors that are most affecting their credit.

About One Technologies

One Technologies, LLC harnesses the power of technology, analytics and its people to create solutions that empower consumers to make more informed decisions about their financial lives. The firm's consumer credit products include ScoreSense, which enables members to seamlessly access, interact with, and understand their credit profiles from all three main bureaus using a single application. The ScoreSense platform is continually updated to give members deeper insights, personalized tools and one-on-one Customer Care support that can help them make the most sense of their credit.

One Technologies is headquartered in Dallas and was established in October 2000. For more information, please visit https://onetechnologies.net/.

Media Contact

Laura Marvin
JConnelly for One Technologies
646-922-7774
OT@jconnelly.com

View original content to download multimedia: http://www.prnewswire.com/news-releases/scoresense-leverages-machine-learning-to-take-its-customer-experience-to-the-next-level-300995934.html

SOURCE One Technologies, LLC


Iguazio Deployed by Payoneer to Prevent Fraud with Real-time Machine Learning – Yahoo Finance

Payoneer uses Iguazio to move from detection to prevention of fraud with predictive machine learning models served in real-time.

Iguazio, the data science platform for real-time machine learning applications, today announced that Payoneer, the digital payment platform empowering businesses around the world to grow globally, has selected Iguazio's platform to provide its 4 million customers with a safer payment experience. By deploying Iguazio, Payoneer moved from a reactive fraud detection method to proactive prevention with real-time machine learning and predictive analytics.

Payoneer overcomes the challenge of detecting fraud within complex networks with sophisticated algorithms that track multiple parameters, including account creation times and name changes. However, prior to using Iguazio, fraud was detected retroactively, meaning suspicious users could only be blocked after damage had already been done. Payoneer is now able to take the same sophisticated machine learning models built offline and serve them in real time against fresh data. This enables immediate prevention of fraud and money laundering, with predictive machine learning models continuously identifying suspicious patterns. The cooperation was facilitated by Belocal, a leading data and IT solution integrator for mid-size and enterprise companies.

"Weve tackled one of our most elusive challenges with real-time predictive models, making fraud attacks almost impossible on Payoneer" noted Yaron Weiss, VP Corporate Security and Global IT Operations (CISO) at Payoneer. "With Iguazios Data Science Platform, we built a scalable and reliable system which adapts to new threats and enables us to prevent fraud with minimum false positives".

"Payoneer is leading innovation in the industry of digital payments and we are proud to be a part of it" said Asaf Somekh, CEO, Iguazio. "Were glad to see Payoneer accelerating its ability to develop new machine learning based services, increasing the impact of data science on the business."

"Payoneer and Iguazio are a great example of technology innovation applied in real-world use-cases and addressing real market gaps" said Hugo Georlette, CEO, Belocal. "We are eager to continue selling and implementing Iguazios Data Science Platform to make business impact across multiple industries."

Iguazio's Data Science Platform enables Payoneer to bring its most intelligent data science strategies to life. Designed to provide a simple cloud experience deployed anywhere, it includes a low-latency serverless framework, a real-time multi-model data engine, and a modern Python ecosystem running over Kubernetes.

Earlier today, Iguazio also announced having raised $24M from existing and new investors, including Samsung SDS and Kensington Capital Partners. The new funding will be used to drive future product innovation and support global expansion into new and existing markets.

About Iguazio

The Iguazio Data Science Platform enables enterprises to develop, deploy, and manage AI applications at scale. With Iguazio, companies can run AI models in real time, deploy them anywhere (multi-cloud, on-prem, or edge), and bring to life their most ambitious data-driven strategies. Enterprises spanning a wide range of verticals, including financial services, manufacturing, telecoms, and gaming, use Iguazio to create business impact through a multitude of real-time use cases. Iguazio is backed by top financial and strategic investors including Samsung, Verizon, Bosch, CME Group, and Dell. The company is led by serial entrepreneurs and a diverse team of innovators in the USA, UK, Singapore, and Israel. Find out more at http://www.iguazio.com

About Belocal

Since its inception in 2006, Belocal has experienced consistent and sustainable growth by developing strong long-term relationships with its technology partners and by providing tremendous value to its clients. We pride ourselves on delivering the most innovative technology solutions, enabling our customers to lead their market segments and stay ahead of the competition, and on our ability to listen, our attention to detail, and our expertise in innovation. These strengths have enabled us to develop new solutions and services to suit the changing needs of our clients, and to acquire new business by tailoring all our solutions and services to the specific needs of each client.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200127005311/en/

Contacts

Iguazio Media Contact:
Sahar Dolev-Blitental, +972.73.321.0401
press@iguazio.com


Itiviti Partners With AI Innovator Imandra to Integrate Machine Learning Into Client Onboarding and Testing Tools – PRNewswire

NEW YORK, Jan. 30, 2020 /PRNewswire/ -- Itiviti, a leading technology and service provider to financial institutions worldwide, has signed an exclusive partnership agreement with Imandra Inc., the AI pioneer behind the Imandra automated reasoning engine.

Imandra's technology will initially be applied to improving the onboarding of clients onto Itiviti's Managed FIX global connectivity platform, with further plans to swiftly expand the AI capabilities across a number of Itiviti's software solutions and services.

Imandra is the world leader in cloud-scale automated reasoning and has pioneered scalable symbolic AI for financial algorithms. Imandra's technology brings deep advances relied upon in safety-critical industries such as avionics and autonomous vehicles to the financial markets. Imandra is relied upon by top investment banks for the design, testing, and governance of highly regulated trading systems. In 2019, the company expanded outside financial services and is currently under contract with the US Department of Defense for applications of Imandra to safety-critical algorithms.

"Partnerships are integral to Itiviti's overall strategy, by partnering with cutting edge companies like Imandra we can remain at the forefront of technology innovation and continue to develop quality solutions to support our clients. Generally, client onboarding has been a neglected area within the industry for many years, but we believe working with Imandra we can raise the level of automation for testing and QA, while significantly reducing onboarding bottlenecks for our clients. Other areas we are actively exploring to benefit from AI are within the Compliance and Analytics space. We are very excited to be working with Imandra." said Linda Middleditch, EVP, Head of Product Strategy, Itiviti Group.

"This partnership will capture the tremendous opportunities within financial markets for removing manual work and applying much-needed rigorous scientific techniques toward testing of safety critical infrastructure," said Denis Ignatovich, co-founder and co-CEO of Imandra. "We look forward to helping Itiviti empower clients to take full advantage of their solutions, while adding key capabilities." Dr Grant Passmore, co-founder and co-CEO of Imandra, further added, "This partnership is the culmination of many years of deep R&D and we're thrilled to partner with Itiviti to bring our technology to global financial markets on a massive scale."

About Itiviti

Itiviti enables financial institutions worldwide to transform their trading and capture tomorrow. With innovative technology, deep expertise and a dedication to service, we help customers seize market opportunities and guide them through regulatory change.

Top-tier banks, brokers, trading firms and institutional investors rely on Itiviti's solutions to service their clients, connect to markets, trade smarter in all asset classes by consolidating trading platforms and leverage automation to move faster.

A global technology and service provider, we offer the most innovative, consistent, and reliable connectivity and trading solutions available.

With presence in all major financial centres and serving around 2,000 clients in over 50 countries, Itiviti delivers on a global scale.

For more information, please visit www.itiviti.com.

Itiviti is owned by Nordic Capital.

About Imandra

Imandra Inc. (www.imandra.ai) is the world leader in cloud-scale automated reasoning, democratizing deep advances in algorithm analysis and symbolic AI to make algorithms safe, explainable, and fair. Imandra has been deep in R&D and industrial pilots over the past five years and recently closed a $5 million seed round led by several top deep-tech investors in the US and UK. Imandra is headquartered in Austin, TX, and has offices in the UK and continental Europe.

For further information, please contact:

Itiviti

Linda Middleditch, EVP, Head of Product Strategy
Tel: +44 796 82 126 24
Email: linda.middleditch@itiviti.com

George Rosenberger, Head of Product Strategy, Client Connectivity Service
Tel: +
Email: george.rosenberger@itiviti.com

Christine Blinke, EVP, Head of Marketing & Communications
Tel: +46 739 01 02 01
Email: christine.blinke@itiviti.com

Imandra

Denis Ignatovich, co-CEO
Tel: +44 20 3773 6225
Email: denis@imandra.ai

Grant Passmore, co-CEO
Tel: +1 512 629 4038
Email: grant@imandra.ai

This information was brought to you by Cision http://news.cision.com

https://news.cision.com/itiviti-group-ab/r/itiviti-partners-with-ai-innovator-imandra-to-integrate-machine-learning-into-client-onboarding-and-,c3021540


SOURCE Itiviti Group AB


Top Machine Learning Services in the Cloud – Datamation

Machine Learning services in the cloud are a critical area of the modern computing landscape, providing a way for organizations to better analyze data and derive new insights. Accessing these services via the cloud tends to be efficient in terms of cost and staff hours.

Machine Learning (often abbreviated as ML) is a subset of Artificial Intelligence (AI) and attempts to 'learn' from data sets in several different ways, including both supervised and unsupervised learning. There are many different technologies that can be used for machine learning, with a variety of commercial tools as well as open source frameworks.

While organizations can choose to deploy machine learning frameworks on premises, it is typically a complex and resource-intensive exercise. Machine Learning benefits from specialized hardware, including inference chips and optimized GPUs. Machine Learning frameworks can also be challenging to deploy and configure properly. This complexity has led to the rise of Machine Learning services in the cloud, which provide the right hardware and optimally configured software that enable organizations to easily get started with Machine Learning.

There are several key features that are part of most machine learning cloud services.

- AutoML - The automated Machine Learning feature automatically helps to build the right model.
- Machine Learning Studio - The studio concept is all about providing a developer environment where machine learning models and data modelling scenarios can be built.
- Open source framework support - The ability to support an existing framework such as TensorFlow, MXNet, and Caffe is important as it helps to enable model portability (see the sketch below).
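To make the portability point concrete, here is a minimal sketch of exporting a trained model in a framework-standard artifact that cloud services can import. It assumes TensorFlow is installed; the toy data, model, and directory name are all illustrative, and the exact save call varies slightly across TensorFlow/Keras versions.

```python
# Minimal portability sketch: train a tiny model locally, then export it in
# the TensorFlow SavedModel directory format. Toy data and paths are
# illustrative only, not tied to any specific cloud vendor.
import numpy as np
import tensorflow as tf

x = np.array([[0.0], [1.0], [2.0], [3.0]], dtype=np.float32)
y = 2.0 * x  # learn the toy mapping y = 2x

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=200, verbose=0)

# Export to a portable SavedModel directory; on newer Keras versions,
# model.export("exported_model") is the equivalent call.
tf.saved_model.save(model, "exported_model")

# The same directory can later be reloaded locally or uploaded to a cloud service.
reloaded = tf.saved_model.load("exported_model")
```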

When evaluating the different options for machine learning services in the cloud, consider the following criteria:

In this Datamation top companies list, we spotlight the vendors that offer the top machine learning services in the cloud.

Value proposition for potential buyers: Alibaba is a great option for users that have machine learning needs where data sets reside around the world and especially in Asia, where Alibaba is a leading cloud service.

Value proposition for potential buyers: Amazon Web Services has the broadest array of machine learning services in the cloud today, leading with its SageMaker portfolio that includes capabilities for building, training and deploying models in the cloud.

Value proposition for potential buyers: Google's set of Machine Learning services are also expansive and growing, with both generic as well as purpose built services for specific use-cases.

Value proposition for potential buyers: IBM Watson Machine Learning enables users to run models on any cloud, or just on the IBM Cloud.

Value proposition for potential buyers: For organizations that have already bought into the Microsoft Azure cloud, Azure Machine Learning is a good fit, providing a cloud environment to train, deploy, and manage machine learning models.

Value proposition for potential buyers: Oracle Machine Learning is a useful tool for organizations already using Oracle Cloud applications, helping them build data mining notebooks.

Value proposition for potential buyers: Salesforce Einstein is a purpose built machine learning platform that is tightly integrated with the Salesforce platform.


Combating the coronavirus with Twitter, data mining, and machine learning – TechRepublic

Social media can send up an early warning sign of illness, and data analysis can predict how it will spread.

The coronavirus illness (nCoV) is now an international public health emergency, bigger than the SARS outbreak of 2003. Unlike SARS, this time around scientists have better genome sequencing, machine learning, and predictive analysis tools to understand and monitor the outbreak.

During the SARS outbreak, it took five months for scientists to sequence the virus's genome. However, the first 2019-nCoV case was reported in December, and scientists had the genome sequenced by January 10, only a month later.

Researchers have been using mapping tools to track the spread of disease for several years. Ten European countries started Influenza Net in 2003 to track flu symptoms as reported by individuals, and the American version, Flu Near You, started a similar service in 2011.

Lauren Gardner, a civil engineering professor at Johns Hopkins and the co-director of the Center for Systems Science and Engineering, led the effort to launch a real-time map of the spread of the 2019-nCoV. The site displays statistics about deaths and confirmed cases of coronavirus on a worldwide map.

Este Geraghty, MD, MS, MPH, GISP, chief medical officer and health solutions director at Esri, said that since the SARS outbreak in 2003 there has been a revolution in applied geography through web-based tools.

"Now as we deploy these tools to protect human lives, we can ingest real-time data and display results in interactive dashboards like the coronavirus dashboard built by Johns Hopkins University using ArcGIS," she said.


With this outbreak, scientists have another source of data that did not exist in 2003: Twitter and Facebook. In 2014, Chicago's Department of Innovation and Technology built an algorithm that used social media mining and illness prediction technologies to target restaurant inspections. It worked: The algorithm found violations about 7.5 days before the normal inspection routine did.

Theresa Do, MPH, leader of the Federal Healthcare Advisory and Solutions team at SAS, said that social media can be used as an early indicator that something is going on.

"When you're thinking on a world stage, a lot of times they don't have a lot of these technological advances, but what they do have is cell phones, so they may be tweeting out 'My whole village is sick, something's going on here,' she said.

Do said an analysis of social media posts can be combined with other data sources to predict who is most likely to develop illnesses like the coronavirus.

"You can use social media as a source but then validate it against other data sources," she said. "It's not always generalizable (is generalizable a word?), but it can be a sentinel source."

Do said predictive analytics has made significant advances since 2003, including refining the ability to combine multiple data sources. For example, algorithms can look at names on plane tickets and compare that information with data from other sources to predict who has been traveling to certain areas.

"Algorithms can allow you to say 'with some likelihood' it's likely to be the same person," she said.

The current challenge is identifying gaps in the data. She said that researchers have to balance between the need for real-time data and privacy concerns.

"If you think about the different smartwatches that people wear, you can tell if people are active or not and use that as part of your model, but people aren't always willing to share that because then you can track where someone is at all times," she said.

Do said that the coronavirus outbreak resembles the SARS outbreak, but that governments are sharing data more openly this time.

"We may be getting a lot more positives than they're revealing and that plays a role in how we build the models," she said. "A country doesn't want to be looked at as having the most cases but that is how you save lives."


This map from Johns Hopkins shows reported cases of 2019-nCoV as of January 30, 2020 at 9:30 pm. The yellow line in the graph is cases outside of China while the orange line shows reported cases inside the country.

Image: 2019-nCoV Global Cases by Johns Hopkins Center for Systems Science and Engineering


The ML Times Is Growing A Letter from the New Editor in Chief – Machine Learning Times – machine learning & data science news – The Predictive…

Dear Reader,

As of the beginning of January 2020, it's my great pleasure to join The Machine Learning Times as editor in chief! I've taken over the main editorial duties from Eric Siegel, who founded the ML Times (and also founded the Predictive Analytics World conference series). As you've likely noticed, we've renamed what was until recently The Predictive Analytics Times to The Machine Learning Times. In addition to a new, shiny name, this rebranding corresponds with new efforts to expand and intensify our breadth of coverage. As editor in chief, I'm taking the lead in this growth initiative. We're growing the ML Times both quantitatively and qualitatively: more articles, more writers, and more topics. One particular area of focus will be to increase our coverage of deep learning.

And speaking of deep learning, please consider joining me at this summer's Deep Learning World 2020, May 31 to June 4 in Las Vegas, the co-located sister conference of Predictive Analytics World and part of Machine Learning Week. For the third year, I am chairing and moderating a broad-ranging lineup of the latest industry use cases and applications in deep learning. This year, DLW features a new track on large-scale deep learning deployment. You can view the full agenda here. In the coming months, the ML Times will be featuring interviews with the speakers, giving you sneak peeks into the upcoming conference presentations.

In addition to supporting the community in these two roles with the ML Times and Deep Learning World, I am a fellow analytics practitioner; yes, I practice what I preach! To learn more about my work leading and executing advanced data science projects for high-tech firms and major research universities in Silicon Valley, click here.

And finally, attention all writers: whether you've published with us in the past or are considering publishing for the very first time, we'd love to see original content submissions from you. Published articles gain strong exposure on our site, as well as within the monthly ML Times email send. If you currently publish elsewhere, such as on a personal blog, consider publishing items as an article with us first, and then on your own blog two weeks thereafter (per our editorial guidelines). Doing so would give you the opportunity to gain our readers' eyes in addition to those you already reach.

I'm excited to lead the ML Times into a strong year. We've already got a good start, with more exciting original content lined up for this and coming months. Please feel free to reach out to me with any feedback on our published content or if you are interested in submitting articles for consideration. For general inquiries, see the information on our editorial page and the contact information there. And to reach out to me directly, connect with me on LinkedIn.

Thanks for reading!

Best Regards,

Luba Gloukhova
Editor in Chief, The Machine Learning Times
Founding Chair, Deep Learning World


Euro machine learning startup plans NYC rental platform, the punch list goes digital & other proptech news – The Real Deal

New York City rentals (Credit: iStock)

Digital marketplace gets a boost

CRE digital marketplace CREXi nabbed $30 million in a Series B round led by Mitsubishi Estate Company, Industry Ventures, and Prudence Holdings. The new funds will help them build out a subscription service aimed at brokers and an analytics service that highlights trends in the industry. The company wants to become the go-to platform for every step in the CRE process, from marketing to sale.

Dude, where's my tech-fueled hotel chain?

Ashton Kutcher's Sound Ventures and travel-focused VC firm Thayer Ventures have gotten behind hospitality startup Life House, leading a $30 million Series B round. The company runs a boutique hotel chain as well as a management platform, which gives hotel owners access to AI-based pricing and automated financial accounting. Life House has over 800 rooms across cities such as Miami and Denver, with plans to expand to 25 hotels by next year.

Working from home

As the deadly coronavirus outbreak becomes more serious by the hour, WeWork said it is temporarily closing 55 locations in China. The struggling co-working company encouraged employees at these sites to work from home or in private rooms to keep from catching the virus. Also this week, the startup closed a three-year deal to provide office space for 250 employees of gym membership company Gympass, per Reuters. WeWork's owner SoftBank is a minority investor in Gympass, so it looks like Masa Son is using some parts of his portfolio to prop up others.

300,000

That's how many listings rental platform/flatmate matcher Badi has across London, Berlin, Madrid, and Barcelona. The Barcelona-based company claims to use machine-learning technology to match tenants and rooms, and plans on hopping across the pond to New York City within the year. It's an interesting market to enter: though most people use a platform like StreetEasy to find an apartment with a traditional landlord, few established companies have cracked the sublet game without running afoul of New York City's rental laws. In effect, Badi would likely be competing with Facebook groups such as Gypsy Housing plus wanna-be-my-roommate startups like Roomi and SpareRoom. Badi is backed by Goodwater Capital, Target Global, Spark Capital, and Mangrove Capital. The firm has raised over $45 million in VC funding since its founding in 2015.

Pink slips at Compass

Uh oh, yet another SoftBank-funded startup is laying off employees. Up to 40 employees of tech brokerage Compass in the IT, marketing, and M&A departments will be getting pink slips this week. Sources told E.B. Solomont that the nationwide cuts are part of a reorganization to introduce a new Agent Experience Team that will take over onboarding and training new agents from former employees. It's a small number of cuts compared to the 18,000 employees Compass has across the U.S., but it isn't a great look in today's business climate.

Getting ready to move

As SoftBank-backed hospitality startup Oyo continues to cut back, their arch nemesis RedDoorz just launched a new co-living program in Indonesia. They're targeting young professionals and college students with the KoolKost service, dishing out shared units with flexible leases and free WiFi. Their main business, like Oyo's, is running a network of budget hotels across Southeast Asia. We'll see if co-living will help them avoid some of Oyo's profitability problems.

Homes on Olympus

It's no secret that it can be a pain to figure out a place to live when work needs you to move to a new city for a bit. You can take your pick between bland corporate housing and Airbnbs designed for quick vacations. That's where Zeus comes in (not with a thunderbolt, but with a corporate housing platform).

Zeus signs two-year minimum leases with landlords, furnishes the apartments with couches meant to look chic, and rents them out to employees for 30 days or more. They currently manage around 2,000 furnished homes with the goal of filling a newly added apartment within 10 days.

Corporate housing is a competitive space, with startups like Domio and Sonder also trying to lure in business travelers. You'd think that Zeus would have to go one-on-one with Airbnb, but the two companies actually have a partnership. The short-term rental giant lists Zeus properties on its platform and invested in the company as part of a $55 million Series B round last month. They're trying to keep competition close.

Punch lists go digital

Home renovations platform Punch List just scored $4 million in a seed round led by early-stage VC funds Bling Capital and Bedrock Capital, per Crunchbase. The platform lets homeowners track project progress and gives contractors a place to send digital invoices, all on a newly launched app. The company wants to make the frustrating process of remodeling as digital as possible.


JP Morgan expands dive into machine learning with new London research centre – The TRADE News

JP Morgan is expanding its foray into machine learning and artificial intelligence (AI) with the launch of a new London-based research centre, as it explores how it can use the technology for new trading solutions.

The US investment bank has recently launched a Machine Learning Centre of Excellence (ML CoE) in London and has hired Chak Wong, who will oversee a new team of machine learning engineers, technologists, data engineers, and product managers.

Wong was most recently a professor at the Hong Kong University of Science and Technology, where he taught Masters and PhD level courses on AI and derivatives. He was also a senior quant trader at Morgan Stanley and Goldman Sachs in London.

According to JP Morgan's website, the ML CoE teams "partner across the firm to create and share Machine Learning Solutions for our most challenging business problems." The bank hopes the expansion of the machine learning centre to Europe will accelerate the deployment of the technology in regions outside of the US.

JP Morgan will look to build on the success of a similar New York-based centre it launched in 2018 under the leadership of Samik Chandarana, head of corporate and investment banking applied AI and machine learning.

These ventures include the application of the technology to provide an optimised execution tool in FX algo trading, and the development of Robotrader as a tool to automate pricing and hedging of vanilla equity options, using machine learning.

In November last year, JP Morgan also made a strategic investment in FinTech firm Limeglass, which deploys AI, machine learning and natural language processing (NLP) to analyse institutional research.

AI and machine learning technology has been touted to revolutionise quantitative and algorithmic trading techniques. Many believe its ability to quantify and analyse huge amounts of data will enable them to make more informed investment decisions. In addition, as data sets become more complex, trading strategies are increasingly being built around new machine and deep learning tools.

Speaking at the Gaining the Edge Hedge Fund Leadership conference in New York last year, representatives from the hedge fund and allocator industry discussed the significant impact the technology will have on investment strategies and processes.

"AI and machine learning is going to raise the bar across everything. Those that are not paying attention to it now will fall behind," said one panellist from a $6 billion alternative investment manager, speaking under Chatham House Rules.


This tech firm used AI & machine learning to predict Coronavirus outbreak; warned people about danger zones – Economic Times

A couple of weeks into the coronavirus outbreak, the disease has become a full-blown pandemic. According to official Chinese statistics, more than 130 people have died from the mysterious virus.

Contagious diseases may be diagnosed by men and women in face masks and lab coats, but warning signs of an epidemic can be detected by computer programmers sitting thousands of miles away. Around the tenth of January, news of a flu outbreak in China's Hubei province started making its way to mainstream media. It then spread to other parts of the country and, subsequently, overseas.

But the first to report an impending biohazard was BlueDot, a Canadian firm that specializes in infectious disease surveillance. It predicted an impending outbreak of coronavirus on December 31 using an artificial intelligence-powered system that combs through animal and plant disease networks, news reports on vernacular websites, government documents, and other online sources to warn its clients against traveling to danger zones like Wuhan, well before foreign governments started issuing travel advisories.

BlueDot further used global airline ticketing data to correctly predict that the virus would spread to Seoul, Bangkok, Taipei, and Tokyo. Machine learning and natural language processing techniques were also employed to create models that process large amounts of data in real time, including airline ticketing data, news reports in 65 languages, and animal and plant disease networks.


"We know that governments may not be relied upon to provide information in a timely fashion. We can pick up news of possible outbreaks, little murmurs on forums or blogs, indications of some kind of unusual events going on," Kamran Khan, founder and CEO of BlueDot, told a news magazine.

The death toll from the coronavirus rose to 81 in China, with thousands of new cases registered each day. The government has extended the Lunar New Year holiday by three days to restrict the movement of people across the country and thereby lower the chances of more people contracting the respiratory disease.

However, a lockdown of the affected area could be detrimental to public health, putting the domestic population at risk even as medical supplies dwindle, and causing much anger and resentment.



3 books to get started on data science and machine learning – TechTalks

Image credit: Depositphotos

This post is part of AI education, a series of posts that review and explore educational content on data science and machine learning.

With data science and machine learning skills being in high demand, there's increasing interest in careers in both fields. But with so many educational books, video tutorials, and online courses on data science and machine learning, finding the right starting point can be quite confusing.

Readers often ask me for advice on the best roadmap for becoming a data scientist. To be frank, there's no one-size-fits-all approach, and it all depends on the skills you already have. In this post, I will review three very good introductory books on data science and machine learning.

Based on your background in math and programming, the two fundamental skills required for data science and machine learning, you'll surely find one of these books a good place to start.

Data scientists and machine learning engineers sit at the intersection of math and programming. To become a good data scientist, you don't need to be a crack coder who knows every single design pattern and code optimization technique. Neither do you need to have an MSc in math. But you must know just enough of both to get started. (You do need to up your skills in both fields as you climb the ladder of learning data science and machine learning.)

If you remember your high school mathematics, then you have a strong base to begin the data science journey. You don't necessarily need to recall every formula they taught you in school. But concepts of statistics and probability such as medians and means, standard deviations, and normal distributions are fundamental.
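As a quick refresher, those concepts fit in a few lines of standard-library Python; the sample numbers below are made up:

```python
# The statistics vocabulary mentioned above, using only the standard library.
# The data points are invented for illustration.
import statistics

scores = [62, 68, 70, 71, 73, 75, 78, 95]

mean = statistics.mean(scores)      # arithmetic average
median = statistics.median(scores)  # middle value, robust to the 95 outlier
stdev = statistics.stdev(scores)    # sample standard deviation

print(f"mean={mean:.1f} median={median} stdev={stdev:.1f}")

# A normal distribution summarizes data by just mean and standard deviation;
# here we ask what fraction of such a population would score below 80.
dist = statistics.NormalDist(mu=mean, sigma=stdev)
print(f"P(score < 80) = {dist.cdf(80):.2f}")
```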

On the coding side, knowing the basics of popular programming languages (C/C++, Java, JavaScript, C#) should be enough. You should have a solid understanding of variables, functions, and program flow (if-else, loops) and a bit of object-oriented programming. Python knowledge is a strong plus for a few reasons: First, most data science books and courses use Python as their language of choice. Second, the most popular data science and machine learning libraries are available for Python. And finally, Python's syntax and coding conventions are different from other languages such as C and Java. Getting used to it takes a bit of practice, especially if you're used to coding with curly brackets and semicolons.
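For instance, here is the difference in miniature; a C or Java programmer would expect braces and semicolons where Python uses indentation alone:

```python
# Python style: indentation defines blocks; no braces or semicolons needed.
total = 0
for n in range(1, 6):
    if n % 2 == 0:
        total += n
print(total)  # prints 6 (2 + 4)
```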

Written by Sinan Ozdemir, Principles of Data Science is one of the best intros to data science that I've read. The book keeps the right balance between math and coding, theory and practice.

Using examples, Ozdemir takes you through the fundamental concepts of data science such as different types of data and the stages of data science. You will learn what it means to clean your data, normalize it and split it between training and test datasets.

The book also contains a refresher on basic mathematical concepts such as vector math, matrices, logarithms, Bayesian statistics, and more. Every mathematical concept is interspersed with coding examples and introductions to relevant Python data science libraries for analyzing and visualizing data. But you have to bring your own Python skills. The book doesn't have any Python crash course or introductory chapter on the programming language.

What makes the learning curve of this book especially smooth is that it doesn't go too deep into the theories. It gives you just enough knowledge so that you can make optimal use of Python libraries such as Pandas and NumPy, and classes such as DataFrame and LinearRegression.
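As an illustration of that workflow (my own sketch, not an excerpt from the book; the toy dataset is invented), pairing a Pandas DataFrame with scikit-learn's LinearRegression looks like this:

```python
# Minimal sketch of the DataFrame -> LinearRegression workflow.
# Requires pandas and scikit-learn; the toy data is invented.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "hours_studied": [1, 2, 3, 4, 5, 6, 7, 8],
    "exam_score":    [52, 55, 61, 64, 70, 74, 79, 83],
})

X = df[["hours_studied"]]
y = df["exam_score"]

# A clean split between training and test data, as described earlier.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("coefficient:", model.coef_[0])
print("R^2 on held-out data:", model.score(X_test, y_test))
```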

Granted, this is not a deep dive. If you're the kind of person who wants to get to the bottom of every data science and machine learning concept and learn the logic behind every library and function, Principles of Data Science will leave you a bit disappointed.

But again, as I mentioned, this is an intro, not a book that will put you on a data science career level. It's meant to familiarize you with what this growing field is. And it does a great job at that, bringing together all the important aspects of a complex field in less than 400 pages.

At the end of the book, Ozdemir introduces you to machine learning concepts. Compared to other data science textbooks, this section of Principles of Data Science falls a bit short, both in theory and practice. The basics are there, such as the difference between supervised and unsupervised learning, but I would have liked a bit more detail on how different models work.

The book does give you a taste of different ML algorithms such as regression models, decision trees, K-means, and more advanced topics such as ensemble techniques and neural networks. The coverage is enough to whet your appetite to learn more about machine learning.

As the name suggests, Data Science from Scratch takes you through data science from the ground up. The author, Joel Grus, does a great job of showing you all the nitty-gritty details of coding data science. And the book has plenty of examples and exercises to go with the theory.

The book provides a Python crash course, which is good for programmers who have good knowledge of another programming language but don't have any background in Python. What's really good about Grus's intro to Python is that aside from the very basic stuff, he takes you through some of the advanced features for handling arrays and matrices that you won't find in general Python tutorial textbooks, such as list comprehensions, assertions, iterables and generators, and other very useful tools.
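For a flavor of those features, here is a toy snippet of my own (not the book's code):

```python
# The "advanced basics" such a crash course covers, in a few lines.
squares = [n * n for n in range(5)]   # list comprehension
assert squares == [0, 1, 4, 9, 16]    # assertion as an inline sanity check

def countdown(n):
    """A generator: produces values lazily instead of building a list."""
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))  # [3, 2, 1] -- any iterable can be consumed this way
```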

Moreover, the Second Edition of Data Science from Scratch, published in 2019, leverages some of the advanced features of Python 3.6, including type annotations (which you'll love if you come from a strongly typed language like C++).

What makes Data Science from Scratch a bit different from other data science textbooks is its unique way of doing everything from scratch. Instead of introducing you to NumPy and Pandas functions that will calculate coefficients and, say, mean absolute error (MAE) and mean squared error (MSE), Grus shows you how to code it yourself.
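In that spirit, a from-scratch MAE and MSE might look like this; this is an illustrative sketch, not Grus's actual code:

```python
# From-scratch error metrics in plain Python: no NumPy, fine for learning,
# but not built for production speed.
def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_squared_error(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

actual = [3.0, 5.0, 7.0]
predicted = [2.5, 5.5, 8.0]
print(mean_absolute_error(actual, predicted))  # 0.666...
print(mean_squared_error(actual, predicted))   # 0.5
```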

He does, of course, remind you that the book's sample code is meant for practice and education and will not match the speed and efficiency of professional libraries. At the end of each chapter, he provides references to documentation and tutorials of the Python libraries that correspond to the topic you have just learned. But the from-scratch approach is fun nonetheless, especially if you're one of those I-have-to-know-what-goes-on-under-the-hood type of people.

One thing you'll have to consider before diving into this book is that you'll need to bring your math skills with you. In the book, Grus codes fundamental math functions, starting from simple vector math to more advanced statistics concepts such as calculating standard deviations, errors, and gradient descent. However, he assumes that you already know how the math works. I guess it's okay if you're fine with just copy-pasting the code and seeing it work. But if you've picked up this book because you want to make sense of everything, then have your calculus textbook handy.
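Gradient descent is a good example of the math he expects you to bring; a from-scratch version (again my sketch, not the book's code) needs only the derivative of the function being minimized:

```python
# From-scratch gradient descent: minimize f(x) = (x - 3)^2 using its
# derivative f'(x) = 2 * (x - 3). Values are illustrative.
def grad(x):
    return 2.0 * (x - 3.0)

x = 0.0    # starting guess
lr = 0.1   # learning rate
for _ in range(100):
    x -= lr * grad(x)  # step downhill along the gradient

print(round(x, 4))  # ~3.0, the minimum of f
```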

After the basics, Data Science from Scratch goes into machine learning, covering various algorithms, including the different flavors of regression models and decision trees. You also get to delve into the basics of neural networks followed by a chapter on deep learning and an introduction to natural language processing.

In short, I would describe Data Science with Python as a fully hands-on introduction to data science and machine learning. It's the most practice-driven book on data science and machine learning that I've read. The authors have done a great job of bringing together the right data samples and practice code to get you acquainted with the principles of data science and machine learning.

The book contains minimal theoretical content and mostly teaches you by taking you through coding labs. If you have a decent computer and an installation of Anaconda or another Python package that comes bundled with Jupyter Notebooks, then you can probably go through all the exercises with minimal effort. I highly recommend writing the code yourself and avoiding copy-pasting it from the book or sample files, since the entire goal of the book is to learn through practice.

You'll find no Python intro here. You'll dive straight into NumPy, Pandas, and scikit-learn. There's also no deep dive into mathematical concepts such as correlations, error calculations, z-scores, etc., so you'll need to get help from your math book whenever you need a refresher on any of the topics.

Alternatively, you can just type in the code and see Python's libraries work their magic. Data Science with Python does a decent job of showing you how to put together the right pieces for any data science and machine learning project.

Data Science with Python provides a solid intro to data preparation and visualization, and then takes you through a rich assortment of machine learning algorithms as well as deep learning. There are plenty of good examples and templates you can use for other projects. The book also gives an intro to XGBoost, a very useful optimization library, and the Keras neural network library. You'll also get to fiddle around with convolutional neural networks (CNNs), the cornerstone of current advances in computer vision.
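For a taste of what those labs build, here is a minimal Keras CNN definition; TensorFlow is assumed, and the layer sizes are arbitrary illustrative choices rather than the book's code:

```python
# Minimal CNN sketch: two convolution/pooling stages feeding a 10-class
# softmax head, sized for 28x28 grayscale images. All sizes are arbitrary.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),        # e.g. grayscale digit images
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),   # 10-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```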

Before starting this book, I strongly recommend that you go through a gentler introductory book that covers more theory, such as Ozdemir's Principles of Data Science. It will make the ride less confusing. The combination of the two will leave you with a very strong foundation to tackle more advanced topics.

These are just three of the many data science books that are out there. If you've read other awesome books on the topic, please share your experience in the comments section. There are also plenty of great interactive online courses, like Udemy's Machine Learning A-Z: Hands-On Python & R In Data Science (I will be reviewing this one in the coming weeks).

While an intro to data science will give you a good foothold into the world of machine learning and the broader field of artificial intelligence, there's a lot of room for expanding that knowledge.

To build on this foundation, you can take a deeper dive into machine learning. There are plenty of good books and courses out there. One of my favorites is Aurelien Geron's Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow (also scheduled for review in the coming months). You can also go deeper into one of the sub-disciplines of ML and deep learning, such as CNNs, NLP, or reinforcement learning.

Artificial intelligence is complicated, confusing, and exciting at the same time. The best way to understand it is to never stop learning.


In Coronavirus Response, AI is Becoming a Useful Tool in a Global Outbreak – Machine Learning Times – machine learning & data science news – The…

By: Casey Ross, National Technology Correspondent, StatNews.com

Surveillance data collected by healthmap.org show confirmed cases of the new coronavirus in China.

Artificial intelligence is not going to stop the new coronavirus or replace the role of expert epidemiologists. But for the first time in a global outbreak, it is becoming a useful tool in efforts to monitor and respond to the crisis, according to health data specialists.

In prior outbreaks, AI offered limited value, because of a shortage of data needed to provide updates quickly. But in recent days, millions of posts about coronavirus on social media and news sites are allowing algorithms to generate near-real-time information for public health officials tracking its spread.

"The field has evolved dramatically," said John Brownstein, a computational epidemiologist at Boston Children's Hospital who operates a public health surveillance site called healthmap.org that uses AI to analyze data from government reports, social media, news sites, and other sources.

"During SARS, there was not a huge amount of information coming out of China," he said, referring to a 2003 outbreak of an earlier coronavirus that emerged from China, infecting more than 8,000 people and killing nearly 800. "Now, we're constantly mining news and social media."

Brownstein stressed that his AI is not meant to replace the information-gathering work of public health leaders, but to supplement their efforts by compiling and filtering information to help them make decisions in rapidly changing situations.

"We use machine learning to scrape all the information, classify it, tag it, and filter it, and then that information gets pushed to our colleagues at WHO who are looking at this information all day and making assessments," Brownstein said. "There is still the challenge of parsing whether some of that information is meaningful or not."

These AI surveillance tools have been available in public health for more than a decade, but the recent advances in machine learning, combined with greater data availability, are making them much more powerful. They are also enabling uses that stretch beyond baseline surveillance, to help officials more accurately predict how far and how fast outbreaks will spread, and which types of people are most likely to be affected.

"Machine learning is very good at identifying patterns in the data, such as risk factors that might identify zip codes or cohorts of people that are connected to the virus," said Don Woodlock, a vice president at InterSystems, a global vendor of electronic health records that is helping providers in China analyze data on coronavirus patients.



Technologies of the future, but where are AI and ML headed to? – YourStory

Today, when we look around, the technological advances of recent years are immense. Driverless cars, hands-free devices that can turn on the lights, and robots working in factories all prove that intelligent machines are possible.

In the last four years in the Indian startup ecosystem, the terms used (overused, rather) more than funding, valuation, and exit were artificial intelligence (AI) and machine learning (ML). We also saw investors readily putting their money into startups that remotely used, or claimed to use, these emerging technologies.

From deeptech, ecommerce, fintech, and conversational chatbots to mobility, foodtech, and healthcare, AI and ML have transformed most industry sectors today.

The industry has swiftly moved from asking programmers to feed tonnes of code to the machine to acquiring terabytes of data and crunching it to build relevant logic.

Sameer Dhanrajani, Co-Founder and Chief Executive Officer of AI Advisory & Consulting Firm AIQRATE, says,

"A subset of artificial intelligence, machine learning allows systems to make predictions and crucial business decisions, driven by data and pattern-based experiences. Without humans having to intervene, the algorithms that are fed to the systems are helping them develop and improve their own models and understanding of a certain use-case."

According to a study carried out by Analytics India and AnalytixLabs, the Indian data analytics market is expected to double its size by 2020, with about 24 percent being attributed to Big Data. It said that almost 60 percent of the analytics revenue across India comes from exports of analytics to the USA. Domestic revenue accounts for only four percent of the total analytics revenue across the country.

The BFSI industry accounts for almost 37 percent of the total analytics market while generating almost $756 million. While marketing and advertising comes second at 26 percent, ecommerce contributes to about 15 percent.

At present, the average paycheck sizes of AI and ML engineers in India start from Rs 10 lakh per annum and the maximum cap often crosses Rs 50 lakh per annum.

According to a report by Great Learning, an edtech startup for professional education, India is expected to see 1.5 lakh new openings in Data Science in 2020, an increase of about 62 percent as compared to that of 2019. Currently, 70 percent of job postings in this sector are for Data Scientists with less than five years of work experience.

Shantanu Bhattacharya, a data scientist at Locus, told YourStory earlier that it is wrong to look at machine learning as a tool or a career path; it is simply a convenient means of developing training models to solve problems in general.

The fluid nature of data science allows people from multiple fields of expertise to come in and crack it. Shantanu believes that if JRR Tolkien, brilliant linguist that he was, had pursued data science to develop NLP models, he would have been the greatest NLP expert ever; that is the kind of liberty and scope data science offers.

Needless to say, AI and ML can exponentially amplify the profitability and efficiency of a business by automating many tasks. Naturally, the trend has spread to the jobs market, where the need for experts and engineers in these technologies keeps going up and shows no sign of slowing down.

Thanks to the hefty paychecks and faster career growth, the role of machine learning engineers has claimed the top spot in job portals.

Hari Krishnan Nair, the co-founder of Great Learning, says,

"For a country like India, acquiring new skills is not a luxury but a necessary requirement, and the trends of upskilling and reskilling are currently on the rise to complement the same. But data science, machine learning, and artificial intelligence are fields where mere book-reading and formulaic interpretation and execution just do not cut it."

If one aspires to a competitive career in futuristic technologies, machine learning and data science demand a broad, fundamental understanding of probability, statistics, and mathematics.

To break the myth that this market is only for programmers and software developers: machine learning involves an understanding of basic programming languages (Python, SQL, R), linear algebra and calculus, as well as inferential and descriptive statistics.

Siddharth Das, Founder of Univ.ai, an early stage edtech startup that focuses on teaching these tools, says,

"For a business world that thrives on data and its leverage, the science around it is where the employment economy is moving. While the youth of the country is anxious about how rapid their upskilling ought to be, it is no easy mountain to climb to rightfully master what is often referred to as the art of data science."

Most professionals say it takes a consistent routine of learning for almost six to eight months to become an expert in this field. At a time when the industry is on the verge of fully migrating to NLP and neural networks, which are a significant part of future deep tech, now is a better time than ever to start learning machine learning.

With rapidly changing technological paradigms, predicting how the world is going to run is close to impossible. Being prepared for anything is the best one can manage at the moment.

(Edited by Megha Reddy)


How Machine Learning Will Lead to Better Maps – Popular Mechanics

Despite being one of the richest countries in the world, Qatar lags behind on digital maps. While the country is adding new roads and constantly improving old ones in preparation for the 2022 FIFA World Cup, Qatar isn't a high priority for the companies that actually build out maps, like Google.

"While visiting Qatar, weve had experiences where our Uber driver cant figure out how to get where hes going, because the map is so off," Sam Madden, a professor at MIT's Department of Electrical Engineering and Computer Science, said in a prepared statement. "If navigation apps dont have the right information, for things such as lane merging, this could be frustrating or worse."

Madden's solution? Quit waiting around for Google and feed machine learning models a whole buffet of satellite images. It's faster, cheaper, and way easier to obtain satellite images than it is for a tech company to drive around grabbing street-view photos. The only problem: Roads can be occluded by buildings, trees, or even street signs.

So Madden, along with a team composed of computer scientists from MIT and the Qatar Computing Research Institute, came up with RoadTagger, a new piece of software that can use neural networks to automatically predict what roads look like behind obstructions. It's able to guess how many lanes a given road has and whether it's a highway or residential road.

RoadTagger uses a combination of two kinds of neural nets: a convolutional neural network (CNN), which is mostly used in image processing, and a graph neural network (GNN), which helps to model relationships and is useful with social networks. This system is what the researchers call "end-to-end," meaning it's only fed raw data and there's no human intervention.

First, raw satellite images of the roads in question are input to the convolutional neural network. Then, the graph neural network divides the roadway into 20-meter sections called "tiles." The CNN pulls relevant road features out of each tile and shares that data with the other nearby tiles, so information about the road propagates to each tile. If one of them is covered up by an obstruction, RoadTagger can look to the other tiles to predict what's in the obscured one.
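RoadTagger's actual code isn't reproduced here, but the following toy PyTorch sketch conveys the idea: a small CNN embeds each tile image, and a simple message-passing step mixes each tile's features with its neighbors' so that an occluded tile can borrow evidence from visible ones. All layer sizes, the chain graph, and the propagation rule are invented for illustration.

```python
# Toy CNN + graph-propagation sketch (not RoadTagger's code). PyTorch assumed.
import torch
import torch.nn as nn

class TileEncoder(nn.Module):
    """CNN: a 3x64x64 tile image -> a 32-dim feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, 32),
        )

    def forward(self, x):
        return self.net(x)

def propagate(features, adjacency, steps=2):
    """One message-passing flavor: mix each tile with its neighbors' mean."""
    deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
    for _ in range(steps):
        features = 0.5 * features + 0.5 * (adjacency @ features) / deg
    return features

# 5 consecutive tiles along one road; chain adjacency (tile i touches i+/-1).
tiles = torch.randn(5, 3, 64, 64)
adj = torch.zeros(5, 5)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

encoder = TileEncoder()
lane_head = nn.Linear(32, 6)   # classify 1-6 lanes per tile

feats = propagate(encoder(tiles), adj)
lane_logits = lane_head(feats)  # one prediction per tile, even occluded ones
print(lane_logits.shape)        # torch.Size([5, 6])
```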

A given tile may show only two of a road's lanes. While a human can easily tell that a four-lane road partly shrouded by trees is still a four-lane road, a computer normally couldn't make such an inference. RoadTagger creates a more human-like intuition in a machine learning model, the research team says.

"Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks cant do that," Madden said. "Our approach tries to mimic the natural behavior of humans ... to make better predictions."

The results are impressive. In testing out RoadTagger on occluded roads in 20 U.S. cities, the model correctly counted the number of lanes 77 percent of the time and inferred the correct road types 93 percent of the time. In the future, the team hopes to include other new features, like the ability to identify parking spots and bike lanes.

Read more here:
How Machine Learning Will Lead to Better Maps - Popular Mechanics

Want To Be AI-First? You Need To Be Data-First. – Forbes

Data First

Those that implement AI and machine learning projects learn quickly that machine learning projects are not application development projects. Much of the value of a machine learning project rests in the models, training data, and configuration information that guides how the model is applied to the specific machine learning problem. The application code is mostly a means to implement the machine learning algorithms and "operationalize" the machine learning model in a production environment. That's not to say that application code is unnecessary; after all, the computer needs some way to operationalize the machine learning model. But focusing a machine learning project on the application code misses the big picture. If you want to be AI-first for your project, you need to have a data-first perspective.

Use data-centric methodologies and data-centric technologies

Therefore it follows that if you're going to have a data-first perspective, you need to use a data-first methodology. There's certainly nothing wrong with Agile methodologies as a way of iterating towards success, but Agile on its own leaves much to be desired as it's focused on functionality and delivery of application logic. There are already data-centric methodologies out there that have been proven in many real-world scenarios. One of the most popular is the Cross Industry Standard Process for Data Mining (CRISP-DM), which focuses on the steps needed for successful data projects. In the modern age, it makes sense to merge the notably non-agile CRISP-DM with Agile Methodologies to make it more relevant. While this is still a new area for most enterprises implementing AI projects, we see this sort of merged methodology approach to be more successful than trying to shoehorn all the aspects of an AI project into existing application-focused Agile methodologies.

It stands to reason that if you have a data-centric perspective on AI, then you need to pair your data-centric methodologies with data-centric technologies. This means that your choice of tooling to implement all those artifacts detailed above needs to be, first and foremost, data-focused. Don't use code-centric IDEs when you should be using data notebooks. Don't use enterprise integration middleware platforms when you should be using tools that focus on model development and maintenance. Don't use so-called machine learning platforms that are really just a pile of cloud-based technologies or overgrown big data management platforms. The tools you use should support the machine learning goals you need, which are in turn supported by the activities you need to do and the artifacts you need to create. Just because a GPU provider has a toolset doesn't mean that it's the right one to use. Just because a big enterprise vendor or a cloud vendor has a "stack" doesn't mean it's the right one. Start from the deliverables and the machine learning objectives and work your way backwards.

Another big consideration is where and how machine learning models will be deployed - or, in AI-speak, "operationalized." AI models can be implemented in a remarkably wide range of places: from "edge" devices sitting disconnected from the internet to mobile and desktop applications; from enterprise servers to cloud-based instances; and all manner of autonomous vehicles and craft. Each of these locations is a place where AI models and implementations can and do exist. This amount of model operationalization heterogeneity highlights even more how ludicrous the idea of a single machine learning platform is. How can one platform simultaneously provide AI capabilities in a drone, a mobile app, an enterprise implementation, and a cloud instance? Even if you source all this technology from a single vendor, it will be a collection of different tools that sit under a single marketing umbrella rather than a single, cohesive, interoperable platform that makes any sense.

Build data-centric talent

All this methodology and technology can't assemble itself. If you're going to be successful at AI projects, you're going to need to be successful at building an AI team. And if the data-centric perspective is the correct one for AI, then it makes sense that your team also needs to be data-centric. The talent needed to build apps or manage enterprise systems is not the same as the talent needed to build AI models, tune algorithms, work with training data sets, and operationalize ML models. The primary core of your AI team needs to be data scientists, data engineers, and the folks responsible for putting machine learning models into operation. While there's always a need for coding, development, and project management, finding and growing your data-centric talent is key to the long-term success of your AI initiatives.

The primary challenge with building data talent is that it's hard to find and grow, mainly because data isn't code. You need folks who know how to wrangle lots of data sources, compile them into clean data sets, and then extract information needles from data haystacks. In addition, the language of AI is math, not programming logic. So a strong data team is also strong in the right kinds of math: it understands how to select and implement AI algorithms, properly tweak hyperparameters, and properly interpret testing and validation results. Simply guessing and changing training data sets and hyperparameters at random is not a good way to create AI projects that deliver value. As such, data-centric talent grounded in a fundamental understanding of machine learning math and algorithms, combined with an understanding of how to deal with big data sets, is crucial to AI project success.

Prepare to continue to invest for the long haul

It should be pretty obvious at this point that the set of activities for AI is indeed very much data-centric, and that the activities, artifacts, tools, and team need to follow from that data-centric perspective. The biggest challenge is that so much of that ecosystem is still being developed and is not fully available for most enterprises. AI-specific methodologies are still being tested in large-scale projects. AI-specific tools and technologies are still being developed and enhanced, with evolutionary changes released at a rapid pace. AI talent continues to be tight, and is an area where we're just starting to see investment in growing the skill set.

As a result, organizations that need to be successful with AI, even with this data-centric perspective, need to be prepared to invest for the long haul. Find your peer groups to see what methodologies are working for them and continue to iterate until you find something that works for you. Find ways to continuously update your team's skills and methods. Realize that you're on the bleeding edge with AI technology and prepare to reinvest in new technology on a regular basis, or invent your own if need be. Even though the history of AI spans at least seven decades, we're still in the early stages of making AI work for large scale projects. This is like the early days of the Internet or mobile or big data. Those early pioneers had to learn the hard way, making many mistakes before realizing the "right" way to do things. But once those ways were discovered, organizations reaped big rewards. This is where we're at with AI. As long as you have a data-centric perspective and are prepared to continue to invest for the long haul, you will be successful with your AI, machine learning, and cognitive technology efforts.

Go here to read the rest:
Want To Be AI-First? You Need To Be Data-First. - Forbes

I Know Some Algorithms Are Biased–because I Created One – Scientific American

Artificial intelligence and machine learning are becoming common in research and everyday life, raising concerns about how these algorithms work and the predictions they make. For example, when Apple released its credit card over the summer, there were claims that women were given a lower credit limit than otherwise identical men. In response, Sen. Elizabeth Warren warned that women might have been discriminated against by an unknown algorithm.

On its face, her statement appears to contradict the way algorithms work. Algorithms are logical mathematical functions and processes, so how can they discriminate against a person or a certain demographic?

Creating an algorithm that discriminates or shows bias isn't as hard as it might seem, however. When I was a first-year graduate student, my advisor asked me to create a machine-learning algorithm to analyze a survey sent to United States physics instructors about teaching computer programming in their courses. While programming is an essential skill for physicists, many undergraduate physics programs do not offer programming courses, leaving individual instructors to decide whether to teach programming.

The task seemed simple enough. I'd start with an algorithm in Python's scikit-learn library to create my algorithm to predict whether a survey respondent had experience teaching programming. I'd supply the physics instructors' responses to the survey and run the algorithm. My algorithm would then tell me whether the instructors taught programming and which questions on the survey were most useful in making that prediction.
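In scikit-learn terms, that workflow is just a few lines: fit a classifier on the encoded survey responses, then inspect which questions (features) drive its predictions. This is a minimal sketch with randomly generated stand-in data and my own model choice, not the author's actual code or survey:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: each row is one instructor, each column one encoded
# survey question (real responses would be loaded from the survey export).
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(300, 12))   # 12 coded questions
y = rng.integers(0, 2, size=300)         # 1 = has taught programming

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
# With random stand-in data this hovers near chance (~0.5), as it should.
print("held-out accuracy:", clf.score(X_test, y_test))

# Which survey questions were most useful in making the prediction?
for idx in np.argsort(clf.feature_importances_)[::-1][:3]:
    print(f"question {idx}: importance {clf.feature_importances_[idx]:.3f}")
```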

When I did that, however, I noticed a problem. My algorithm kept finding that only the written-response questions (and none of the multiple-choice questions) differentiated the two groups of instructors. When I analyzed those questions using a different technique, I didn't find any differences between the instructors who did and did not teach programming! It turned out that I had been using the wrong algorithm the whole time.

My example may seem silly. So what if I chose the wrong algorithm to predict which instructors teach programming? But what if I had instead been creating a model to predict which patients should receive extra care? Then using the wrong algorithm could be a significant problem.

Yet this isn't hypothetical, as a recent study in Science showed. In the study, researchers examined an algorithm created to find patients who may be good fits for a high-risk care management program. Among white and black patients the algorithm identified as having equal risk, the black patients were in fact sicker. In other words, even though the black patients were sicker, the algorithm saw the two groups as having equal needs.

Just as in my research, the health care company had used the wrong algorithm. The designers created it to predict health care costs rather than the severity of illness. Since white patients have better access to care and hence spend more on health care, the algorithm assigned white patients, who were less ill, the same level of risk as more ill black patients. The researchers claim that similar algorithms are applied to around 200 million Americans each year, so who knows how many lives may have been lost to what the study authors called a racial bias in an algorithm?
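The mechanism is easy to demonstrate with a toy example: when spending is used as a proxy label for medical need, any group that spends less for the same illness is scored as lower risk. The numbers below are invented purely for illustration:

```python
# Two equally sick patients; unequal access to care means unequal spending.
patients = [
    {"name": "patient_a", "severity": 7, "annual_cost": 9000},  # better access
    {"name": "patient_b", "severity": 7, "annual_cost": 4000},  # worse access
]

# A model trained to predict cost ranks patient_a as higher "risk",
# even though both patients need exactly the same level of care.
by_cost = sorted(patients, key=lambda p: p["annual_cost"], reverse=True)
print([p["name"] for p in by_cost])   # ['patient_a', 'patient_b']

# Ranked by actual severity, the two patients tie: the proxy label,
# not the math, is what introduced the bias.
by_need = sorted(patients, key=lambda p: p["severity"], reverse=True)
print([p["name"] for p in by_need])   # a tie in substance
```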

What then can we do to combat this bias? I learned that I used an incorrect algorithm because I visualized my data, saw that my algorithm's predictions were not aligned with what my data or previous research said, and could not remove the discrepancy regardless of how I changed my algorithm. Likewise, to combat any bias, policy ideas need to focus on the algorithms and the data.

To address issues with the algorithm itself, we can push for algorithmic transparency, where anyone could see how an algorithm works and contribute improvements. But given that most commercial machine learning algorithms are considered proprietary information, companies may not be willing to share them.

A more practical route may be to periodically test algorithms for potential bias and discrimination. The companies themselves could conduct this testing, as the House of Representatives' Algorithmic Accountability Act would require, or the testing could be performed by an independent nonprofit accreditation board, such as the proposed Forum for Artificial Intelligence Regularization (FAIR).

To make sure the testing is fair, the data themselves need to be fair. For example, crime-predicting algorithms analyze historical crime data, in which people from racial and ethnic minority groups are overrepresented, and hence the algorithm may make biased predictions even if the algorithm is constructed correctly. Therefore, we need to ensure that representative data sets are available for testing.
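One concrete shape such testing could take is a disaggregated evaluation: compute the model's error rates separately per demographic group and flag large gaps. The sketch below uses a false-negative-rate comparison; the metric choice and toy data are my own illustrative assumptions, not anything prescribed by the proposed legislation:

```python
import numpy as np

def false_negative_rates(y_true, y_pred, group):
    """Per-group false-negative rate: of the truly positive cases in each
    group, what fraction did the model miss?"""
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        rates[g] = float(np.mean(y_pred[mask] == 0))
    return rates

# Toy audit data: actual outcomes, model predictions, group membership.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(false_negative_rates(y_true, y_pred, group))
# {'a': 0.333..., 'b': 1.0} -- a gap this large is exactly what an
# accountability audit should flag for investigation.
```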

Getting these changes to occur will not come easily. As machine learning and artificial intelligence become more essential to our lives, we must ensure our laws and regulations keep pace. Machine learning is already revolutionizing entire industries, and we are only at the beginning of that revolution. We as citizens need to hold algorithm developers and users accountable to ensure that the benefits of machine learning are equitably distributed. By taking appropriate precautions, we can ensure that algorithmic bias is a bug and not a feature of future algorithms.

Read more from the original source:
I Know Some Algorithms Are Biased--because I Created One - Scientific American

Iguazio Deployed by Payoneer to Prevent Fraud with Real-time Machine Learning – Business Wire

NEW YORK--(BUSINESS WIRE)--Iguazio, the data science platform for real time machine learning applications, today announced that Payoneer, the digital payment platform empowering businesses around the world to grow globally, has selected Iguazio's platform to provide its 4 million customers with a safer payment experience. By deploying Iguazio, Payoneer moved from a reactive fraud detection method to proactive prevention with real-time machine learning and predictive analytics.

Payoneer overcomes the challenge of detecting fraud within complex networks with sophisticated algorithms tracking multiple parameters, including account creation times and name changes. Prior to using Iguazio, however, fraud was detected only retroactively, so users could be blocked only after damage had already been done. Payoneer is now able to take the same sophisticated machine learning models built offline and serve them in real time against fresh data. This ensures immediate prevention of fraud and money laundering, with predictive machine learning models identifying suspicious patterns continuously. The cooperation was facilitated by Belocal, a leading data and IT solution integrator for mid-size and enterprise companies.

"We've tackled one of our most elusive challenges with real-time predictive models, making fraud attacks almost impossible on Payoneer," noted Yaron Weiss, VP Corporate Security and Global IT Operations (CISO) at Payoneer. "With Iguazio's Data Science Platform, we built a scalable and reliable system which adapts to new threats and enables us to prevent fraud with minimum false positives."

"Payoneer is leading innovation in the industry of digital payments and we are proud to be a part of it," said Asaf Somekh, CEO, Iguazio. "We're glad to see Payoneer accelerating its ability to develop new machine learning based services, increasing the impact of data science on the business."

"Payoneer and Iguazio are a great example of technology innovation applied in real-world use-cases and addressing real market gaps," said Hugo Georlette, CEO, Belocal. "We are eager to continue selling and implementing Iguazio's Data Science Platform to make business impact across multiple industries."

Iguazio's Data Science Platform enables Payoneer to bring its most intelligent data science strategies to life. Designed to provide a simple cloud experience deployed anywhere, it includes a low-latency serverless framework, a real-time multi-model data engine and a modern Python ecosystem running over Kubernetes.

Earlier today, Iguazio also announced having raised $24M from existing and new investors, including Samsung SDS and Kensington Capital Partners. The new funding will be used to drive future product innovation and support global expansion into new and existing markets.

About Iguazio

The Iguazio Data Science Platform enables enterprises to develop, deploy and manage AI applications at scale. With Iguazio, companies can run AI models in real time, deploy them anywhere (multi-cloud, on-prem or edge) and bring to life their most ambitious data-driven strategies. Enterprises spanning a wide range of verticals, including financial services, manufacturing, telecoms and gaming, use Iguazio to create business impact through a multitude of real-time use cases. Iguazio is backed by top financial and strategic investors including Samsung, Verizon, Bosch, CME Group, and Dell. The company is led by serial entrepreneurs and a diverse team of innovators in the USA, UK, Singapore and Israel. Find out more on http://www.iguazio.com

About Belocal

Since its inception in 2006, Belocal has experienced consistent and sustainable growth by developing strong long-term relationships with its technology partners and by providing tremendous value to its clients. We pride ourselves on delivering the most innovative technology solutions, enabling our customers to lead their market segments and stay ahead of the competition. Our ability to listen, our attention to detail and our expertise in innovation have enabled us to develop new solutions and services to suit the changing needs of our clients, and to acquire new business by tailoring all our solutions and services to the specific needs of each client.

Continue reading here:
Iguazio Deployed by Payoneer to Prevent Fraud with Real-time Machine Learning - Business Wire

Letter: Yes to Green Growth | Opinion – Southern Pines Pilot

Yes, yes and yes! to Pinehurst's conversation about green growth as reported by David Sinclair in his recent front-page article.

All communities in Moore County should be engaged in discussions about what value and vitality mean for the long term. Let's continue considering how we fit into a larger set of processes, including our natural environment, which provides us with priceless essentials like clean water and air.

Living alongside nature in deep partnership now will ensure better lives for the generations who follow us.

Greta Nintzel, Whispering Pines

Publisher's Note: This is a letter to the editor, submitted by a reader, and reflects the opinion of the author. The Pilot welcomes letters from readers on its Opinion page, which serves as a public forum. The Pilot is not in the business of suppressing public opinion. We are a forum for community debate, and publish almost every letter we receive. For information on how to make a submission, visit this page: https://www.thepilot.com/site/forms/online_services/letter/

See the original post here:
Letter: Yes to Green Growth | Opinion - Southern Pines Pilot

Diane Francis: Treating aging like a disease is the next big thing for science – Financial Post

LOS ANGELES -- Extending everyone's life in a healthy fashion is one of many goals held by Peter Diamandis, a space, technology, aeronautics and medicine pioneer. But the new field known as longevity is of interest to everyone.

"One hundred will be the new 60," he told his Abundance360 conference recently. "The average human health span will increase by 10+ years this decade."

He, like others in Silicon Valley, believes that aging is a disease and the result of planned obsolescence: the wearing down of, or damage to, certain critical mechanisms, sensors and functions within our bodies. Longevity research is about identifying the core problems in order to mitigate or reverse them.


The exponential technologies of artificial intelligence, machine learning and computational heft have been harnessed, resulting in breakthroughs and clinical trials that are just a handful of years away from deployment in human patients. The main areas of research include: stem cell supply restoration; regenerative medicine to regrow damaged cartilage, ligaments, tendons, bone, spinal cords and neural nerves; vaccine research against chronic diseases such as Alzheimer's; and work by United Therapeutics to tackle the human organ shortage by genetically engineering organs grown in pigs.

New tools are accelerating the development of new, tailor-made medicines at a fraction of today's costs. Alex Zhavoronkov of Insilico Medicine told the conference that drugs take 10 years and $3 billion to research, and 90 per cent fail. His company, however, can test candidates in 46 days using human tissue, then model, design and produce them in weeks with the help of advanced computing.

In regenerative medicine, advances appear to be arriving relatively soon. For instance, Diamandis asked the audience if anyone was awaiting a knee replacement operation and suggested they might be better off postponing it until 2021, when regenerative medicine innovator Samumed LLC in San Diego is expected to complete phase three clinical trials of cartilage regeneration.

Samumed's founder, Osman Kibar, said his company has successfully injected a protein that activates nearby stem cells into producing new cartilage in a knee or a new disc in a spine. Preliminary success has also occurred in regenerating muscle and neural cells, retinal cells, skin and hair. Not surprisingly, the private company just raised US$15.5 billion to continue research and product development.

Another hot area of early-stage research is called epigenetic reprogramming: identifying how to reverse deficiencies in proteins, stem cells, chromosomes, and the genes that repair DNA and damaged cells. A leader in this field is David Sinclair, professor of genetics at the Harvard Medical School, whose new book Lifespan: Why We Age and Why We Don't Have To explains the science and offers advice.

"Aging is a disease, and that disease is treatable," he said. As research progresses toward actual corrections or cures, there are also lifestyle habits that can slow down the aging process or avert damage. For instance, he said humans should replicate some of the behaviour their bodies were designed for. Obviously, exercise and sleep are necessary, but so is eating less often. "You should feel hungry regularly," he said.

Another useful condition to emulate is hormesis, a scientific term for what Nietzsche posited: that which does not kill us makes us stronger. Sinclair recommends stressing our bodies with temperature changes, such as going from a hot sauna to rolling in the snow. This invigorates the body's processes and cells.

There's also xenohormesis, or gaining benefits from eating plants that have been environmentally stressed and therefore contain more beneficial nutrients. For instance, drought-stressed or wild strawberries have better flavour, but they are also enhanced with additional antioxidant capacity and phenol content.

"The age of 100 is easily in sight now," said Diamandis. "And kids born today can expect to live to 105."

Financial Post

Read the original post:
Diane Francis: Treating aging like a disease is the next big thing for science - Financial Post

Brave Cashel Community School get the better of St Augustine’s to set-up a Munster hurling final clash with Doon – TipperaryLive.ie

Munster Under-19B Post-Primary Schools Hurling Championship Semi-Final: Cashel Community School 4-16 St Augustine's (Dungarvan) 3-16

Seán Treacy Park was the venue for this semi-final, with the conditions cool but dry. If the conditions were on the chilly side, the large crowd in attendance were not short of entertainment as both teams played out a dramatic game featuring thirty-nine scores and seven goals.

The first half was a tight and tense affair with both sides finding space and time on the ball hard to come by.

The Dungarvan school opened the scoring with points from the lively Willie Beresford and Johnny Bourke. Cashel responded with a free, but the Waterford men extended the lead back out to two with a fantastic score from Sam Fitzgerald. Cashel then hit the front with a flurry, scoring 1-2 during a three-minute blitz, the goal coming from Ben Ryan, who reacted quickest when an effort at a point came down off the upright.

These scores settled Cashel, but a pair of Niall Buckley frees saw Augustine's tie the game (1-2 to 0-5) with twenty-two minutes on the clock.

Both teams exchanged points from then until half-time; apart from a good point slotted by Stephen Browne, the scores all came from dead balls.

The teams headed for the dressing rooms all square (1-6 to 0-9).

SECOND HALF

Cashel opened the second half full of purpose and intent. The Tipperary men hit an uninterrupted 2-3 during a twelve-minute blitz, with the goals coming from Kevin Cleary and Conor O'Dwyer as well as a fine point from the dynamic Euan Ryan.

Daniel Moloney and O'Dwyer were now to the fore in Cashel's forward line, helped by the movement of Cathal Quinn, leaving the scoreline reading 3-9 to 0-11 after forty-four minutes.

The men from the south coast did not roll over, however, and crept back into the game thanks to points from Buckley and Burke as well as two green flags - the second coming from a 21-yard free which saw Buckley take his tally to 1-8. The sides were now level (3-11 to 2-14).

This Cashel team knew how to respond. A long-range free was batted out and the lightning-fast reactions of Stephen Browne saw him rattle the net to reassert his side's lead. O'Dwyer dissected the posts with a free to give Cashel a much-needed cushion going into the last ten minutes.

The drama did not end there, though, with Johnny Burke firing the game's seventh goal to claw his team back into the contest (4-13 to 3-16) with five minutes of normal time remaining.

It was from here on that the large crowd really saw the true character of this Cashel team as their never-say-die attitude came to the fore.

Cashel kicked for home as Murphy won another crucial free and sub Reuben Bourke made a powerful run to set up the final score, which sealed his team's passage to the Munster final.

Cashel had leaders all over the park, not least defenders Ben Loughman, Tomás Bourke, Brian Óg O'Dwyer, Conor Farrell and Callum Lawrence, who all contributed hugely. The athleticism and energy of Jamie Duncan were admired by all, while Jack Currivan in goal worked many of his puck-outs to build attacks for his team. Captain Lorcan Carr and Euan Ryan linked play between defence and attack while also working tirelessly. All six starting forwards finished up on the score sheet, which shows the variety of threat this team contains.

MUNSTER FINAL

Cashel Community School will take on Scoil na Tríonóide Naofa, Doon in the Munster Under-19B Post-Primary Schools Hurling Championship final on Saturday, February 15 at Leahy Park in Cashel, when the prestigious Corn Thomáis Mhic Choilm will be on the line.

The Doon team, which features eight Tipperary players from Cappawhite and Éire Óg Annacarty, got the better of Borrisokane Community College in the semi-finals (3-21 to 2-15), while Cashel Community School proved too strong for St Augustine's (4-16 to 3-16).

In the quarter-finals Cashel Community School accounted for Causeway Comprehensive (1-22 to 2-18), while the Tipperary outfit have also beaten Rice College, Ennis (1-17 to 0-8) and Coláiste Chríost Rí, Cork (7-21 to 0-5) during their campaign.

Meanwhile Scoil na Tríonóide Naofa, Doon proved too strong for Scoil Phobal Roscrea (1-32 to 4-20) in the quarter-finals and Abbey CBS (1-16 to 1-15) in an earlier round of the competition. The Limerick school did lose to St Augustine's (2-8 to 1-13) during the group phase, but re-grouped before surging into the provincial decider.

MATCH DETAILS

Cashel Community School: Jack Currivan (Golden-Kilfeacle), Conor Farrell (Knockavilla Kickhams), Tomás Bourke (Boherlahan-Dualla), Jamie Duncan (Knockavilla Kickhams), Lorcan Carr (Knockavilla Kickhams), Ben Loughman (Knockavilla Kickhams), Brian Óg O'Dwyer (Rockwell-Rosegreen), Euan Ryan (Boherlahan-Dualla), Callum Lawrence (Cashel King Cormacs), Ben Ryan (Knockavilla Kickhams), Daniel Moloney (Cashel King Cormacs), Stephen Browne (Knockavilla Kickhams), Cathal Quinn (Cashel King Cormacs), Conor O'Dwyer (Cashel King Cormacs), Kevin Cleary (Rockwell-Rosegreen). Subs: James Murphy (Boherlahan-Dualla) for B Ryan (40th), Adam Ryan (Rockwell-Rosegreen) for Cleary (50th), Reuben Bourke (Knockavilla Kickhams) for Carr (55th), Ned Ryan (Boherlahan-Dualla) for Farrell (60th). Panel Members: Ciarán Moroney (Fethard), Jack Breen (Knockavilla Kickhams), Michael O'Connor (Boherlahan-Dualla), Ben Currivan (Golden-Kilfeacle), David Sinclair (Golden-Kilfeacle), Páiric Brosnan (Cashel King Cormacs), Darragh Lacey (Boherlahan-Dualla), John Marnane (Rockwell-Rosegreen), Seán Ryan (Rockwell-Rosegreen), Eoghan Murphy (Cashel King Cormacs), Christopher Geraghty (Rockwell-Rosegreen), Ross Whelan (Cashel King Cormacs) and Micheál Quinlan (Fethard).

Continue reading here:
Brave Cashel Community School get the better of St Augustine's to set-up a Munster hurling final clash with Doon - TipperaryLive.ie

Federated machine learning is coming – here’s the questions we should be asking – Diginomica

A few years ago, I wondered how edge data would ever be useful given the enormous cost of transmitting all the data to either the centralized data center or some variant of cloud infrastructure. (It is said that 5G will solve that problem).

Consider, for example, applications of vast sensor networks that stream a great deal of data at small intervals. Vehicles on the move are a good example.

There is telemetry from cameras, radar, sonar, GPS and LIDAR, the latter at about 70MB/sec. This could quickly amount to four terabytes per day (per vehicle). How much of this data needs to be retained? Answers I heard a few years ago were along two lines:

My counterarguments at the time were:

Introducing TensorFlow federated, via The TensorFlow Blog:

This centralized approach can be problematic if the data is sensitive or expensive to centralize. Wouldn't it be better if we could run the data analysis and machine learning right on the devices where that data is generated, and still be able to aggregate together what's been learned?

Since I looked at this a few years ago, the distinction between an edge device and a sensor has more or less disappeared. Sensors can transmit via wifi (though there is an issue of battery life, and if they're remote, that's a problem); the definition of the edge has widened quite a bit.

Decentralized data collection and processing have become more powerful and able to do an impressive amount of computing. A case in point is Intel's Neural Compute Stick 2, a computer vision and deep learning accelerator powered by the Intel Movidius Myriad X VPU, which can plug into a Raspberry Pi for less than $70.

But for truly distributed processing, the Apple A13 chipset in the iPhone 11 has a few features that boggle the mind. From "Inside Apple's A13 Bionic system-on-chip": a Neural Engine, a custom block of silicon separate from the CPU and GPU, focused on accelerating machine learning computations; plus a CPU with a set of "machine learning accelerators" that perform matrix multiplication operations up to six times faster than the CPU alone. It's not clear exactly how this hardware is accessed, but for tasks like machine learning (ML) that use lots of matrix operations, the CPU is a powerhouse. Note that this matrix multiplication hardware is part of the CPU cores and separate from the Neural Engine hardware.

This raises the question: why would a smartphone have neural net and machine learning capabilities, and does that have anything to do with the data transmission problem at the edge? A few years ago, I thought the idea wasn't feasible, but the capability of distributed devices has accelerated. How far-fetched is this?

Let's roll the clock back thirty years. The finance department of a large diversified organization would prepare, in the fall, a package of spreadsheets for every part of the organization that had budget authority. The sheets would start with low-level detail, official assumptions, etc., until they all rolled up to a small number of summary sheets that were submitted to headquarters. This was a terrible, cumbersome way of doing things, but it does, in a way, presage the concept of federated learning.

Another idea that vanished is push technology, which imposed the same network load as centralizing sensor data, just in the opposite direction. About twenty-five years ago, when everyone had a networked PC on their desk, the PointCast Network used push technology. Still, it did not perform as well as expected, often believed to be because its traffic burdened corporate networks with excessive bandwidth use, and it was banned in many places. If federated learning works, those problems have to be addressed.

Though this estimate changes every day, there are 3 billion smartphones in the world and 7 billion connected devices. You can almost hear the buzz in the air of all that data constantly flying around. The canonical image of ML is that all of that data needs to find a home somewhere so that algorithms can crunch through it to yield insights. There are a few problems with this, especially if the data is coming from personal devices such as smartphones, Fitbits, even smart homes.

Moving highly personal data across the network raises privacy issues. It is also costly to centralize this data at scale. Storage in the cloud is asymptotically approaching zero in cost, but the transmission costs are not: both the local WiFi (or even cellular) hop from the devices and the long-distance transmission from local collectors to the central repository. This is all very expensive at this scale.

Suppose large-scale AI training could be done on each device, bringing the algorithm to the data rather than vice versa. Each device could then contribute to a broader application without having to send its data over the network. This idea has become respectable enough that it has a name: Federated Learning.

Jumping ahead, there is no controversy here: degrading device performance and user experience, or compressing a model and settling for lower accuracy, are not acceptable trade-offs. In Federated Learning: The Future of Distributed Machine Learning:

To train a machine learning model, traditional machine learning adopts a centralized approach that requires the training data to be aggregated on a single machine or in a datacenter. This is practically what giant AI companies such as Google, Facebook, and Amazon have been doing over the years. This centralized training approach, however, is privacy-intrusive, especially for mobile phone users. To train or obtain a better machine learning model under such a centralized training approach, mobile phone users have to trade their privacy by sending their personal data stored inside phones to the clouds owned by the AI companies.

The federated learning approach decentralizes training across mobile phones dispersed across geographies. The presumption is that they collaboratively develop a machine learning model while keeping all the personal data on the phone, for example, to build a general-purpose recommendation engine for music listeners. But while the personal data and personal information are retained on the phone, I am not at all comfortable that the data contained in the result sent to the collector cannot be reverse-engineered - and I haven't heard a convincing argument to the contrary.

Here is how it works. A computing group is a collection of mobile devices that have opted in to a large-scale AI program. Each device is "pushed" a model, executes it locally, and learns as the model processes the data. There are some alternatives to this: homogeneous models imply that every device works with the same schema of data, while with heterogeneous models, harmonization of the data happens in the cloud.

Here are some questions in my mind.

Here is the fuzzy part: federated learning sends the results of the learning, as well as some operational detail such as model parameters and their corresponding weights, back to the cloud. How does it do that while preserving your privacy and not clogging up your network? The answer is that the results are a fraction of the data, and since the data itself is no more than a few GB, that seems plausible. The results sent to the cloud can be encrypted with, for example, homomorphic encryption (HE). An alternative is to send the data as a tensor, which is not encrypted because it is not understandable by anything but the algorithm. The update is then aggregated with other user updates to improve the shared model. Most importantly, all the training data remains on the user's devices.
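The aggregation step described above is, in its canonical form, federated averaging (FedAvg): each device sends back only its locally updated weights, and the server blends them, weighting each client by how much data it trained on. Below is a minimal server-side sketch, with the model reduced to a bare weight vector and no encryption layer (both simplifying assumptions on my part):

```python
import numpy as np

def federated_average(client_updates):
    """FedAvg server step: blend the weight vectors returned by devices,
    weighting each client by its number of local training samples.
    Only these small weight arrays cross the network -- never raw data."""
    total = sum(n for _, n in client_updates)
    return sum((n / total) * w for w, n in client_updates)

# Toy round: three phones trained the shared model on different amounts
# of local data and report back (weights, sample_count).
updates = [
    (np.array([0.9, 0.1, 0.0, 0.2]), 100),
    (np.array([1.1, 0.0, 0.1, 0.3]), 300),
    (np.array([1.0, 0.2, 0.0, 0.1]), 600),
]
new_global_weights = federated_average(updates)
print(new_global_weights)  # [1.02 0.13 0.03 0.17]
```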

In CDO Review, The Future of AI May Be In Federated Learning:

Federated Learning allows for faster deployment and testing of smarter models, lower latency, and less power consumption, all while ensuring privacy. Also, in addition to providing an update to the shared model, the improved (local) model on your phone can be used immediately, powering experiences personalized by the way you use your phone.

There is a lot more to say about this. The privacy claims are a little hard to believe: when an algorithm is pushed to your phone, it is easy to imagine how this can backfire. Even the tensor representation can create a problem. An indirect reference to real data may be secure on its own, but patterns can surely emerge across an extensive collection.

Read more here:
Federated machine learning is coming - here's the questions we should be asking - Diginomica