DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Global Analysis by Diagnostic Tool; Application; End User; Service; and Geography" report has been added to ResearchAndMarkets.com's offering.
The global artificial intelligence (AI) in healthcare diagnosis market was valued at US$ 3,639.02 million in 2019 and is projected to reach US$ 66,811.97 million by 2027; it is expected to grow at a CAGR of 44% during 2020-2027.
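As a quick sanity check on the headline growth figure, the compound annual growth rate implied by the report's own 2019 and 2027 endpoints can be reproduced in a few lines of Python (an illustrative calculation, not part of the report itself):

```python
# Reproduce the implied CAGR from the 2019 valuation and the 2027 projection.
start_value = 3_639.02    # US$ million, 2019
end_value = 66_811.97     # US$ million, 2027 (projected)
years = 8                 # 2019 -> 2027

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 44%, consistent with the report
```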
The growth of the market is mainly attributed to factors such as the rising adoption of AI in disease identification and diagnosis and increasing investments in AI healthcare startups. However, the lack of a skilled workforce and ambiguity in regulatory guidelines for medical software are the factors hindering the growth of the market.
Artificial intelligence in healthcare is one of the most significant technological advancements in medicine so far. The involvement of multiple startups in the development of AI-driven imaging and diagnostic solutions is a major factor contributing to the growth of the market. China, the US, and the UK are emerging as popular hubs for healthcare innovation.
Additionally, the British government has announced the establishment of a National Artificial Intelligence Lab that would collaborate with the country's universities and technology companies to conduct research on cancer, dementia, and heart disease. UK-based startups have benefited from the government's robust library of patient data, as British citizens share their anonymous healthcare data with the British National Health Service. As a result, the number of artificial intelligence startups in the healthcare sector has grown significantly in the past few years, and the trend is expected to continue in the coming years.
Based on diagnostic tool, the global artificial intelligence in healthcare diagnosis market is segmented into medical imaging tool, automated detection system, and others. The medical imaging tool segment held the largest share of the market in 2019, and the market for automated detection system is expected to grow at the highest CAGR during the forecast period.
Based on application, the global artificial intelligence in healthcare diagnosis market is segmented into eye care, oncology, radiology, cardiovascular, and others. The oncology segment held the largest share of the market in 2019, and the radiology segment is expected to register the highest CAGR during the forecast period.
Based on service, the global artificial intelligence in healthcare diagnosis market is segmented into tele-consultation, tele-monitoring, and others. The tele-consultation segment held the largest share of the market in 2019; however, the tele-monitoring segment is expected to register the highest CAGR during the forecast period.
Based on end-user, the global artificial intelligence in healthcare diagnosis market is segmented into hospital and clinic, diagnostic laboratory, and home care. The hospital and clinic segment held the highest share of the market in 2019 and is expected to register the highest CAGR in the market during the forecast period.
Key Topics Covered
1. Introduction
1.1 Scope of the Study
1.2 Report Guidance
1.3 Market Segmentation
1.3.1 Artificial Intelligence in Healthcare Diagnosis Market - By Diagnostic Tool
1.3.2 Artificial Intelligence in Healthcare Diagnosis Market - By Application
1.3.3 Artificial Intelligence in Healthcare Diagnosis Market - By Service
1.3.4 Artificial Intelligence in Healthcare Diagnosis Market - By End User
1.3.5 Global Artificial Intelligence in Healthcare Diagnosis Market - By Geography
2. Artificial Intelligence in Healthcare Diagnosis Market - Key Takeaways
3. Research Methodology
3.1 Coverage
3.2 Secondary Research
3.3 Primary Research
4. Artificial Intelligence in Healthcare Diagnosis Market - Market Landscape
4.1 Overview
4.2 PEST Analysis
4.2.1 North America - PEST Analysis
4.2.2 Europe - PEST Analysis
4.2.3 Asia-Pacific - PEST Analysis
4.2.4 Middle East & Africa - PEST Analysis
4.2.5 South & Central America - PEST Analysis
4.3 Expert Opinion
5. Artificial Intelligence in Healthcare Diagnosis Market - Key Market Dynamics
5.1 Market Drivers
5.1.1 Rising Adoption of Artificial Intelligence (AI) in Disease Identification and Diagnosis
5.1.2 Increasing Investment in AI Healthcare Start-ups
5.2 Market Restraints
5.2.1 Lack of skilled AI Workforce and Ambiguous Regulatory Guidelines for Medical Software
5.3 Market Opportunities
5.3.1 Increasing Potential in Emerging Economies
5.4 Future Trends
5.4.1 AI in Epidemic Outbreak Prediction and Response
5.5 Impact Analysis
6. Artificial Intelligence in Healthcare Diagnosis Market - Global Analysis
6.1 Global Artificial Intelligence in Healthcare Diagnosis Market Revenue Forecast and Analysis
6.2 Global Artificial Intelligence in Healthcare Diagnosis Market, By Geography - Forecast and Analysis
6.3 Market Positioning of Key Players
7. Artificial Intelligence in Healthcare Diagnosis Market - By Diagnostic Tool
7.1 Overview
7.2 Artificial Intelligence in Healthcare Diagnosis Market Revenue Share, by Diagnostic Tool (2019 and 2027)
7.3 Medical Imaging Tool
7.4 Automated Detection System
7.5 Others
8. Artificial Intelligence in Healthcare Diagnosis Market Analysis, By Application
8.1 Overview
8.2 Artificial Intelligence in Healthcare Diagnosis Market Revenue Share, by Application (2019 and 2027)
8.3 Eye Care
8.4 Oncology
8.5 Radiology
8.6 Cardiovascular
8.7 Others
9. Artificial Intelligence in Healthcare Diagnosis Market - By End-User
9.1 Overview
9.2 Artificial Intelligence in Healthcare Diagnosis Market, by End-User, 2019 and 2027 (%)
9.3 Hospital and Clinic
9.4 Diagnostic Laboratory
9.5 Home Care
10. Artificial Intelligence in Healthcare Diagnosis Market - By Service
10.1 Overview
10.2 Artificial Intelligence in Healthcare Diagnosis Market, by Service, 2019 and 2027 (%)
10.3 Tele-Consultation
10.4 Tele-Monitoring
10.5 Others
11. Artificial Intelligence in Healthcare Diagnosis Market - Geographic Analysis
11.1 North America: Artificial Intelligence in Healthcare Diagnosis Market
11.2 Europe: Artificial Intelligence in Healthcare Diagnosis Market
11.3 Asia-Pacific: Artificial Intelligence in Healthcare Diagnosis Market
11.4 Middle East and Africa: Artificial Intelligence in Healthcare Diagnosis Market
11.5 South & Central America: Artificial Intelligence in Healthcare Diagnosis Market
12. Impact of COVID-19 Pandemic on Global Artificial Intelligence in Healthcare Diagnosis Market
12.1 North America: Impact Assessment of COVID-19 Pandemic
12.2 Europe: Impact Assessment of COVID-19 Pandemic
12.3 Asia-Pacific: Impact Assessment of COVID-19 Pandemic
12.4 Rest of the World: Impact Assessment of COVID-19 Pandemic
13. Artificial Intelligence in Healthcare Diagnosis Market - Industry Landscape
13.1 Overview
13.2 Growth Strategies Done by the Companies in the Market (%)
13.3 Organic Developments
13.4 Inorganic Developments
14. Company Profiles
14.1 General Electric Company
14.2 Aidoc
14.3 Arterys Inc.
14.4 icometrix
14.5 IDx Technologies Inc.
14.6 MaxQ AI Ltd.
14.7 Caption Health, Inc.
14.8 Zebra Medical Vision, Inc.
14.9 Siemens Healthineers AG
A unique data-driven scientific approach to study and predict excessive drinking using social media and mobile-phone data has won Andrew Schwartz, assistant professor in the Department of Computer Science, and his team a $2.5M award from the National Institutes of Health (NIH). Their research will develop an innovative approach utilizing data from texting and social media, as well as mobile phone apps, to better understand how unhealthy drinking manifests in daily life and push the envelope for the ability of artificial intelligence (AI) to predict human behaviors.
The cross-disciplinary team will build AI models that predict future drinking, as well as the future effects of drinking on an individual's mood, among service industry workers. The award will be distributed over four years in collaboration with Richard Rosenthal, MD, Director of Addiction Psychiatry at Stony Brook Medicine, Christine DeLorenzo, Associate Professor in the Departments of Biomedical Engineering and Psychiatry, and a team at the University of Pennsylvania.
"Andy is blazing the trail in advancing AI tools for tackling major health challenges," said Fotis Sotiropoulos, Dean of the College of Engineering and Applied Sciences. "His work is an ingenious approach using data-science tools, smartphones and social media postings to identify early signs of alcohol abuse and alcoholism and guide interventions. This is AI-driven discovery and innovation at its very best!"
With the aid of the team's methods, social media content can be collected and analyzed faster and more cheaply, and it presents an unscripted, less biased psychological assessment. Traditional ecological studies of unhealthy drinking are done via costly and time-consuming phone interviews, which can also be subject to poor data quality and biases. Schwartz and his team will use their novel AI-based approach over social media text content. Samir Das, Chair of the Department of Computer Science, said: "Andrew has very successfully applied large-scale data and text analysis techniques for important and timely human health and well-being applications with very impactful results."
"We now know analysis of everyday language can cover a wide array of daily factors affecting individual health, but its use over long timespans is limited. The methods we will develop in this project should enable real-time study into how health plays out in each individual's own words and bring about the possibility of personalized mental health care," said Schwartz.
The technology will be developed with a focus on a population of frontline restaurant workers (bartenders and servers), a group that has among the highest rates of heavy alcohol use of all professions. This unhealthy drinking (defined as seven drinks a week for women and 14 for men, according to the National Institute on Alcohol Abuse and Alcoholism) creates the potential for extensive negative consequences related to work performance, relationships, and physical and psychological health. For example, the team will look at the effect of empathy, as measured through language. Psychologists on the team note that empathy can be both health-promoting (beneficial) and health-threatening (depleting). Distinguishing beneficial from depleting empathy is an example of something AI can capture that is difficult to get at through questions. It is also a dimension of human psychology suspected to play a role in stress on servers and bartenders, since they often listen to customers' problems and provide advice, which could have a negative effect on them.
The team's research will include development of:
Schwartz's research has also focused on the use of social media to predict mental and physical health issues. He is also using Twitter to study COVID-19, and before that he focused on depression and social media posts.
About the Researcher: Andrew Schwartz is an Assistant Professor in the Department of Computer Science and a faculty member of the Institute for AI-Driven Discovery and Innovation. His research utilizes natural language processing and machine learning techniques to focus on large-scale language analysis for health and the social sciences.
Machine learning and artificial intelligence (AI) are not just tools for streamlining customer engagement. They represent an opportunity for companies to completely rethink how they build context around each individual, ultimately creating a better experience and a more loyal customer.
By tapping into the potential of these new technologies, brands can cost-effectively enable and empower sophisticated, relevant types of two-way communications with existing and potential customers. These technologies also change how brands can use digital channels to reimagine their essential communications such as bills, statements, tax documents, and other important information. Traditionally, these touchpoints were static, generic mailings created to address the widest audience. They lacked personalization and relevance. They were informational, but not engaging.
An important brand message no longer needs to be a common document that looks the same to everyone who receives it. Personalized essential communications bring additional value to the customer. Brands can offer more than a sum-total utility or service bill by including a customer's usage compared to the previous billing period or year and providing tips on how to cut down on their use and cost. But this is just the tip of the iceberg.
With some strategic planning and investment, businesses can tap into machine learning and AI to transform customer communications by following these four steps:
Consumer expectations for how they engage with brands continue to evolve. Customers now demand that every piece of communication is tailored to their individual needs and preferences. These technologies enable brands to meet and exceed these constantly rising expectations.
It's no surprise that first on the list of Gartner's top 10 strategic technology trends for 2017 is AI and advanced machine learning. These solutions can sift through, analyze, and respond to volumes of data at a speed no number of humans can rival. As these technologies continue to advance, they have the power to benefit both companies and the customers they serve. They represent an opportunity for companies to completely rethink how they build context around each individual to create a better experience and a more loyal customer.
How will you embrace these new technological innovations to catapult your business into the next year and beyond?
Rob Krugman is the Chief Data Officer at Broadridge, a customer communication and data analytics company.
4 tips for transforming your customer communications with AI - VentureBeat
Beware the hyperbole surrounding artificial intelligence and how far it has progressed.
Great claims are being made for artificial intelligence, or AI, these days.
Amazon's Alexa, Google's assistant, Apple's Siri, Microsoft's Cortana: these are all cited as examples of AI. Yet speech recognition is hardly new: we have seen steady improvements in commercial software like Dragon for 20 years.
Recently we have seen a series of claims that AI, with new breakthroughs like "deep learning", could displace 2 million or more Australian workers from their jobs by 2030.
Similar claims have been made before.
I was fortunate to discuss AI with a philosopher, Julius Kovesi, in the 1970s as I led the team that eventually developed sheep-shearing robots. With great insight, he argued that robots, in essence, were built on similar principles to common toilet cisterns and were nothing more than simple automatons.
"Show me a robot that deliberately tells you a lie to manipulate your behaviour, and then I will accept you have artificial intelligence!" he exclaimed.
That's the last thing we wanted in a sheep-shearing robot, of course.
To understand future prospects, it's helpful to see AI as just another way of programming digital computers. That's all it is, for the time being.
We have been learning to live with computers for many decades. Gradually, we are all becoming more dependent on them and they are getting easier to use. Smartphones are a good example.
Our jobs have changed as a result, and will continue to change.
Smartphones can also disrupt sleep and social lives, but so can many other things too. Therefore, claims that we are now at "a convergence" where AI is going to fundamentally change everything are hard to accept.
We have seen several surges in AI hyperbole. In the 1960s, machine translation of natural language was "just two or three years away". And we still have a long way to go with that one. In the late 1970s and early 1980s, many believed forecasts that 95 per cent of factory jobs would be eliminated by the mid-1990s. And we still have a long way to go with that one too. The "dot com, dot gone" boom of 2001 saw another surge. Disappointment followed each time as claims faded in the light of reality. And it will happen again.
Self-driving cars will soon be on our streets, thanks to decades of painstaking advances in sensor technology, computer hardware and software engineering. They will drive rather slowly at first, but will steadily improve with time. You can call this AI if you like, but it does not change anything fundamental.
The real casualty in all this hysteria is our appreciation of human intelligences ... plural. For artificial intelligence has only replicated performances like masterful game playing and mathematical theorem proving, or even legal and medical deduction. These are performances we associate with intelligent people.
Consider performances easily mastered by people we think of as the least intelligent, like figuring out what is and is not safe to sit on, or telling jokes. Cognitive scientists are still struggling to comprehend how we could begin to replicate these performances.
Even animal intelligence defies us, as we realised when MIT scientists perfected an artificial dog's nose sensitive enough to detect TNT vapour from buried landmines. When tested in a real minefield, this device detected TNT everywhere and the readings appeared to be unrelated to the actual locations of the mines. Yet trained mine detection dogs could locate the mines in a matter of minutes.
To appreciate this in a more familiar setting, imagine a party in a crowded room. One person lights up a cigarette and, to avoid being ostracised, keeps it hidden in an ashtray under a chair. Everyone in the room soon smells the cigarette smoke but no one can sense where it's coming from. Yet a trained dog would find it in seconds.
There is speculation that quantum computers might one day provide a real breakthrough in AI. At the moment, however, experiments with quantum computers are at much the same stage as Alan Turing was when he started tinkering with relays in the 1920s. There's still a long way to go before we will know whether these machines will tell deliberate lies.
In the meantime it might be worth asking whether the current surge of interest in AI is being promoted by companies like Google and Facebook in a deliberate attempt to seduce investors. Then again, it might just be another instance of self-deception group-think.
James Trevelyan is emeritus professor in the School of Mechanical and Chemical Engineering at the University of Western Australia.
When robots learn to lie, then we worry about AI - The Australian Financial Review
This week for our Vergecast interview series, Verge editor-in-chief Nilay Patel chats with Microsoft chief technology officer Kevin Scott about his new book, Reprogramming the American Dream: From Rural America to Silicon Valley - Making AI Serve Us All.
Scott's book tackles how artificial intelligence and machine learning can help rural America in a more grounded way, from employment to education to public health. In one chapter of his book, Scott focuses on how AI can assist with health care and diagnostic issues, a prominent concern in the US today, especially during the COVID-19 pandemic.
In the interview, Scott refocuses the solutions he describes in the book around the current crisis, specifically the supercomputers Microsoft has been using to train natural language processing models, which are now being used to search for vaccine targets and therapies for the novel coronavirus.
Below is a lightly edited excerpt of the conversation.
So let's talk about health care, because it's something you do focus on in the book. It's a particularly poignant time to talk about health care. How do you see AI helping broadly with health care, and then more specifically with the current crisis?
I think there are a couple of things going on.
One, I think, is a trend that I wrote about in the book and that is just getting more obvious every day: we need to do more. So that particular thing is that if our objective as a society is to get higher-quality, lower-cost health care to every human being who needs it, I think the only way that you can accomplish all three of those goals simultaneously is if you use some form of technological disruption.
And I think AI can be exactly that thing. And you're already seeing an enormous amount of progress on the AI-powered diagnostics front. And just going into the crisis that we're in right now, one of the interesting things that a bunch of folks are doing (including, I think I read a story about it, the Chan Zuckerberg Initiative) is this idea: if you have ubiquitous biometric sensing, like you've got a smartwatch or a fitness band or maybe something even more complicated that can sort of read off your heart-tick data, that can look at your body temperature, that can measure the oxygen saturation in your blood, that can basically get a biometric readout of how your body's performing. And it's sort of capturing that information over time. We can build diagnostic models that can look at those data and determine whether or not you're about to get sick and sort of predict with reasonable accuracy what's going on and what you should do about it.
Like, you can't have a cardiologist following you around all day long. There aren't enough cardiologists in the world even to give you a good cardiological exam at your annual checkup.
I think this isn't a far-fetched thing. There is a path forward here for deploying this stuff on a broader scale. And it will absolutely lower the cost of health care and help make it more widely available. So that's one bucket of things. The other bucket of things is just some mind-blowing science that gets enabled when you intersect AI with the leading-edge stuff that people are doing in the biosciences.
Give me an example.
So, two things that we have done relatively recently at Microsoft.
One is one of the big problems in biology that we've had, one that immunologists have been studying for years and years and years: whether or not you could take a readout of your immune system by looking at the distribution of the types of T-cells that are active in your body and, from that profile, determine what illnesses your body may be actively dealing with. What is it prepared to deal with? Like, what might you have recently had?
And that has been a hard problem to figure out because, basically, you're trying to build something called a T-cell receptor antigen map. And now, with our sequencing technology, we have the ability to get the profile, so you can sort of see what your immune system is doing. But we have not yet figured out how to build that mapping of the immune system profile to diseases.
Except we're partnering with this company called Adaptive that is doing really great work with us, bolting machine learning onto this problem to try to figure out what the mapping actually looks like. We are rushing out right now a serologic test, a blood test, that we hope we'll be able to use to tell you whether or not you have had a COVID-19 infection.
So I think it's mostly going to be useful for understanding the spread of the disease. I don't think it's going to be as good a diagnostic test as a nasal swab and one of the sequence-based tests that are getting pushed out there. But it's really interesting. And the implications are not just for COVID-19: if you are able to better understand that immune system profile, the therapeutic benefits of that are just absolutely enormous. We've been trying to figure this out for decades.
The other thing that we're doing is, when you're thinking about SARS-CoV-2 (which is the virus that causes COVID-19 that is raging through the world right now), we have never in human history had a better understanding of a virus and how it is attacking the body. And we've never had a better set of tools for precision engineering, potential therapies, and vaccines for this thing. And part of that engineering process is using a combination of simulation and machine learning and these cutting-edge techniques of biosciences in a way where you're sort of leveraging all three at the same time.
So we've got this work that we're doing with a partner right now where I have taken a set of supercomputing clusters that we have been using to train natural language processing, deep neural networks, just massive scale. And those clusters are now being used to search for vaccine targets and therapies for SARS-CoV-2.
We're one among a huge number of people who are very quickly searching for both therapies and potential vaccines. There are reasons to be hopeful, but we've got a way to go.
But it's just unbelievable to me to see how these techniques are coming together. And one of the things that I'm hopeful about as we deal with this current crisis and think about what we might be able to do on the other side of it is it could very well be that this is the thing that triggers a revolution in the biological sciences and investment in innovation that has the same sort of decades-long effect that the industrialization push around World War II had in the '40s, which basically built our entire modern world.
Yeah, that's what I keep coming back to, this idea that this is a reset on a scale that very few people living today have ever experienced.
And you said out of World War II, a lot of basic technology was invented, deployed, refined. And now we kind of get to layer in things like AI in a way that is, quite frankly, remarkable. I do think, I mean, it sounds like we're going to have to accept that Cortana might be a little worse at natural language processing while you search for the protein surfaces. But I think it's a trade most people would make.
[Laughs] I think that's the right trade-off.
Microsofts CTO explains how AI can help health care in the US right now - The Verge
At the last Democratic presidential debate, the technologist candidate Andrew Yang emphatically declared that we are in the process of potentially losing the AI arms race to China right now. As evidence, he cited Beijing's access to vast amounts of data and its substantial investment in research and development for artificial intelligence. Yang and others (most notably the National Security Commission on Artificial Intelligence, which released its interim report to Congress last month) are right about China's current strengths in developing AI and the serious concerns this should raise in the United States. But framing advances in the field as an arms race is both wrong and counterproductive. Instead, while being clear-eyed about China's aggressive pursuit of AI for military use and human rights-abusing technological surveillance, the United States and China must find their way to dialogue and cooperation on AI. A practical, nuanced mix of competition and cooperation would better serve U.S. interests than an arms race approach.
AI is one of the great collective Rorschach tests of our times. Like any topic that captures the popular imagination but is poorly understood, it soaks up the zeitgeist like a sponge.
It's no surprise, then, that as the idea of great-power competition has reengulfed the halls of power, AI has gotten caught up in the race narrative. China, Americans are told, is barreling ahead on AI, so much so that the United States will soon be lagging far behind. Like the fears that surrounded Japan's economic rise in the 1980s or the Soviet Union in the 1950s and 1960s, anxieties around technological dominance are really proxies for U.S. insecurity about its own economic, military, and political prowess.
Yet as a technology, AI does not naturally lend itself to this framework and is not a strategic weapon. Despite claims that AI will change nearly everything about warfare, and notwithstanding its ultimate potential, for the foreseeable future AI will likely only incrementally improve existing platforms, unmanned systems such as drones, and battlefield awareness. Ensuring that the United States outpaces its rivals and adversaries in the military and intelligence applications of AI is important and worth the investment. But such applications are just one element of AI development and should not dominate the United States' entire approach.
The arms race framework raises the question of what one is racing toward. Machine learning, the AI subfield of greatest recent promise, is a vast toolbox of capabilities and statistical methods, a bundle of technologies that do everything from recognizing objects in images to generating symphonies. It is far from clear what exactly would constitute winning in AI, or even being better at a national level.
The National Security Commission is absolutely right that developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. U.S. leadership in AI is imperative. Leading, however, does not mean winning. Maintaining superiority in the field of AI is necessary but not sufficient. True global leadership requires proactively shaping the rules and norms for AI applications, ensuring that the benefits of AI are distributed worldwide, broadly and equitably, and stabilizing great-power competition that could lead to catastrophic conflict.
That requires U.S. cooperation with friends and even rivals such as China. Here, we believe that important aspects of the National Security Commission on AI's recent report have gotten too little attention.
First, as the commission notes, official U.S. dialogue with China and Russia on the use of AI in nuclear command and control, AI's military applications, and AI safety could enhance strategic stability, like arms control talks during the Cold War. Second, collaboration on AI applications by Chinese and American researchers, engineers, and companies, as well as bilateral dialogue on rules and standards for AI development, could help buffer the competitive elements of an increasingly tense U.S.-Chinese relationship.
Finally, there is a much higher bar to sharing core AI inputs such as data and software and building AI for shared global challenges if the United States sees AI as an arms race. Although commercial and military applications for AI are increasing, applications for societal good (addressing climate change, improving disaster response, boosting resilience, preventing the emergence of pandemics, managing armed conflict, and assisting in human development) are lagging. These would benefit from multilateral collaboration and investment, led by the United States and China.
The AI arms race narrative makes for great headlines, but the unbridled U.S.-Chinese competition it implies risks pushing the United States and the world down a dangerous path. Washington and Beijing should recognize the fallacy of a generalized AI arms race in which there are no winners. Instead, both should lead by leveraging the technology to spur dialogue between them and foster practical collaboration to counter the many forces driving them apart, benefiting the whole world in the process.
Artificial intelligence has come to mean more to retailers than just chatbots and recommendation engines. AI has found its place as a critical tool, particularly for companies managing a fleet of stores.
That's certainly the case for Puma and PVH Corp., which have both come to rely on the technology in the critical months of the coronavirus pandemic.
Katie Darling, Puma's vice president of merchandising, and Kate Nadolny, senior vice president of business strategy and innovation at PVH, weighed in on how their companies used AI to improve forecasting and merchandising during a tough year that has vexed brick-and-mortar retail like no other.
Nadolny explained to host Prashant Agrawal, of Impact Analytics, that when PVH Corp. began its journey with AI forecasting a couple of years ago, the company wasn't entirely sure about what it would entail.
"We identified the clear need and opportunity for us to be smarter about how we're making our forecasting and prediction decisions," she said. "But we weren't really sure about what the tools and capabilities were that we needed."
As the parent company of Van Heusen, Tommy Hilfiger, Calvin Klein, Izod, Geoffrey Beene and more, PVH had more to weigh than a chain of single-branded stores. It was dealing with a multibrand portfolio, with different target customers and locations, bringing an extra layer of complexity. But it's the sort of challenge that AI meets head on.
The company first looked to brainstorm with tech partners over areas like assortment and allocation, but COVID-19 quickly changed the priorities.
Suddenly, Nadolny said, PVH realized that it needed to make swift decisions, as its stores contended with varying rules that meant some stores closed while others remained open or swung between the two, as infection rates changed. Meanwhile, the rules of retail were being rewritten, as consumer behaviors morphed.
This was the period when stores were exploring curbside pickup and retailers that never before offered appointment shopping suddenly raced to meet new customer expectations. And such services may not work equally well in all areas, particularly in regions hard hit by the economic downturn, or perhaps work best for certain product categories or customer segments, which can vary by store.
Nadolny explained that when it comes to knowing where and how to shift inventories or change pricing in real time, AI is simply faster at crunching the data and pulling insights than humans are.
Darling agreed. She discovered at Puma that granular planning, down to the store level, across multiple doors is impossible without artificial intelligence. "Additionally, it can find patterns you're not looking for," she explained, especially when compared to the way people dig through spreadsheets.
It's not only slower, but also less efficient at identifying critical insights.
If one product sells out, what's the next best item in stock that can fill the gap? If a certain item performs well, but what's really leading sales are the smaller sizes in that particular style or stockkeeping unit, could a human staffer pinpoint that? In a normal year, such questions would point to missed opportunities, but in 2020, those insights can determine survival.
"The idea of using artificial intelligence to help us make smarter decisions, whether it be at a category level, a collection level or even down to a size level, is really important," said Darling.
But before AI can be really useful in the retail setting, or in any setting, the fundamentals need to be in place. Nadolny pointed out that AI initiatives need to start out with good data, which was one of PVH's biggest early challenges.
As the saying goes, "Garbage in, garbage out," she said. "So how do you make sure that your data is right? That attributes are right, that the information that we have is correct and aligned? So while we have quite a bit of data that's very, very useful for us, it's not always in the same place, in the same structure, in the same format."
"As we start to move towards being able to better utilize these types of tools, internally we're spending a lot of time focused on the clarity of our data governance and data structure, so we can therefore take that information and utilize that appropriately in the tool," she added.
The human element also remains important, Nadolny noted, in that staff should have appropriate training on how to best use the tools for the business.
The process could be a challenge for the humans, Nadolny acknowledged. It can even feel like a loss of control, but ideally they'll come to see and appreciate the tech. "The machine can really learn more quickly and adapt to what's happening in the space more so than we can in our Excel-based toolset that we have today," she said.
Puma, PVH Corp. on AI and Forecasting, Merchandising in the COVID-19 Era - WWD
Augmented Intelligence
The nasal test for Covid-19 requires a nurse to insert a 6-inch long swab deep into your nasal passages. The nurse inserts this long-handled swab into both of your nostrils and moves it around for 15 seconds.
Now, imagine that your nurse is a robot.
A few months ago, a nasal swab robot was developed by Brain Navi, a Taiwanese startup. The company's intent was to minimize the spread of infection by reducing staff-patient contact. So, here we have a robot autonomously navigating the probe down into your throat, and carefully avoiding channels that lead up to the eyes.
The robot is supposed to be safe. But many patients would, understandably, be terrified.
Unfortunately, enterprise applications of artificial intelligence (AI) are often no less misguided. Today, AI has picked up remarkable capabilities. It's better than humans in tasks such as voice and image recognition, across disciplines from audio transcription to games.
But does this mean we should simply hand over the reins to machines and sit back? Not quite.
You need humans to make your AI solutions more effective, acceptable, and humane for your users. That's when they will be adopted and deliver ROI for your organization. When AI and humans combine forces, the whole can be greater than the sum of its parts.
This is called augmented intelligence.
Here are 4 reasons why you need augmented intelligence to transform your business:
A large computer manufacturer wanted to find out what made its customers happy. Gramener, a company providing data science solutions, analyzed tens of thousands of comments from the client's bi-annual voice of customer (VoC) survey. A key step in this text analytics process was to find out what the customers were talking about. Were they worried about billing or after-sales service?
The team used AI language models to classify comments into the right categories. The algorithm delivered an average accuracy of over 90%, but the business users weren't happy. While the algorithm aced most categories, there were a few where it stumbled, at around 60% accuracy. This led to poor decisions in those areas.
Algorithms perform best when they are trained on large volumes of data, with a representative variety of scenarios. The low-accuracy categories in this project had neither. The project team experimented by bringing in humans to handle those categories where the model's confidence was low.
At low manual effort, the overall solution accuracy shot up. This delivered an improvement of 2 percentage points in the clients Net Promoter Score.
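Neither Gramener nor its client has published the underlying code, but the confidence-based hand-off described above can be sketched in a few lines of Python. The threshold value and the model's predict interface are illustrative assumptions, not the project's actual implementation:

```python
CONFIDENCE_THRESHOLD = 0.75   # illustrative cut-off; tuned per category in practice

def route_comment(comment, model, review_queue):
    """Classify a voice-of-customer comment, handing it to a human
    reviewer whenever the model's confidence falls below the threshold."""
    category, confidence = model.predict(comment)   # assumed (label, score) API
    if confidence >= CONFIDENCE_THRESHOLD:
        return category                  # high confidence: trust the algorithm
    review_queue.append(comment)         # low confidence: route to a human annotator
    return None
```

In this pattern only the low-confidence minority of comments reaches a person, which is why the manual effort stays small while the blended accuracy rises.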
Algorithms detect online fraud by studying factors such as consumer behavior and historical shopping patterns. They learn from past examples to identify whats normal and whats not. With the onset of the pandemic, these algorithms started failing.
In today's new normal, consumers have gone remote. They spend more time online, and the spending patterns have shifted in unexpected ways. Suddenly, everything these algorithms have learned has become irrelevant. Covid-19 threw them a curveball.
Algorithms work well only in scenarios that they are trained for. In completely new situations, humans must step in. Organizations that have kept humans in the loop can quickly transition control to them in such situations. Humans can keep systems running smoothly by ensuring that they are resilient in the face of change.
Meanwhile algorithms can go back to the classroom to unlearn, relearn, and come back a little smarter. For example, a recent NIST study found that the use of face masks is breaking facial recognition algorithms, such as the ones used in border crossings. Most systems had error rates up to 50%, calling for manual intervention. The algorithms are being retrained to use areas visible around the eyes.
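A common, if simplified, way to decide when "the world has changed" enough to hand control back to humans is to monitor how far live data has drifted from the training distribution. The sketch below computes a population stability index over any model input or score; the 0.25 threshold is a widely used rule of thumb offered here as an assumption, not a standard taken from the systems described above:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the distribution of a feature or model score between the
    training period ('expected') and live traffic ('actual')."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_counts, _ = np.histogram(expected, edges)
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)
    e_pct = e_counts / e_counts.sum() + 1e-6   # small constant avoids log(0)
    a_pct = a_counts / a_counts.sum() + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Rule of thumb: a PSI above roughly 0.25 suggests the data has shifted enough
# to route decisions to humans and schedule retraining.
```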
On March 18, 2018, Elaine Herzberg was walking her bike across Mill Avenue. It was around 10 p.m. in Tempe, Arizona. She crossed several lanes of traffic before being struck by a Volvo.
But this wasn't any Volvo. It was a self-driving car, being tested by Uber.
The car was trained to detect jaywalkers at crosswalks. But Herzberg had been crossing in the middle of the road, so the AI failed to detect her.
This tragic incident was the first pedestrian death caused by a self-driving car. It raised several questions. When AI makes a mistake, who should be held responsible? Is it the carmaker (Volvo), the AI system maker (Uber), the car driver (Rafaela Vasquez), or the pedestrian (Elaine Herzberg)?
Occasionally, high-precision algorithms will falter, even in familiar scenarios. Rather than roll back the advances made in automation, we must make efforts to improve accountability. Last month, the European Commission published recommendations from an independent expert report for self-driving cars.
The experts call for identifying ownership of all parties and for devising ways to attribute responsibility across scenarios. The report recommends an improvement of human-machine interactions so that AI and drivers can communicate better and understand each other's limitations.
Will Siri, Alexa or Google Assistant discriminate against you? Earlier this year, researchers at Stanford University attempted to answer this question by studying the top voice recognition systems in the world. They found that these popular devices had more trouble understanding Black people than white people. They misidentified 35 percent of words spoken by Black users, but only 19 percent for white users.
Bias is a thorny issue in AI. But we must remember that algorithms are only as good as the data used to train them. Our world is anything but perfect. When algorithms learn from our data, they mimic these imperfections and magnify the bias. There is ongoing research in AI to improve fairness and ethics. However, no amount of model engineering will make algorithms perfect.
In the real world, if we are serious about fighting bias, we use our judgement. We make rules more inclusive and adopt measures to amplify suppressed voices. The same approach is needed in AI solutions. Design human intervention to check and address potential scenarios of discrimination. Use human judgment to fight a machine's learned bias.
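The Stanford finding above boils down to comparing word error rates across demographic groups, and that audit is straightforward to fold into any speech pipeline. A hedged sketch using the open-source jiwer library, with illustrative field names:

```python
from collections import defaultdict
import jiwer  # pip install jiwer

def wer_by_group(samples):
    """samples: iterable of dicts with 'group', 'reference' (human transcript)
    and 'hypothesis' (ASR output) keys; the field names are illustrative."""
    refs, hyps = defaultdict(list), defaultdict(list)
    for s in samples:
        refs[s["group"]].append(s["reference"])
        hyps[s["group"]].append(s["hypothesis"])
    return {g: jiwer.wer(refs[g], hyps[g]) for g in refs}

# A large gap between groups (e.g. 0.35 vs 0.19, as in the Stanford study)
# is the cue to add human review and rebalance the training data.
```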
Rethink Design
We often measure progress in AI by comparing AI's abilities to those of humans.
While that's a useful benchmarking exercise, it's a mistake to use this approach when designing AI solutions. Organizations often pit AI against humans. This doesn't do justice to either one. It leads to suboptimal performance, brittle solutions, untrustworthy applications and unfair decisions.
Augmented intelligence combines the strengths of humans with those of AI. It combines the speed, logic and consistency of machines with the common sense, emotional intelligence and empathy of humans.
To achieve augmented intelligence, you need humans in the loop. This must be planned upfront. Merely adding new processes or responsibilities to an existing technology solution leads to poor results. You must (re)design the solution workflow and decide which areas are best handled by algorithms. You should define whether humans must make decisions or review decisions made by a machine.
Building augmented intelligence is an ongoing journey. With evolution in machine capabilities and changes in users' comfort and trust levels, you must continuously improve the design.
This will make AI-driven systems that do invasive medical procedures or that make high-stakes financial decisions more compassionate and trustworthy for your users.
Disclosure: I co-founded the company, Gramener, that's mentioned in one of the examples in this article.
Go Beyond Artificial Intelligence: Why Your Business Needs Augmented Intelligence - Forbes
Robotics and artificial intelligence (A.I) were once considered fantasies of the future.
Today, both technologies are being incorporated into many elements of everyday life, with applications popping up in everything from healthcare and education to communication and transportation.
In July, R&D Magazine took a deeper dive into this breakthrough area of research.
We kicked off our coverage by speaking to several experts about where the field of robotics is going in "Robotics Industry Has Big Future as Applications Grow."
Susan Teele of the Advanced Robotics for Manufacturing Institute and Bob Doyle of the Robotics Industries Association discussed the impact that robots will have on the workforce and what technological advancements are needed for them to truly flourish.
We expanded on that idea in "Creating Robots That Are More Like Humans," which features a research group at Northeastern University focused on creating software that makes robots more autonomous, so that eventually they are able to perform tasks on their own with little human supervision or intervention.
The group's leader, Taskin Padir, told R&D Magazine how reliable robots with human-like dexterity and improved autonomy could take over jobs that are dangerous or difficult for humans to perform.
Robots can also be used to reduce danger. Our article, "Creator of Suicidal Robot Explains How Robot Security Could Prevent 'The Next Sandy Hook,'" focused on the robotic security company Knightscope, which made headlines recently for a humorous mishap involving one of its robots falling into a fountain.
However, the real story is the true mission behind Knightscope.
The company was created by a former police officer who was deeply impacted by the Sandy Hook Elementary School shooting. Knightscope's robots now serve as intelligence-gathering tools, which law enforcement officials can utilize during, as well as after, an emergency to better understand what is going on, de-escalate a dangerous situation, and potentially help capture or gather evidence against the perpetrator of the crime.
We wrapped up our robotics coverage with "Robotic Teachers Can Adjust Style Based on Student Success," which focuses on the development of socially assistive robotics, a new field of robotics that focuses on assisting users through social rather than physical interaction. A research group at Yale University is designing these robots to work with children, including those with challenges such as autism or hearing impairment, or those whose first language is one other than English.
A.I. Advancements
Our A.I. coverage kicked off with "Why Canada is Becoming a Hub for A.I. Research," which highlighted the significant commitment to A.I. research and development our neighbor to the north is making.
The Vector Institute, which received an estimated $150 million investment from both the Canadian government and Canadian businesses, is one example of that commitment.
The independent not-for-profit institution based in Ontario seeks to build and sustain A.I.-based innovation, growth and productivity in Canada by focusing on the transformative potential of deep learning and machine learning.
We also looked at the impact of A.I. in the healthcare space. One article, "Startup Uses A.I. to Streamline Drug Discovery Process," features an interview with the CEO of Exscientia, which is using A.I.-fueled programs in conjunction with experienced drug developers to implement a rapid design-make-test cycle. This essentially ascertains how certain molecules will behave and then predicts how likely they are to become useful drugs.
Another startup, Potbotics, is using A.I. to comb through the different strains of medical marijuana to find the right one for a specific ailment with its app, PotBot. Once a medical cannabis recommendation is calculated, the app helps patients find their recommended cannabis at a nearby dispensary or set up an appointment with a licensed medical cannabis clinic. We featured the company in our article, "PotBot Uses A.I. to Match Medical Marijuana Users to Best Strain."
The use of A.I. to create autonomous vehicles is another area that is rapidly growing. In our article "Algorithm Improves Energy Efficiency of Autonomous Underwater Vehicles," we focused on researchers from Oregon State University, who developed a new algorithm to direct autonomous underwater vehicles to ride the ocean currents when traveling from point to point.
Improving the A.I. of the vehicles extends their battery life by decreasing the amount of battery power wasted through inefficient trajectory cuts.
Also, we took a deep dive into Toyota's plan for A.I., featuring an exclusive interview with Jim Adler, the managing director of the Japanese car company's new venture fund, Toyota A.I. Ventures, in our article, "How Toyota's New Venture Fund Will Tackle A.I. Investments."
The venture fund will use an initial fund of $100 million to collaborate with entrepreneurs from all over the world, in an effort to improve the quality of human life through artificial intelligence.
Toyota A.I. Ventures will work with startups at an early stage and offer a founder-friendly environment that won't impact their ability to work with other investors. They will also offer assistance with technology and product expertise to validate that the product being built is for the right market, and give these entrepreneurs access to Toyota's global network of affiliates and partners to ensure a successful market launch.
Next Month's Special Focus
In August, R&D Magazine will continue its special focus series, this time highlighting the many applications of virtual reality. The technology has expanded significantly outside of the video gaming world, and is now being used across multiple disciplines.
It shouldn’t come as much of a surprise, but Facebook knows a lot about you.
And while the information it collects about you isn’t exactly in the safest of hands, it could give mental health care professionals a huge leg up in predicting your future mental well-being — if you’re willing to hand over your login information.
In research published this week in Proceedings of the National Academy of Sciences, scientists at the University of Pennsylvania described how they were able to determine whether a particular Facebook user is likely to become depressed in the future, simply by analyzing their status updates.
Wired reports that the researchers used machine learning algorithms to analyze almost half a million Facebook posts — spanning a period of seven years — by 684 willing patients at a Philadelphia emergency ward.
“We’re increasingly understanding that what people do online is a form of behavior we can read with machine learning algorithms, the same way we can read any other kind of data in the world,” UPenn psychologist Johannes Eichstaedt told the magazine.
The algorithms looked for markers of depression in the patients’ posts, and found that depressed individuals used more first-person language — a finding in line with many previous studies. The algorithm got so good at catching those markers that it could predict if a Facebook user was depressed up to three months prior to a formal diagnosis by a health care professional.
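The published model combined many linguistic signals, but the headline marker, elevated use of first-person singular pronouns, is simple enough to compute directly. A minimal, illustrative sketch, not the UPenn pipeline:

```python
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(post: str) -> float:
    """Fraction of tokens that are first-person singular pronouns, one of the
    depression markers reported in the study. Contractions such as "I'm"
    split into "i" + "m", so the pronoun is still counted."""
    tokens = re.findall(r"[a-z]+", post.lower())
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens) if tokens else 0.0
```

Averaging this rate over a user's recent posts gives one crude feature; the actual study fed far richer language representations into its models.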
For now, take these results with a grain of salt — this won’t ever be a substitute for human psychologists, because there are far too many variables. But social media information — along with heart rates or sleep data collected by fitness trackers — could be a powerful tool to catch mental health problems early on.
That is, if we’re willing to share that kind of information with them in the first place.
Read More: Your Facebook Posts Can Reveal If You’re Depressed [Wired]
More on mental health and social media: Instagram is Trying to Make Users Feel Better Without Scaring Them Off

AI Can Tell if You’re Depressed by Reading Your Facebook Posts
U.S.-headquartered Google on Thursday announced the setting up of a research lab in Bengaluru that will work on advancing artificial intelligence-related research with an aim to solve problems in sectors such as healthcare, agriculture and education.
"...we announced Google Research India, a new AI research team in Bangalore that will focus on advancing computer science and applying AI research to solve big problems in healthcare, agriculture, education and more," Sundar Pichai, CEO, Google, tweeted.
Caesar Sengupta, vice-president, Next Billion Users Initiative and Payments at Google, added that this team would focus on advancing fundamental computer science and AI research by building a strong team and partnering with the research community across the country. It will also be applying this research to tackle problems in fields such as healthcare, agriculture, and education.
The new lab will be a part of and support Google's global network of researchers. "We're also exploring the potential for partnering with India's scientific research community and academic institutions to help train top talent and support collaborative programmes, tools and resources," Jay Yagnik, vice-president and Google Fellow, Google AI, said.
Google Pay
The technology giant announced a host of additions to its UPI-powered digital payments app Google Pay, which the company said had grown more than three times in the last 12 months to 67 million monthly active users, driving transactions worth over $110 billion on an annualised basis.
To start with, the company has introduced the Spot Platform within Google Pay, which will enable merchants to create new experiences that bridge the offline and online worlds. Ambarish Kenghe, director, product management, Google Pay, said: "A Spot is a digital front for a business that is created, branded and hosted by them, and powered by Google Pay. Users can discover a Spot online or at a physical location, and transact with the merchant easily and securely within the Google Pay app."
Users will now also be able to search for entry-level jobs that could not be easily discovered online via the application.
Google will also roll out tokenized cards, which will enable users to make payments using debit and credit cards without using the actual card number.
Listen to Andreas Koukorinis, founder of UK sports betting company Stratagem, and you'd be forgiven for thinking that soccer games are some of the most predictable events on Earth. "They're short duration, repeatable, with fixed rules," Koukorinis tells The Verge. "So if you observe 100,000 games, there are patterns there you can take out."
The mission of Koukorinis' company is simple: find these patterns and make money off them. Stratagem does this either by selling the data it collects to professional gamblers and bookmakers, or by keeping it and making its own wagers. To fund these wagers, the firm is raising money for a £25 million ($32 million) sports betting fund that it's positioning as an investment alternative to traditional hedge funds. In other words, Stratagem hopes rich people will give Stratagem their money. The company will gamble with it using its proprietary data, and, if all goes to plan, everyone ends up just that little bit richer.
It's a familiar story, but Stratagem is adding a little something extra to sweeten the pot: artificial intelligence.
At the moment, the company uses teams of human analysts spread out around the globe to report back on the various sporting leagues it bets on. This information is combined with detailed data about the odds available from various bookmakers to give Stratagem an edge over the average punter. But, in the future, it wants computers to do the analysis for it. It already uses machine learning to analyze some of its data (working out the best time to place a bet, for example), but it's also developing AI tools that can analyze sporting events in real time, drawing out data that will help predict which team will win.
Stratagem is using deep neural networks to achieve this task, the same technology that's enchanted Silicon Valley's biggest firms. It's a good fit, since this is a tool that's well suited to analyzing vast pots of data. As Koukorinis points out, when analyzing sports, there's a hell of a lot of data to learn from. The company's software is currently absorbing thousands of hours of sporting fixtures to teach it patterns of failure and success, and the end goal is to create an AI that can watch a half-dozen different sporting events simultaneously on live TV, extracting insights as it does.
Stratagems AI identifies players to make a 2D map of the game
At the moment, though, Stratagem is starting small. Its focusing on just a few sports (soccer, basketball, and tennis) and a few metrics (like goal chances in soccer). At the companys London offices, home to around 30 employees including ex-bankers and programmers, were shown the fledgling neural nets for soccer games in action. On-screen, the output is similar to what you might see from the live feed of a self-driving car. But instead of the computer highlighting stop signs and pedestrians as it scans the road ahead, its drawing a box around Zlatan Ibrahimovi as he charges at the goal, dragging defenders in his wake.
Stratagem's AI makes its calculations watching a standard broadcast feed of the match. (Pro: it's readily accessible. Con: it has to learn not to analyze the replays.) It tracks the ball and the players, identifying which team they're on based on the color of their kits. The lines of the pitch are also highlighted, and all this data is transformed into a 2D map of the whole game. From this viewpoint, the software studies matches like an armchair general: it identifies what it thinks are goal-scoring chances, or the moments where the configuration of players looks right for someone to take a shot and score.
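For readers curious what that projection step might look like in code, here is a minimal sketch, assuming a detector has already produced player bounding boxes; the OpenCV calls are standard, but the kit-color heuristic, pitch reference points and thresholds are illustrative assumptions rather than Stratagem's actual system.

```python
import numpy as np
import cv2  # OpenCV

# Four reference points matched between the broadcast frame (pixels) and a
# top-down pitch model (metres). In practice these come from the detected
# pitch lines; the numbers here are placeholders.
frame_pts = np.float32([[120, 80], [1180, 90], [1250, 650], [60, 640]])
pitch_pts = np.float32([[0, 0], [105, 0], [105, 68], [0, 68]])
H = cv2.getPerspectiveTransform(frame_pts, pitch_pts)

def team_from_kit(frame, box):
    """Crude team assignment from the average hue inside a player's box."""
    x, y, w, h = box
    hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    return "team_a" if hsv[..., 0].mean() < 90 else "team_b"

def to_pitch_coords(box):
    """Project the bottom-centre of a bounding box onto the 2D pitch map."""
    x, y, w, h = box
    feet = np.float32([[[x + w / 2, y + h]]])        # player's feet, in pixels
    return cv2.perspectiveTransform(feet, H)[0, 0]   # (x_metres, y_metres)
```

Tracking those positions over time, and learning to ignore replays and cutaways, is where the hard work described in the article comes in.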
Football is such a low-scoring game that you need to focus on these sorts of metrics to make predictions, says Koukorinis. "If there's a shot on target from 30 yards with 11 people in front of the striker and that ends in a goal, yes, it looks spectacular on TV, but it's not exciting for us. Because if you repeat it 100 times the outcomes won't be the same. But if you have Lionel Messi running down the pitch and he's one-on-one with the goalie, the conversion rate on that is 80 percent. We look at what created that situation. We try to take the randomness out, and look at how good the teams are at what they're trying to do, which is generate goal-scoring opportunities."
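The reasoning Koukorinis sketches is essentially an expected-goals calculation: weight each chance by how often that type of chance is converted in the long run, rather than by whether it happened to go in on the day. A toy illustration, with entirely made-up conversion rates:

```python
# Hypothetical chances from one match, tagged with assumed long-run
# conversion rates (illustrative numbers, not Stratagem's data).
chances = [
    ("30-yard shot through a crowd of defenders", 0.03),
    ("one-on-one with the goalkeeper", 0.80),
    ("header from a corner", 0.10),
]

expected_goals = sum(rate for _, rate in chances)
print(f"Expected goals from these chances: {expected_goals:.2f}")  # 0.93
```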
Whether or not counting goal-scoring opportunities is the best way to rank teams is difficult to say. Stratagem says it's a metric that's popular with professional gamblers, but they and the company weigh it against a lot of other factors before deciding how to bet. Stratagem also notes that the opportunities identified by its AI don't consistently line up with those spotted by humans. Right now, the computer gets it right about 50 percent of the time. Despite this, the company says its current betting models (which it develops for soccer, but also basketball and tennis) are right more than enough of the time for it to make a steady return, though it won't share precise figures.
A team of 65 analysts collects data around the world
At the moment, Stratagem generates most of its data about goal-scoring opportunities and other metrics the old-fashioned way: using a team of 65 human analysts who write detailed match reports. The company's AI would automate some of this process and speed it up significantly. (Each match report takes about three hours to write.) Some forms of data-gathering would still rely on humans, however.
A key task for the company's agents is finding out a team's starting lineup before it's formally announced. (This is a major driver of pre-game betting odds, says Koukorinis, and knowing it in advance helps you beat the market.) Acquiring this sort of information isn't easy. It means finding sources at a club, building up a relationship, and knowing the right people to call on match day. Chatbots just aren't up to the job yet.
Machine vision, though, is really just one element of Stratagem's AI business plan. The company already applies machine learning to more mundane facets of betting, like working out the best time to place a bet in any particular market. In this regard, what it is doing is no different from many other hedge funds, which for decades have been using machine learning to come up with new ways to trade. Most funds blend human analysis with computer expertise, but at least one is run entirely on decisions generated by artificial intelligence.
However, simply adding more computers to the mix isn't always a recipe for success. There's data showing that if you want to make the most of your money, it's better to simply invest in the top-performing stocks of the S&P 500 than to sign up for an AI hedge fund. That's not the best sign that Stratagem's sports-betting fund will offer good returns, especially when such funds are already controversial.
In 2012, a sports-betting fund set up by UK firm Centaur Holdings collapsed just two years after it launched, losing $2.5 million after promising investors returns of 15 to 20 percent. To critics, operations like this are just borrowing the trappings of traditional funds to make gambling look more like investing.
"I don't doubt it's great fun... but don't qualify it with the term investment."
David Stevenson, director of finance research company AltFi, told The Verge that there's nothing essentially wrong with these funds, but they need to be thought of as their own category. "I don't particularly doubt it's great fun [to invest in one] if you like sports and a bit of betting," said Stevenson. "But don't qualify it with the term investment, because investment, by its nature, has to be something you can predict over the long run."
Stevenson also notes that the AI hedge funds that are successful, the ones that torture the math within an inch of its life to eke out small but predictable profits, tend not to seek outside investment at all. They prefer keeping the money to themselves. "I treat most things that combine the acronym AI and the word investing with an enormous dessert spoon of salt," he said.
Whether or not Stratagem's AI can deliver insights that make sporting events as predictable as the tides remains to be seen, but the company's investment in artificial intelligence does have other uses. For starters, it can attract investors and customers looking for an edge in the world of gambling. It can also automate work that's currently done by the company's human employees and make it cheaper. As with other businesses that are using AI, it's these smaller gains that might prove to be the most reliable. After all, small, reliable gains make for a good investment.
Link:
This startup is building AI to bet on soccer games - The Verge
Understanding how our universe came to be what it is today, and what its final destiny will be, is one of the biggest challenges in science. The awe-inspiring display of countless stars on a clear night gives us some idea of the magnitude of the problem, and yet that is only part of the story. The deeper riddle lies in what we cannot see, at least not directly: dark matter and dark energy. With dark matter pulling the universe together and dark energy causing it to expand faster, cosmologists need to know exactly how much of those two components is out there in order to refine their models.
At ETH Zurich, scientists from the Department of Physics and the Department of Computer Science have now joined forces to improve on standard methods for estimating the dark matter content of the universe through artificial intelligence. They used cutting-edge machine learning algorithms for cosmological data analysis that have a lot in common with those used for facial recognition by Facebook and other social media. Their results have recently been published in the scientific journal Physical Review D.
While there are no faces to be recognized in pictures taken of the night sky, cosmologists still look for something rather similar, as Tomasz Kacprzak, a researcher in the group of Alexandre Refregier at the Institute of Particle Physics and Astrophysics, explains: "Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy." As dark matter cannot be seen directly in telescope images, physicists rely on the fact that all matter - including the dark variety - slightly bends the path of light rays arriving at the Earth from distant galaxies. This effect, known as "weak gravitational lensing", distorts the images of those galaxies very subtly, much like far-away objects appear blurred on a hot day as light passes through layers of air at different temperatures.
Cosmologists can use that distortion to work backwards and create mass maps of the sky showing where dark matter is located. Next, they compare those dark matter maps to theoretical predictions in order to find which cosmological model most closely matches the data. Traditionally, this is done using human-designed statistics such as so-called correlation functions that describe how different parts of the maps are related to each other. Such statistics, however, are limited as to how well they can find complex patterns in the matter maps.
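To make the contrast concrete, here is a rough sketch of the kind of human-designed statistic the article refers to: a radially binned power spectrum of a mass map, which carries much the same information as a two-point correlation function. The map, its size and the binning are arbitrary placeholders, not the actual KiDS analysis.

```python
import numpy as np

def radial_power_spectrum(mass_map, n_bins=20):
    """Average |FFT|^2 of a square map in radial wavenumber bins."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(mass_map))) ** 2
    n = mass_map.shape[0]
    ky, kx = np.indices(power.shape) - n // 2
    k = np.hypot(kx, ky).ravel()
    bins = np.linspace(0, k.max(), n_bins + 1)
    which = np.clip(np.digitize(k, bins), 1, n_bins)
    return np.array([power.ravel()[which == i].mean() for i in range(1, n_bins + 1)])

# Toy example: a random Gaussian field standing in for a real lensing mass map.
spectrum = radial_power_spectrum(np.random.randn(128, 128))
```

A deep network skips this hand-crafted summary and learns its own features directly from the map, which is where the extra information comes from.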
"In our recent work, we have used a completely new methodology", says Alexandre Refregier. "Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job." This is where Aurelien Lucchi and his colleagues from the Data Analytics Lab at the Department of Computer Science come in. Together with Janis Fluri, a PhD student in Refregier's group and lead author of the study, they used machine learning algorithms called deep artificial neural networks and taught them to extract the largest possible amount of information from the dark matter maps.
In a first step, the scientists trained the neural networks by feeding them computer-generated data that simulates the universe. That way, they knew what the correct answer for a given cosmological parameter - for instance, the ratio between the total amount of dark matter and dark energy - should be for each simulated dark matter map. By repeatedly analysing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths.
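A minimal sketch of that training setup, written here in PyTorch (the framework, the tiny architecture and the random stand-in data are all assumptions for illustration, not the group's actual pipeline): a small convolutional network regresses one cosmological parameter from each simulated map and is penalised whenever it misses the known answer.

```python
import torch
from torch import nn

class MapRegressor(nn.Module):
    """Tiny CNN mapping a 1-channel mass map to a single parameter estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MapRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in "simulations": random maps paired with known parameter labels.
maps = torch.randn(64, 1, 64, 64)
labels = torch.rand(64, 1)

for step in range(100):          # training loop, truncated for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(maps), labels)
    loss.backward()
    optimizer.step()
```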
The results of that training were encouraging: the neural networks came up with values that were 30% more accurate than those obtained by traditional methods based on human-made statistical analysis. For cosmologists, that is a huge improvement as reaching the same accuracy by increasing the number of telescope images would require twice as much observation time - which is expensive.
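The rough arithmetic behind that comparison, under the standard (and here assumed) rule that statistical error shrinks with the square root of observing time:

```python
improvement = 0.30                        # error reduced by 30%
time_factor = 1 / (1 - improvement) ** 2  # error ~ 1/sqrt(time)
print(round(time_factor, 2))              # ~2.04, i.e. roughly double the observing time
```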
Finally, the scientists used their fully trained neural network to analyse actual dark matter maps from the KiDS-450 dataset. "This is the first time such machine learning tools have been used in this context," says Fluri, "and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications."
As a next step, he and his colleagues are planning to apply their method to bigger image sets such as the Dark Energy Survey. Also, more cosmological parameters and refinements such as details about the nature of dark energy will be fed to the neural networks.
Reference: Fluri J, Kacprzak T, Lucchi A, Refregier A, Amara A, Hofmann T, Schneider A: Cosmological constraints with deep learning from KiDS-450 weak lensing maps. Physical Review D. 100: 063514, doi: 10.1103/PhysRevD.100.063514
This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.
See the original post:
How Much Dark Matter in the Universe? AI May Have the Answer - Technology Networks
Set a machine to program a machine
By Matt Reynolds
OUT of the way, human, I've got this covered. A machine learning system has gained the ability to write its own code.
Created by researchers at Microsoft and the University of Cambridge, the system, called DeepCoder, solved basic challenges of the kind set by programming competitions. This kind of approach could make it much easier for people to build simple programs without knowing how to write code.
"All of a sudden people could be so much more productive," says Armando Solar-Lezama at the Massachusetts Institute of Technology, who was not involved in the work. "They could build systems that it [would be] impossible to build before."
Ultimately, the approach could allow non-coders to simply describe an idea for a program and let the system build it, says Marc Brockschmidt, one of DeepCoder's creators at Microsoft Research in Cambridge, UK.
DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software, just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.
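As a toy illustration of the idea (not DeepCoder itself), a synthesizer can simply enumerate short pipelines of known primitives and keep the first one that reproduces every given input-output example; DeepCoder's contribution was training a neural network to predict which primitives are likely to be needed, so the search tries the most promising combinations first. The miniature DSL below is invented for the example.

```python
from itertools import product

# A tiny DSL of list-to-list primitives (DeepCoder's real DSL is richer).
PRIMITIVES = {
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "drop_negatives": lambda xs: [x for x in xs if x >= 0],
    "double": lambda xs: [2 * x for x in xs],
}

def synthesize(examples, max_depth=3):
    """Return the first pipeline of primitives consistent with all examples."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(xs, names=names):
                for name in names:
                    xs = PRIMITIVES[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return names
    return None

# Spec: keep the non-negative values, in sorted order.
print(synthesize([([3, -1, 2], [2, 3]), ([0, -5, 9, 4], [0, 4, 9])]))
# -> ('sort', 'drop_negatives')
```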
It could allow non-coders to simply describe an idea for a program and let the system build it
One advantage of letting an AI loose in this way is that it can search more thoroughly and widely than a human coder, so it could piece together source code in a way humans may not have thought of. What's more, DeepCoder uses machine learning to scour databases of source code and sort the fragments according to its view of their probable usefulness.
All this makes the system much faster than its predecessors. DeepCoder created working programs in fractions of a second, whereas older systems take minutes to trial many different combinations of lines of code before piecing together something that can do the job. And because DeepCoder learns which combinations of source code work and which ones don't as it goes along, it improves every time it tries a new problem.
The technology could have many applications. In 2015, researchers at MIT created a program that automatically fixed software bugs by replacing faulty lines of code with working lines from other programs. Brockschmidt says that future versions could make it very easy to build routine programs that scrape information from websites, or automatically categorise Facebook photos, for example, without human coders having to lift a finger.
"The potential for automation that this kind of technology offers could really signify an enormous [reduction] in the amount of effort it takes to develop code," says Solar-Lezama.
But he doesn't think these systems will put programmers out of a job. With program synthesis automating some of the most tedious parts of programming, he says, coders will be able to devote their time to more sophisticated work.
At the moment, DeepCoder is only capable of solving programming challenges that involve around five lines of code. But in the right coding language, a few lines are all that's needed for fairly complicated programs.
"Generating a really big piece of code in one shot is hard, and potentially unrealistic," says Solar-Lezama. "But really big pieces of code are built by putting together lots of little pieces of code."
This article appeared in print under the headline "Computers are learning to code for themselves"
See the rest here:
AI learns to write its own code by stealing from other programs - New Scientist
Commenting on what name historians might settle on to label our interesting times, Hilbert notes that this most recent period in the ancient and incessant logic of societal transformation has taken a series of titles between the 1970s and the year 2000.
In chronological order, he recounts, these have included "post-industrial society," "information economy," "information society," "fifth Kondratieff," "information technology revolution," "digital age" and "information age."
"While only time will provide the required empirical evidence to set any categorization of this current period on a solid footing, recent developments have suggested that we are living through different long waves within the continuously evolving information age," writes Hilbert, who holds separate doctorates in communication and in economics/social sciences.
Considering the outlook for AI, he sees some already achieved advancements, such as cancer diagnostics and speech recognition, as dazzling.
And in case it has been stealthy enough in its march forward to escape the attention it ought to be getting, note well that AI has become an indispensable pillar of the most crucial building blocks of society.
For every exciting opportunity promised by artificial intelligence, there's a potential downside that is its bleak mirror image. We hope that AI will allow us to make smarter decisions, but what if it ends up reinforcing the prejudices of society? We dream that technology might free us from work, but what if only the rich benefit, while the poor are dispossessed?
It's issues like these that keep artificial intelligence researchers up at night, and they're also the reason Google is launching an AI initiative today to tackle some of these same problems. The new project is named PAIR (it stands for People + AI Research) and its aim is to study and redesign the ways people interact with AI systems, and to try to ensure that the technology benefits and empowers everyone.
Google wants to help everyone from coders to users
It's a broad remit, and an ambitious one. Google says PAIR will look at a number of different issues affecting everyone in the AI supply chain, from the researchers who code algorithms to the professionals like doctors and farmers who are (or soon will be) using specialized AI tools. The tech giant says it wants to make AI user-friendly, and that means not only making the technology easy to understand (getting AI to explain itself is a known and challenging problem) but also ensuring that it treats its users equally.
It's been noted time and time again that the prejudices and inequalities of society often become hard-coded in AI. This might mean facial recognition software that doesn't recognize dark-skinned users, or a language processing program which assumes that doctors are always male and nurses are always female.
Usually this sort of issue is caused by the data that artificial intelligence is trained on. Either the information it has is incomplete, or it's prejudiced in some way. That's why PAIR's first real news is the announcement of two new open-source tools, called Facets Overview and Facets Dive, which make it easier for programmers to examine datasets.
In a screenshot shared by Google, Facets Dive is being used to test a facial recognition system. The program sorts the testers by their country of origin and compares errors with successful identifications. This allows a coder to quickly see where their dataset is falling short and make the relevant adjustments.
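Facets Dive itself is an interactive visualization, but the underlying slice-and-compare check is easy to approximate in a few lines of pandas; the column names and figures below are invented for illustration and have nothing to do with Google's actual data or API.

```python
import pandas as pd

# Hypothetical test results: one row per face the system was asked to identify.
results = pd.DataFrame({
    "country_of_origin": ["A", "A", "B", "B", "B", "C"],
    "correctly_identified": [True, False, True, True, False, True],
})

# Error rate per slice: large gaps between groups hint at gaps in the training data.
error_rates = 1 - results.groupby("country_of_origin")["correctly_identified"].mean()
print(error_rates)
```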
Currently, PAIR has 12 full-time staff. It's a bit of a small figure considering the scale of the problem, but Google says PAIR is really a company-wide initiative, one that will draw on expertise from the firm's various departments.
More open-source tools like Facets will be released in the future, and Google will also be setting up new grants and residencies to sponsor related research. It's not the only big organization taking these issues seriously (see also: the Ethics and Governance of Artificial Intelligence Fund and the Elon Musk-funded OpenAI), but it's good to see Google join the fight for a fairer future.
View original post here:
Google wants to make sure AI advances don't leave anyone behind - The Verge
How To Prove ROI From AI
Your use of AI is probably succeeding in countless ways; however, AI also has the potential to fail you, and in a big way: by sealing the fate of your business and career. In fact, you might not even be able to prove that AI is driving you or your stakeholders toward profit at all. Failures in the world of AI today can be small or enormous. Take, for example, IBM's Watson for Oncology. The initiative had to be cancelled after $62 million in spending led to unsafe treatment recommendations.
According to VentureBeat, an estimated 87% of data science projects never make it to the production stage, and TechRepublic reports that 56% of global CEOs expect it to take three to five years to see any real ROI on their AI investment. Long story short: you are not alone in your quest for returns. Even so, you want to be a leader, not a laggard, and you need to be the one who can prove that your use of AI is contributing to ROI through expansion and growth.
Start Back at the Beginning
AI has taken over nearly every facet of business in the 21st century. Every major player in every industry has AI at the root of nearly every project. In retail, Domino's Pizza has used AI to predict delivery times more accurately, improving prediction accuracy from 75% to 95%.
In mining, companies in Australia are using autonomous trucks and drilling technology to cut mining costs, improve worker safety and boost productivity by 20%. It is also predicted that 77% of jobs in the country's mining sector will be altered by technological innovations, increasing productivity by up to 23%.
Then in banking, Barclays is using AI to detect and prevent fraud. It is also using similar technology to improve customer experience through chatbots, leveraging the vast amount of data it has accumulated. However, Barclays still faces challenges: it struggles with the implementation of faster payment options for its customers.
Challenges That You'll Face
You will have to conquer several hills on your journey to returns on AI investments. One challenge has to do with the American AI Initiative. Even though this policy is a good start, we are still behind some of our global competitors when it comes to direct government funding of AI. Therefore, if you don't have the capital to internally implement, monitor and optimize your AI, you will have to seek out funding.
You will also need a well-planned and well-executed initiative for retraining, reskilling and repurposing your employees. A recent study by McKinsey predicts that in the U.S., up to 33.3% of the 2030 workforce may need to learn new skills and find new work. By now, you have already spent countless hours and substantial revenue on recruiting, hiring, training and building your team and company culture. Don't let that money go to waste by allowing your workforce to become irrelevant. Invest more in your people now to save your business in the long run.
In addition, your access to data, and your use of it, is critical. The AI you implement is only as good as the fuel you give it, and that fuel is data. The Pistoia Alliance released a survey in 2019 showing that 52% of respondents cited insufficient access to data as one of the biggest barriers to the adoption of AI.
How can you mirror the success of the previously mentioned players? In order to replicate their triumphs, you must start by asking yourself variations of the following questions:
What are the specific business goals or challenges that you're looking to address with AI solutions?
Buying AI is not buying a one-size-fits-all, off-the-shelf solution for your business. Business leaders must treat AI like any other technology investment: it should have a specific purpose to solve a specific goal. It must be tracked with benchmarks and KPIs. You must then hold yourself and your teams accountable for those numbers.
Is this the right technology to solve your business problem?
It's important that an organization approaches AI from the starting point of "What problem do we need to solve?" rather than "Let's do something with AI." And it should be the right problem, one for which AI can have a substantial impact. Many companies have not answered basic questions about which business problems can be addressed with AI, which leads to unrealistic expectations.
Do you have internal expertise to maintain AI integration, and a team committed to training and improving the technology across your organization?
How are companies creating a people-focused practice around the operationalization of AI on a company-wide scale? Some have dedicated AI teams. Some have virtual teams where, two days out of the week, data scientists are embedded with the operations team (this is analogous to DBAs who train non-technical colleagues to understand the role of databases within company operations). Breaking down organizational silos, and allowing various groups to interact and collaborate, is a critical enabler of an AI project.
How will you measure the success of an AI deployment?
You must create your own AI KPIs and maintain a working knowledge of how you will measure them prior to deployment. There should be no guesswork or maybes involved in the process. If you want to prove returns, you need concrete benchmarks to get you there.
Now what? You've worked diligently and answered all of the questions. You're ready for implementation, but how do you execute? There are multiple factors to keep in mind in order to ensure your deployment is a success.
Growth and Expansion Are Greater Than Savings
While AI has the potential to cut expenses, the primary focus should be on growth and expansion in order to maximize outcomes. This includes innovation in products and services, efficiency gains in productivity and winning market share. AI delivers the most when it is adopted at every level of the business, from value chains to pricing, and when the AI-related preferences of customers are understood. Your best bet is to stay focused on growth by innovating new products and fine-tuning your business model. To better capitalize on the technological benefits of AI, stay on the offense.
Investment Is Needed in Both Human Resources and Technology: You cannot fully benefit from AI in technology if your employees aren't prepared. Consider that 69% of enterprises are facing a moderate, major or extreme skills gap when it comes to AI. Management and staff must be educated and trained in cross-functional teams across all processes and operations. Finding the right people for new jobs, and recruiting new employees for the requisite technical job categories, is essential.
ROI for the Business as a Whole: Consider the potential ROI for the entire business. If there is a bottleneck operation in the automation process, you need to increase throughput, not just in one area, but throughout the entire organization. With business process automation platforms growing by 63% in 2019, you may be tempted to just dive in and throw caution to the wind. But there is an effective playbook out there. Look, for example, at companies like Bosch, which is saving around $500,000 a year by automating some management operations across its thousands-strong network of suppliers. Find a similar story in your industry and study their playbook.
Continue to Cultivate and Develop AI: The business world is trending in the right direction when it comes to workplace culture and AI. According to this publication, Forbes, 65% of workers are optimistic... about having robot co-workers... and 64% of workers would trust a robot more than their manager. Take steps to ensure that you have an environment in which AI can succeed, a growing pool of knowledgeable talent and a heightened corporate awareness of general AI knowledge and its related benefits. Make certain these are fully embedded in your organization from top to bottom.
Effectively measuring ROI on AI is a universal challenge
Everyone faces the challenge of creating their own standards, KPIs and goals for AI. Several methods have been deployed successfully. Here are a few to guide you on your path:
Determine What It Will Cost Versus What It Will Save: Focus your use-case goals on savings rather than potential revenue growth. This includes reduced employee hours, reduced headcount and less time spent on processes. How much you invest in AI should be based on these savings forecasts, not on revenue uplift. This calculation determines how much you should be willing to invest and the break-even point for AI deployment. If the deployment is not successful, the organization will have risked only what it expected to save, rather than what it expected to add in revenue.
Focus on Soft Dollar Benefits: On top of cost savings and additional revenue, companies must also calculate soft dollar benefits, such as fewer errors, reduced turnover, and faster access to information and service. AI will improve employee productivity and customer satisfaction, and it will reveal areas where a company may have been unknowingly struggling to achieve maximum value.
Know When the Break-Even Point Will Be: The break-even point is when the cost savings of an AI project equal the investment. Once an organization has calculated what it hopes to save by implementing AI, only then should it begin to consider how much to invest. Many organizations struggle with predicting the break-even point for AI deployments. By allowing cost savings to dictate the initial AI investment, companies can begin to estimate when the break-even point should be reached (a small worked example follows this list).
New Product or Service = New Revenue Streams: This point is about maximizing ROI. Once an organization has become fluent in AI deployment, it's an ideal time to determine ways in which AI can help deliver new products or services to customers. Businesses that attempt this level of deployment should invest with both product development and AI piloting principles in mind. New products and services require additional investments beyond the AI technology itself (including marketing, sales, product management, etc.). Because of these additional investments, organizations should not use the previous equations for determining ROI.
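A minimal sketch of the break-even arithmetic described above, using entirely hypothetical figures:

```python
# All numbers are hypothetical and for illustration only.
annual_savings = 400_000   # projected yearly savings: hours, headcount, process time
soft_benefits = 50_000     # estimated yearly value of fewer errors, faster service, etc.
investment = 900_000       # total cost of the AI deployment

years_to_break_even = investment / (annual_savings + soft_benefits)
print(f"Break-even after roughly {years_to_break_even:.1f} years")  # ~2.0 years
```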
Stay committed to digitalization and automation as if your business and career depend on it, because they do. Stay committed to maximizing efficiency and the return on your investment, and, most importantly, to being able to demonstrate ROI. If you answer the questions above and execute the four implementation steps outlined here along a well-worn path to success, you will set yourself and your business up for triumph and put yourself vastly ahead of the competition. Refuse to take these facts into consideration, and you will be left in the dustbin of history.