What Opportunities are Appearing Thanks to AI, Artificial Intelligence? – We Heart

The AI sector is booming. Thanks to several leaps that have been made, we are closer than ever before to developing an AI that acts and reacts as a real human would do. Opportunities in this sector are flourishing, and there is always a way for you to get involved.


Employees: If you are searching for a job in the tech sector, one of the most rewarding roles you could find is working with AI. It is a mistake to assume that all AI development is focussed on developing android technologies. There are many other applications for AI, and each one needs experts at the helm to help bring it to fruition.

Whether you are a graduate or you are looking for a change of career, there is always a job opening you could look into. Even if you don't have a background in this tech, there are many ways to get involved, whether you are working on an AI's cognitive abilities or simply testing the product. Whatever your background and skillset might be, there is a role for you.

Investors: AI development is incredibly costly. Many of the smaller developers may have a great idea that could be world-changing if brought to fruition, but they often lack the finances to do so. This is where investors come in.

Investors like Tej Kohli, James Wise, or Jonathan Goodwin may have little expertise in these areas from their own personal experience, but they know how to recognise a viable idea when presented with one. Whether you are looking to get into venture investment yourself or you are a tech company looking for financial backing, their activities should give you some idea about the paths you need to follow.


Consumers: The world of AI isn't just open to investors and tech gurus. There is now a vast range of AI-driven tech emerging onto the market. You, as a consumer, get to be an instrumental part of driving this new tech forward, since your feedback gives developers insight into which features are popular and which aren't.

Just look at the boom in home assistants that has erupted in the past few years. We are now able to live in fully functioning smart homes, with music playing and lights turning off at a simple voice command. When you explore what AI has to offer as a consumer, your usage feeds back to the developers and helps them create the next generation of products.

No matter how interested you are in this sector, there is always going to be something you can pursue that will help to develop AI overall. This is an incredibly exciting era to live in, and AI is just one of the pieces of tech that could transform the world as we know it. Take a look at some of the roles and opportunities and see where you could jump in today.


MIT researchers release Clevrer to advance visual reasoning and neurosymbolic AI – VentureBeat

Researchers from Harvard University and the MIT-IBM Watson AI Lab have released Clevrer, a data set for evaluating AI models' ability to recognize causal relationships and carry out reasoning. A paper sharing initial findings about the CoLlision Events for Video REpresentation and Reasoning (Clevrer) data set was published this week at the entirely digital International Conference on Learning Representations (ICLR).

Clevrer builds on Clevr, a data set released in 2016 by a team from Stanford University and Facebook AI Research, including ImageNet creator Dr. Fei-Fei Li, for analyzing the visual reasoning abilities of neural networks. Clevrer co-creators Chuang Gan of the MIT-IBM Watson AI Lab and Pushmeet Kohli of DeepMind introduced the Neuro-Symbolic Concept Learner (NS-CL), a neural-symbolic model applied to Clevr, at ICLR one year ago.

"We present a systematic study of temporal and causal reasoning in videos. This profound and challenging problem, deeply rooted in the fundamentals of human intelligence, has just begun to be studied with modern AI tools," the paper reads. "Our newly introduced Clevrer data set and the NS-DR model are preliminary steps toward this direction."

The data set includes 20,000 synthetic videos of colliding objects on a tabletop created with the Bullet physics simulator, together with a natural language data set of questions and answers about objects in videos. The more than 300,000 questions and answers are categorized as descriptive, explanatory, predictive, and counterfactual.
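To make the four question categories concrete, a Clevrer-style annotation set could be grouped by category as in the short Python sketch below. The field names and video identifiers here are illustrative assumptions, not the data set's actual schema.

```python
# Illustrative sketch of grouping Clevrer-style video questions by
# category; the field names are assumed for illustration and are not
# the data set's actual schema.
from collections import Counter

questions = [
    {"video": "video_00001", "question": "What color is the sphere?", "type": "descriptive"},
    {"video": "video_00001", "question": "What caused the cube to move?", "type": "explanatory"},
    {"video": "video_00002", "question": "Will the cylinder hit the cube?", "type": "predictive"},
    {"video": "video_00002", "question": "What if the sphere were removed?", "type": "counterfactual"},
]

# Tally how many questions fall into each reasoning category.
counts = Counter(q["type"] for q in questions)
print(dict(counts))
```

In the real data set, the counterfactual and predictive categories are the ones that force a model to reason about events that never appear in the video itself.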


MIT-IBM Watson Lab director David Cox told VentureBeat in an interview that he believes the data set can make progress toward creating hybrid AI that combines neural networks and symbolic AI. IBM Research will apply the approach to IT infrastructure management and industrial settings like factories and construction sites, Cox said.

"I think this is actually going to be important for pretty much every kind of application," Cox said. "The very simple world that we're seeing, where these balls are moving around, is really the first step on the journey to look at the world, understand that world, be able to make plans about how to make things happen in that world. So we think that's probably going to be across many domains, and indeed vision and robotics are great places to start."

The MIT-IBM Watson AI Lab was created three years ago as a way to look for disruptive advances in AI related to the general theme of broad AI. Some of that work like ObjectNet highlighted the brittle nature of deep learning success stories like ImageNet, but the lab has focused on the combination of neural networks and symbolic or classical AI.

Like neural networks, symbolic AI has been around for decades. Cox argues that just as neural networks waited for the right conditions (enough data, ample compute), symbolic AI was waiting for neural networks in order to experience a resurgence.

Cox says the two forms of AI complement each other well and together can build more robust and reliable models with less data and more energy efficiency. In a conversation with VentureBeat at the start of the year, IBM Research director Dario Gil called neurosymbolic AI one of the top advances expected in 2020.

Rather than simply mapping inputs to outputs, as neural networks do, such systems can represent knowledge or programs, whatever you want the outcome to be. Cox says this may lead to AI better equipped to solve real-world problems.

"Google has a river of data, Amazon has a river of data, and that's great, but the vast majority of problems are more like puzzles, and we think that to move forward and actually make AI live beyond the hype we need to build systems that can do that, that have a logical component, can flexibly reconfigure themselves, that can act on the environment and experiments, that can interpret that information, and define their own internal mental models of the world," Cox said.

The joint MIT-IBM Watson AI Lab was created in 2017 with a $240 million investment.


Can Rats (AI Rats, That Is) Shed Light on How Neural Networks Work? – HPCwire

Rats have long been highly valued model organisms, helping researchers better understand biology and pursue drug development. Now, researchers from Harvard and DeepMind say AI versions of rats can help humans better understand how AI neural networks learn and develop, and how their real-life counterparts work. An interesting account of their work appeared on IEEE Spectrum today.

Here's a brief excerpt from the article, written by Edd Gent:

[A]uthors of a new paper due to be presented this week at the International Conference on Learning Representations have created a biologically accurate 3D model of a rat that can be controlled by a neural network in a simulated environment. They also showed that they could use neuroscience techniques for analyzing biological brain activity to understand how the neural net controlled the rat's movements.

The platform could be the neuroscience equivalent of a wind tunnel, says Jesse Marshall, co-author and postdoctoral researcher at Harvard, by letting researchers test different neural networks with varying degrees of biological realism to see how well they tackle complex challenges.

"Typical experiments in neuroscience probe the brains of animals performing single behaviors, like lever tapping, while most robots are tailor-made to solve specific tasks, like home vacuuming," he says. "This paper is the start of our effort to understand how flexibility arises and is implemented in the brain, and use the insights we gain to design artificial agents with similar capabilities."

It's a fascinating idea. The researchers built the AI rat model (muscles, joints, vision, movement, etc.) based on observing real rats, and then trained a neural network to guide the rat through four tasks: jumping over a series of gaps, foraging in a maze, trying to escape a hilly environment, and performing precisely timed pairs of taps on a ball.

As the rats improved at the tasks, the researchers were able to watch the controlling neural networks develop. It's early work, and the researchers agree that, because they built the model, much of what they learned was expected. One interesting insight, though, was that the neural activity seemed to occur over longer timescales than would be expected if it was directly controlling muscle forces and limb movements, according to Diego Aldarondo, a co-author and graduate student at Harvard.

He is quoted in the article: "This implies that the network represents behaviors at an abstract scale of running, jumping, spinning, and other intuitive behavioral categories," he says, "a cognitive model that has previously been proposed to exist in animals." This kind of work, say the researchers, will help us understand how neural networks evolve and also provide insight into how biological neural networks work.

Link to the IEEE Spectrum article by Edd Gent: https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/ai-powered-rat-valuable-new-tool-neuroscience

Link to the paper: https://openreview.net/forum?id=SyxrxR4KPS


Artificial Intelligence breaks barriers where policymakers may go wrong – The Nation

The COVID-19 outbreak has highlighted the importance of working on public health and technology together in order to fight the crisis. Countries across the world are opting for different measures, with several technologies at play to trace positive COVID-19 cases and stop the further spread of the virus.

China was the first country to report COVID-19 cases and is now witnessing the return of normalcy, but it also had to resort to technology to contain the spread. China used technologies such as smart imaging, drones and mobile apps to trace virus-carrying individuals.

The US and Europe, however, took a slightly different approach, using data derived via artificial intelligence to stop the spread of the virus. One such data provider is US-based Mobilewalla, which supplies countries with data in service of public health.

In an interview with Sputnik, Anindya Datta, the CEO and chairman of Mobilewalla, a consumer intelligence platform that is working with US task forces and other municipalities to fight the coronavirus, reflects on the importance of using artificial intelligence technologies to deal with the present-day crisis, especially in densely populated regions like South Asia.

Question: Where has Mobilewalla successfully carried out data distribution?

Anindya Datta: Mobilewalla data is being used by health services organizations and governmental entities around the world to better predict the spread of the novel coronavirus at both the macro (city/county/state/country) and micro (predicting patients at a hospital) level. Mobilewalla is working with various businesses and municipalities, providing data around individual mobility that acts as a proxy for social distancing. We can provide both a social isolation score and separate data attributes, features that can be used to build a custom score. Such data includes individual mobility metrics (indicating the daily distance traveled and unique locations), cluster identification (gatherings of a high number of devices) and individual device data at both the micro and macro levels. These are all foundational inputs that can be used in COVID-19 prediction models.
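The "individual mobility metrics" Datta describes (daily distance traveled and unique locations visited) can be illustrated with a short sketch. This is not the company's actual methodology, just a minimal Python illustration assuming location pings arrive as (latitude, longitude) pairs.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in km.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mobility_metrics(pings, grid=0.01):
    # Daily distance travelled plus a count of unique locations visited,
    # with locations bucketed onto a coarse lat/lon grid (roughly 1 km cells).
    distance = sum(
        haversine_km(*pings[i], *pings[i + 1]) for i in range(len(pings) - 1)
    )
    unique_cells = {(round(lat / grid), round(lon / grid)) for lat, lon in pings}
    return distance, len(unique_cells)
```

A device that barely moves scores near zero on both metrics, which is exactly the behaviour a social isolation score would reward.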

Question: In a country where a huge population resides in rural areas, how can AI be implemented?

Anindya Datta: The purpose of AI is to support decision making by revealing patterns that emerge from large amounts of data. AI is particularly useful in scenarios where (a) data can be collected at a scale allowing reliable patterns to emerge, and (b) manual efforts to both collect and analyse data do not work well.

In remote rural areas, manual data collection is challenging, and even if possible, such data is reliability-challenged due to the social barriers against honest disclosures of questions perceived as personal. In the current COVID-19 crisis, where data collection involves gathering information about personal habits and symptoms related to infection, these impediments only increase. Yet, a lot of this information can be gathered from behaviour exhibited on mobile phones, which have spread well into India's rural areas. Mobile data, accumulated at a scale, can allow for inferences to be made to help critical decision-making both in urban and rural areas.

Question: Please describe the ways in which AI and data can be used to battle COVID-19.

Anindya Datta: In the context of COVID-19, data and AI technologies are being used in new ways, particularly in countries that adopt a scientific approach to public health. Data scientists are creating machine learning models to predict infection and mortality rates and to determine resource needs and allocation based on these predictions.

AI can be used to power two key tasks of pandemic mitigation: infection tracking and infection spread prediction. If done correctly, AI can help uncover three foundational pieces of information crucial to tracking and predicting the spread: measuring social isolation by observing individual mobility; identifying clusters of more than a certain number of individuals, along with the corresponding locations; and assessing the risk of individuals and locations, at scale, by understanding the movement of infected individuals.
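The cluster-identification piece (flagging gatherings of more than a certain number of devices) could look something like the sketch below. The grid size and device threshold are made-up parameters for illustration, not anything Datta describes.

```python
from collections import defaultdict

def find_gatherings(device_positions, cell=0.001, min_devices=10):
    # Bucket device positions onto a fine lat/lon grid and flag any
    # cell holding at least `min_devices` devices as a potential
    # gathering; the cell size and threshold are illustrative values.
    cells = defaultdict(list)
    for device_id, (lat, lon) in device_positions.items():
        cells[(round(lat / cell), round(lon / cell))].append(device_id)
    return {key: ids for key, ids in cells.items() if len(ids) >= min_devices}
```

A real system would also need the time dimension (devices co-located in the same hour, not just the same day), but the binning idea is the same.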

Question: Do you have any suggestions for the government regarding the use of AI in slums and other high-density areas?

Anindya Datta: AI is particularly suited to analysing large amounts of data collected via machines. In slums and other high-density areas, in the context of the COVID-19 crisis, it is difficult to both maintain and track social distancing. For this reason, these regions can be triggers of infection waves that could prove deadly for the entire country. AI offers a mechanism to both collect and track behavioural signals from these areas, which can then inform early-warning and alert systems that drive tactical pandemic management activities.

AI, particularly big data and machine learning techniques, can be used to identify the infection risk of individuals, which can then be projected to those individuals and others in the geographic locations they have visited. Data scientists are creating models to track the spread of the virus and to determine resource needs and allocation based on the prediction of hard-hit areas. AI is an enabler; it identifies patterns and provides insights at speeds well beyond what humans can do manually.

But the key to the successful use of AI is the data being fed into the models. If this data is inaccurate or lacks scale, the ability of the model to predict outcomes will be negatively impacted. Data can be obtained in various ways, either by requesting information directly from individuals (such as what India is attempting to do with the Aarogya Setu app) or by seeking data from other available sources.

Question: Governments have been advocating apps, which are also mobile platforms, to fight against COVID-19. How useful are such apps in terms of contact tracing?

Anindya Datta: The Aarogya Setu app is a worthy effort and could serve as a useful consumer tool to minimise risky behaviour and receive current COVID-19 information. However, it is important to understand that the app by itself is simply a front end to information delivery. The effectiveness of the app is only as good as the information it has access to, but the app itself is not producing that information.

The quality of the risk information, and therefore the usefulness of the app, depends on a number of variables outside the app's control, including the magnitude of infection detection, which depends on testing. It is easy to see that the less the testing, the lower the value of the information disseminated via the app. What also matters is the risk models being used to build risk scores for geographies and sub-geographies. If the risk models are ineffective, even with adequate testing the information delivered will be of little value.

In South Asia, where social stigma still plays a key part in social interaction, one might question the likelihood of truthful disclosures at scale.

Another, perhaps more reliable, option is to use other available data sources that can model the activities of the population at scale. In many cases location data and behavioural data can be used as inputs to COVID-19 predictive models.

Question: Certain groups have been opposing the medics. Can AI help medics find ways to track them without going to the location?

Anindya Datta: Yes, location data from these groups can help doctors track them. Location-based data can be used to track individual mobility without in-person engagement. Depending on the source of the data, it is also possible to use this data to communicate the risk of infection in an anonymous manner, using digital identification or communication through mobile devices.


Importance of AI in the business quest for data-driven operations – TechTarget

The volume of data generated worldwide is soaring, with research firm IDC predicting that by 2025 the global datasphere will reach 175 zettabytes, up an astounding 430% from 33 zettabytes in 2018.

"There's a huge amount of data that companies have been able to capture, internal and external data, structured and unstructured data. And it has become very important for organizations to use all the data available to make data-driven decisions," said Madhu Bhattacharyya, managing director and global leader of Protiviti's enterprise data and analytics practice.

Any enterprise that wants to make use of its data stores must harness the power of artificial intelligence. The importance of AI in the business quest for data-driven decision-making is twofold: AI technologies are required to digest these massive data sets; and AI needs vast stores of data in order to get better at making accurate predictions. "In that way, the use of AI is going to give an organization a competitive edge," Bhattacharyya said.

From enabling businesses to deliver smoother customer experiences to helping them establish new business lines, AI's role in business is akin to the strategic value of electricity in the early 20th century, when electrification transformed industries like transportation and manufacturing and created new ones, like mass communications.

"AI is strategic because the scale, scope, complexity and the dynamism in business today is so extreme that humans can no longer manage it without artificial intelligence. AI is a competitive necessity that business has to deploy," said Chris Brahm, a partner and director at Bain & Co., and leader of the firm's global advanced analytics practice.

Much of AI's strategic value is based in the technology's ability to quickly identify patterns in data, even subtle or rapidly shifting ones, and then to learn how to adjust processes and procedures to produce the best outcome based on the information it uncovered.

As such, AI is being used to identify and deliver even more efficiencies in the automated business processes that operate businesses. It's being used to analyze vast volumes of data to create more personalized experiences for customers. And it's sorting large data sets to identify and perform tasks that it is trained to handle -- and then shift the tasks that need creativity and ingenuity to human workers to complete, thereby boosting organizational productivity.

"AI is very important to the enterprise in two main ways, namely automation and augmentation. Automation allows companies to scale their operation without the need to add more headcounts, while augmentation increases productivity and optimizes internal resources," said Lian Jye Su, a principal analyst at ABI Research.

AI can produce significant productivity gains for organizations by handling mundane, repetitive tasks and performing them at an exponentially higher scale, pace and accuracy than humans can. This leaves employees to focus on more of the business's higher-value functions, thereby layering efficiency gains on top of the productivity boost that the technology delivers.

When described this way, AI seems identical to automation technologies such as robotic process automation (RPA), but there is a significant difference between the two. With RPA, workers use identified steps in a targeted business process to configure the RPA software, tasks that the software bots then perform as programmed.

AI, on the other hand, uses data to generate the most efficient process and then, when combined with automation software such as RPA, performs that process at top efficiency. AI can then continue to refine its approach as it identifies more efficiencies to bring to the process.

If highly efficient automation is one of the biggest values that AI delivers, the other is its capability to provide on-the-job support for human workers.

"AI makes it easier for the human to interact with the information," said Seth Earley, author of The AI-Powered Enterprise and CEO of Earley Information Science.

The ability of AI to analyze data and then draw conclusions from it aids and augments a long list of varied tasks performed by humans. AI can assist doctors in making medical diagnoses. It can take in customer data and other information to suggest to retail associates which sales pitches to make. It can analyze that same data together with the customer's voice to identify the customer's emotional level for call center workers and provide ways to adjust the interaction to reach the optimal outcome.

The importance of AI in business functions like finance and security is growing. AI can sort through reams of financial and industrywide statistics along with economic, consumer and specific customer data to help insurance companies, banks and the like in their underwriting procedures. AI can take automated action against cyber threats by analyzing IT systems, security tools and information about known threats, alert internal cybersecurity teams to new problems, and prioritize the threats that need human attention.

Just as AI can surpass the automation capabilities of RPA, AI also goes beyond the data-driven insights produced with current technologies such as business intelligence tools. While both data analytics technologies and AI analyze data, AI utilizes its intelligence components to draw conclusions, make recommendations and then guide human workers through processes, adjusting its recommendations as a process unfolds and as it takes in new information in real time. That, in turn, allows the AI to continuously learn and refine its conclusions and improve its recommendations over its entire lifecycle.

"What AI is doing is processing information throughout the organization; and it's speeding that flow so we can react more quickly, be more agile and meet needs more effectively," Earley said.

But the efficiency and productivity gains delivered by AI-powered automation and augmentation are only part of the strategic importance of AI in business operations.

More significant, experts said, is the fact that AI gives organizations the ability to compete in a marketplace where customers, employees and partners increasingly expect the speed and personalization that the automation and augmentation deliver.

"AI is strategically important because it's building the capabilities that our customers demand and that our competitors will have," Earley said, saying that AI is the "digital machinery" that delivers the results that all those stakeholders want.

AI's role in using data to automate and enhance human work creates (and will continue to drive) cost-saving opportunities, improved sales and new revenue streams.

"Data is becoming overwhelming," said Karen Panetta, a fellow with the technical professional organization IEEE and Tufts University professor of electrical and computer engineering, "so if you're not going to use these new AI technologies, you'll be left behind in every aspect -- in understanding customers, new design methods, in efficiency and in every other area."


THE OUTER LIMITS: Successfully Implementing AI at the Edge – Electronic Design

Date: Thursday, June 04, 2020 | Time: 2:00 PM Eastern Daylight Time | Sponsor: Avnet | Duration: 1 Hour

Register Today!

Summary

The explosive, and often disruptive, growth of the Internet of Things has accelerated its expansion in the vertical markets of countless industries. In response, edge computing has presented itself as a solution to issues ranging from heavy use of server-oriented IoT functionality and excessive bandwidth use to advanced security and enhanced functionality.

As AI has evolved into a significant force-multiplier in intelligent IoT devices and products, striking a balance between cloud and edge intelligence has become crucial to implementation. Presented by Alix Paultre, this webinar will cover the make-or-break aspects of selecting and implementing hardware for AI-powered solutions at the edge, and what we'll see as next-generation smart infrastructures emerge.

Overview of Topics:

PLUS AI EVERYWHERE. An exploration of the key ways AI at the edge will impact smart cities, facilities, and homes through tomorrow's intelligent infrastructures.

Speaker

Alix Paultre, Senior Technology Editor and European Correspondent

Alix Paultre is an embedded electronics industry writer and journalist with over two decades of experience in the field. He currently resides in Wiesbaden, Germany, working as a Contributing Editor and European Correspondent for a variety of industry publications. Alix has also served as the Editor in Chief of Power Systems Design and the Editorial Director for the Electronic Design Group at Advantage Business Media, overseeing Electronic Component News and Wireless Design and Development. Alix started in the electronics media field as an Editor at Electronic Products (under Hearst), and gained his early electronics experience as an Electronic Warfare/Signals Intelligence Analyst for the former U.S. Army Security Agency (ASA).

The Amazing AI Giveaway

To qualify, register below and join the event by 2:00 PM ET on June 4 for a chance to win one of the following prizes. Winners will be notified the following day.



Here's your stupid horoscope made by smart AI – The Next Web

Apparently there are still some people who believe that horoscopes are legitimate. I'd like to sell each of them a share in the Brooklyn Bridge (and direct them to some of our science articles here on TNW). But now is definitely not the time to be pissing off potential readers. So I'm meeting them halfway instead.

I made OpenAI's GPT-2 (one of the world's most advanced AI-powered text generators) spit out horoscopes for all 12 zodiac signs. Accomplishing this was actually quite easy. I went to the Talk To Transformer website, where programmer Adam King maintains an interactive version of GPT-2, and entered "What is my horoscope for…" prompts for each sign.

It works like this:

GPT-2 isn't always coherent. In the video above it's generating nonsense and gibberish. And that's why, when I was generating the horoscopes below, I was tempted to just keep smashing the "generate another" button until something funny, clever, or shocking came out. But, since literally anything GPT-2 spits out will be just as accurate as any human-generated horoscope, I decided it would be a waste of everyone's time to do that.

Instead, what you're about to see are twelve completely unedited horoscopes, generated by entering the same "What is my horoscope for…" prompt for each zodiac sign. The only time I pressed the generate button more than once was for Cancer, and that's because it spit out some morbid stuff about disease that didn't seem appropriate right now.
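For anyone who wants to reproduce this locally rather than through the Talk To Transformer site, a comparable setup with Hugging Face's transformers library might look like the sketch below. The "gpt2" checkpoint here is the small public model, which is an assumption; the site does not document exactly which GPT-2 variant it serves.

```python
def horoscope_prompt(sign):
    # The same prompt pattern used on Talk To Transformer.
    return f"What is my horoscope for {sign}?"

def generate_horoscope(sign, max_length=80):
    # Requires the `transformers` package; downloads the public "gpt2"
    # checkpoint (an assumption about the variant) on first use.
    from transformers import pipeline, set_seed
    set_seed(42)  # fix the sampling seed so runs are repeatable
    generator = pipeline("text-generation", model="gpt2")
    out = generator(horoscope_prompt(sign), max_length=max_length,
                    num_return_sequences=1)
    return out[0]["generated_text"]
```

Because GPT-2 samples tokens randomly, every call without a fixed seed yields a different horoscope, which is the whole joke.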

Anyway, we hope you enjoy this stupid horoscope made by smart AI.

Scorpio:

Horus (Scorpio), the lord of hell, is the number one wizard in this lesson. Does that mean he is the most powerful being in the universe? Definitely. Maybe in your life, too. But scorpios is the only alchemy of hope available to anyone of the twelve zodiacal signs.

Aries:

Author's note: You'll notice in the quote below that this is a horoscope for Pisces despite the fact I asked it for Aries. Apparently, if you're an Aries, the AI's advice is to become a Pisces instead.

As the astrological sign of Pisces, you might be nervous about your relationship with your husband/wife, your feeling that your friends need attention and perhaps looking forward to the time when your self-esteem could recover a bit after a stressful week or two. This is normal, and this is fine!

As a Pisces, you are very close to others, you might feel a little restless and overwhelmed, and you are good at following people around in life. If you want to get more personal, read our article on the 13 notes of your birth chart.

Leo:

Leo is always looking for new things to be doing or have to do so it can further evolve as an individual. I feel it will be the time for a more intensive working load in the days ahead of me. I will be home-bound from March 8th. So until then I will do all my activities home based so you may not be seeing me for a few weeks.

Please do come visit, and to anyone who will be in my near future, this email is a prompt and dear request.

Yours Sincerely,

Author's note: In the above quote, GPT-2 followed "Yours Sincerely," with a Twitter username that actually appears to belong to a real person. We purposely omitted it from this article, but thought it was noteworthy enough to mention.

Cancer:

See the sun after a period of deep darkness

Have a gift of comedy and take it out on others

Shout it from the top of your lungs

Bring forth as much pain as you think is needed

Put your faith in love and your faith in God

Treat people with kindness and don't be rude

Shout your defiance from your rooster

Finish what you start. It'll be easier to continue when you start

Get results with every first thing you do

Promote peace and unity wherever you can

By my hand I give life to everything, and every thing in turn gives life to me

Forget love, all that

Pisces

Due to the fact that my being a spiritual person, I have such a connection to other spiritual entities that for me the planets of the zodiac are a source of comfort for me. In the past I have a relationship with Saturn and Venus. However I can't justify having a relationship with other planets because they are not really in your field of vision. My life will only have meaning if the universe says my life's work matters.

Aqours is all I can do for pisces what does it mean to me?

I am your energy of hope! Let your touch permeate into my heart and radiates your love. I am yours and I am yours too.

Taurus

A horoscope means a glimpse into the future and depends on different factors like your birth date, the colour of the moon, the cycles of your moon, some life circumstances and your own values and temperament. Even the address on your birth certificate is a very important one.

For a person born under the sign of the Roman god Taurus, the phases of the moon are seen as more negative than for a person born under the sign of the sun, for example. Your last lunar phase, called the New Moon, is the moment that your Moon Phase indicates most strongly. (That's why a crescent Moon will always represent the New Moon.)

What signs are associated with astrology?

Sagittarius

You are energetic and passionate about life and your deepest desires. Your powerful intuition and vitality are infectious. Feelings of worthlessness and arrogance exist within you. If you feel drained, do not hesitate to seek help. Some aspects of your horoscope are higher than others. The Ascendant means to create and nurture life. They will have a higher responsibility. Empathy, optimism, sociability, and a willingness to feel helpless are your best traits. Know yourself and your best self.

Capricorn

You value the gentle and approachable aspects of life. You are extremely intuitive, likely to feel thoughts and feelings as immediate as your breath. This part of your personality tends to express itself in relationships through the support of others. Because your ambition has always come first, you will not hesitate to let others have their way. In order to express your full potential in life, you will have to approach any situation with tact and skill.

Gemini

A major chart reveal or astrological revelation can never happen at any other time than at this age. It's important to remember that since the age of 22, your horoscope has been preparing you for this date. If your total and life-long horoscope have released a forecast of having a boy as a partner, that could be because your high school sweetheart graduated, or if your future partner or future husband is still in high school.

Featuring so many unique qualities about you, it is the responsibility of the astrologer to bring to light these talents and talents worth experiencing!

How was your horoscope revealed?

It can be difficult for some astrologers to think.

Virgo

At this time, Scorpio is your ruler and Jupiter your friend, be they as great or as small as they may be. During the course of your life, you will have to turn and face both of them in order to remain self-sufficient. Fortunately for you, that can be done.

If the case arises that Scorpio is in conjunction with your Mercury, meaning Jupiter is in conjunction with your Sun, you should begin to look out for the influence that your Karmic tendencies hold for you. The placement of Jupiter in your sign may influence how important your personality is to you, as well as the strengths and weaknesses you have in dealing with other people.

Libra

The bright, radiant, adventurous personality of Libra is her greatest strength. Her loyalty and intimacy are palpable, and she is able to trust perfectly. Her heart is ruled by passion, but her emotions are often very tender, trying to find harmony. This allows her to know that other people matter and is the great strong master of balancing moods.

Libra's desire to maintain balance and harmony and to be guided by the stars are very strong. A naturally good speaker and a well-spoken public speaker, Libra expresses her thoughts and ideas as accurately and subtly as she can. In practical terms, this means that Libra usually wants to improve the lives of others.

Aquarius

Cancer

Current position:

Starting vocation:

Potential aspiration:

Sign of passage:

Perception of negativity:

Self-actualization:

Constellation:

This is the Aquarian Age of Perseverance. You want to be a strong person, but you need to learn not to be too conscious about being a strong person.

Your greatest fantasies:

Dreams/Life goals:

They may be abstract ideas or life lessons.

Childhood personas:

Sources of inspiration:

Television, movies, books, newspapers, etc.

Consciousness side:

Sorry about that last one. Evidently being an Aquarius involves a lengthy acceptance process. On the bright side, at least you're not an Aries, right? They don't even get a horoscope this week. Let us know what you think about GPT-2's Zodiac prowess in the comments.

Published April 28, 2020 18:16 UTC

Visit link:

Here's your stupid horoscope made by smart AI - The Next Web

Reducing the carbon footprint of artificial intelligence – MIT News

Artificial intelligence has become a focus of certain ethical concerns, but it also has some major sustainability issues.

Last June, researchers at the University of Massachusetts at Amherst released a startling report estimating that the power required for training and searching a certain neural network architecture entails emissions of roughly 626,000 pounds of carbon dioxide. That's equivalent to nearly five times the lifetime emissions of the average U.S. car, including its manufacturing.

This issue gets even more severe in the model deployment phase, where deep neural networks need to be deployed on diverse hardware platforms, each with different properties and computational resources.

MIT researchers have developed a new automated AI system for training and running certain neural networks. Results indicate that, by improving the computational efficiency of the system in some key ways, the system can cut the carbon emissions involved, in some cases down to low triple digits of pounds.

The researchers' system, which they call a once-for-all network, trains one large neural network comprising many pretrained subnetworks of different sizes that can be tailored to diverse hardware platforms without retraining. This dramatically reduces the energy usually required to train each specialized neural network for new platforms, which can include billions of internet-of-things (IoT) devices. Using the system to train a computer-vision model, they estimated that the process required roughly 1/1,300 the carbon emissions of today's state-of-the-art neural architecture search approaches, while reducing the inference time by 1.5-2.6 times.

"The aim is smaller, greener neural networks," says Song Han, an assistant professor in the Department of Electrical Engineering and Computer Science. "Searching efficient neural network architectures has until now had a huge carbon footprint. But we reduced that footprint by orders of magnitude with these new methods."

The work was carried out on Satori, an efficient computing cluster donated to MIT by IBM that is capable of performing 2 quadrillion calculations per second. The paper is being presented next week at the International Conference on Learning Representations. Joining Han on the paper are four undergraduate and graduate students from EECS, MIT-IBM Watson AI Lab, and Shanghai Jiao Tong University.

Creating a once-for-all network

The researchers built the system on a recent AI advance called AutoML (for automatic machine learning), which eliminates manual network design. Neural networks automatically search massive design spaces for network architectures tailored, for instance, to specific hardware platforms. But there's still a training efficiency issue: each model has to be selected, then trained from scratch for its platform architecture.

"How do we train all those networks efficiently for such a broad spectrum of devices, from a $10 IoT device to a $600 smartphone? Given the diversity of IoT devices, the computation cost of neural architecture search will explode," Han says.

The researchers invented an AutoML system that trains only a single, large once-for-all (OFA) network that serves as a mother network, nesting an extremely high number of subnetworks that are sparsely activated from the mother network. OFA shares all its learned weights with all subnetworks, meaning they come essentially pretrained. Thus, each subnetwork can operate independently at inference time without retraining.

The team trained an OFA convolutional neural network (CNN), commonly used for image-processing tasks, with versatile architectural configurations, including different numbers of layers and neurons, diverse filter sizes, and diverse input image resolutions. Given a specific platform, the system uses the OFA as the search space to find the best subnetwork based on the accuracy and latency tradeoffs that correlate to the platform's power and speed limits. For an IoT device, for instance, the system will find a smaller subnetwork. For smartphones, it will select larger subnetworks, but with different structures depending on individual battery lifetimes and computation resources. OFA decouples model training and architecture search, and spreads the one-time training cost across many inference hardware platforms and resource constraints.
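That selection step can be sketched in a few lines of Python. Everything here is illustrative and assumed, not the researchers' code: the `predict_latency` and `predict_accuracy` cost models, the candidate depths, widths, and resolutions, and the `select_subnetwork` helper are all made up to show the idea of picking the most accurate subnetwork that fits a platform's latency budget.

```python
# Hypothetical sketch of OFA-style subnetwork selection. The cost models
# and the search space below are invented stand-ins, not the real system.
from itertools import product

def predict_latency(depth, width, resolution):
    # Stand-in cost model: latency grows with depth, width, and input size.
    return depth * width * (resolution / 224) ** 2

def predict_accuracy(depth, width, resolution):
    # Stand-in accuracy model: bigger subnetworks score higher,
    # with diminishing returns.
    return 1 - 1 / (depth * width * resolution / 1000 + 1)

def select_subnetwork(latency_budget):
    """Pick the most accurate subnetwork that fits the platform's budget."""
    search_space = product([2, 3, 4],        # layers per block
                           [3, 4, 6],        # width multiplier
                           [160, 192, 224])  # input resolution
    feasible = [(predict_accuracy(d, w, r), (d, w, r))
                for d, w, r in search_space
                if predict_latency(d, w, r) <= latency_budget]
    return max(feasible)[1] if feasible else None

# A tight budget (an IoT device) yields a smaller subnetwork than a
# loose budget (a smartphone), mirroring the behavior described above.
small = select_subnetwork(latency_budget=8)
large = select_subnetwork(latency_budget=30)
```

The point of the sketch is the shape of the search, not the numbers: the OFA acts as the search space, and the per-platform cost is just this cheap lookup rather than a fresh training run.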

This relies on a progressive shrinking algorithm that efficiently trains the OFA network to support all of the subnetworks simultaneously. It starts by training the full network at the maximum size, then progressively shrinks the network to include smaller subnetworks. Smaller subnetworks are trained with the help of larger subnetworks so they grow together. In the end, all of the subnetworks with different sizes are supported, allowing fast specialization based on the platform's power and speed limits, and new hardware devices can be added with zero training cost.

In total, one OFA, the researchers found, can comprise more than 10 quintillion (that's a 1 followed by 19 zeroes) architectural settings, covering probably all platforms ever needed. But training the OFA and searching it ends up being far more efficient than spending hours training each neural network per platform. Moreover, OFA does not compromise accuracy or inference efficiency. Instead, it provides state-of-the-art ImageNet accuracy on mobile devices. And, compared with state-of-the-art industry-leading CNN models, the researchers say OFA provides a 1.5-2.6 times speedup, with superior accuracy. "That's a breakthrough technology," Han says. "If we want to run powerful AI on consumer devices, we have to figure out how to shrink AI down to size."
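The weight-sharing idea underlying progressive shrinking can be shown in a toy form. This is a deliberately simplified sketch: the `OnceForAllNet` class and its list-of-lists "weights" are illustrative stand-ins (the real OFA is a CNN), but the essential property holds: a smaller subnetwork is carved out of the shared weights, so it arrives "essentially pretrained".

```python
# Toy illustration of weight sharing between a mother network and its
# subnetworks. Names and structure are invented for this example only.
import random

class OnceForAllNet:
    def __init__(self, max_layers=4, max_width=8):
        random.seed(0)
        # One shared weight row per layer; subnetworks slice into them.
        self.weights = [[random.random() for _ in range(max_width)]
                        for _ in range(max_layers)]

    def subnetwork(self, layers, width):
        # A smaller subnetwork reuses a prefix of the shared weights,
        # so no retraining is needed to instantiate it.
        return [row[:width] for row in self.weights[:layers]]

ofa = OnceForAllNet()
full = ofa.subnetwork(4, 8)   # the maximum-size network
small = ofa.subnetwork(2, 4)  # a shrunken subnetwork for a weak device
# The small subnetwork's weights are literally a slice of the full ones.
assert small[0] == full[0][:4]
```

Progressive shrinking then amounts to training `full` first and fine-tuning the smaller slices jointly with it, so every slice stays usable on its own.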

"The model is really compact. I am very excited to see OFA can keep pushing the boundary of efficient deep learning on edge devices," says Chuang Gan, a researcher at the MIT-IBM Watson AI Lab and co-author of the paper.

"If rapid progress in AI is to continue, we need to reduce its environmental impact," says John Cohn, an IBM fellow and member of the MIT-IBM Watson AI Lab. "The upside of developing methods to make AI models smaller and more efficient is that the models may also perform better."

The rest is here:

Reducing the carbon footprint of artificial intelligence - MIT News

Nuclear Fusion and Artificial Intelligence: the Dream of Limitless Energy – AI Daily

Ever since the 1930s, when scientists, notably Hans Bethe, discovered that nuclear fusion was possible, researchers have striven to initiate and control fusion reactions to produce useful energy on Earth. The best example of a fusion reaction is at the core of stars like the Sun, where hydrogen atoms are fused together to make helium, releasing a great deal of energy that powers the heat and light of the star. On Earth, scientists need to heat and control plasma, an ionised state of matter similar to gas, to cause particles to fuse and release their energy. Unfortunately, it is very difficult to start fusion reactions on Earth, as they require conditions similar to those in the Sun, namely very high temperature and pressure, and scientists have been trying to find a solution for decades.

In May 2019, a workshop detailing how fusion could be advanced using machine learning was held that was jointly supported by the Department of Energy Offices of Fusion Energy Science (FES) and Advanced Scientific Computing Research (ASCR). In their report, they discuss seven 'priority research opportunities':

'Science Discovery with Machine Learning' involves bridging gaps in theoretical understanding via identification of missing effects using large datasets; the acceleration of hypothesis generation and testing and the optimisation of experimental planning. Essentially, machine learning is used to support and accelerate the scientific process itself.

'Machine Learning Boosted Diagnostics' is where machine learning methods are used to maximise the information extracted from measurements, systematically fuse multiple data sources, and infer quantities that are not directly measured. Classification techniques, such as supervised learning, could be used on data extracted from the diagnostic measurements.

'Model Extraction and Reduction' includes the construction of models of fusion systems and the acceleration of computational algorithms. Effective model reduction can result in shortened computation times and mean that simulations (for the tokamak fusion reactor, for example) run faster than real-time execution.

'Control Augmentation with Machine Learning'. Three broad areas of plasma control research would benefit significantly from machine learning: control-level models; real-time data analysis algorithms; and optimisation of plasma discharge trajectories for control scenarios. Using AI to improve control mathematics could manage the uncertainty in calculations and ensure better operational performance.

'Extreme Data Algorithms' involves finding methods to manage the amount and speed of data that will be generated by fusion experiments and models.

'Data-Enhanced Prediction' will help monitor the health of the plant system and predict faults, such as disruptions, which must be mitigated.

'Fusion Data Machine Learning Platform' is a system that can manage, format, curate, and enable access to experimental and simulation data from fusion models for optimal usability by machine learning algorithms.
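To make the diagnostics opportunity above concrete, here is a minimal sketch of supervised classification on diagnostic-style data. Everything in it is an assumption for illustration: the (density, temperature) features, the "stable"/"disruptive" regime labels, and the nearest-centroid classifier itself, which is far simpler than anything real fusion work would use.

```python
# Minimal nearest-centroid classifier on made-up plasma diagnostic
# readings. Features, labels, and values are illustrative only.
def centroid(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(sample, centroids):
    # Assign the label whose centroid is closest in squared distance.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Synthetic training data: (density, temperature) readings per regime.
stable = [(1.0, 5.0), (1.1, 5.2), (0.9, 4.8)]
disruptive = [(3.0, 2.0), (3.2, 2.1), (2.8, 1.9)]
centroids = {"stable": centroid(stable), "disruptive": centroid(disruptive)}

# A new reading is labelled by its nearest regime centroid.
label = classify((1.0, 5.1), centroids)
```

The workshop's point is that this kind of learned mapping, scaled up to many fused data sources, can infer quantities that are never measured directly.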

Read more:

Nuclear Fusion and Artificial Intelligence: the Dream of Limitless Energy - AI Daily

Scientists think we'll finally solve nuclear fusion thanks to cutting-edge AI – The Next Web

Scientists believe the world will see its first working thermonuclear fusion reactor by the year 2025. That's a tall order in short form, especially when you consider that fusion has been "almost here" for nearly a century.

Fusion reactors (not to be confused with common fission reactors) are the holiest of Grails when it comes to physics achievements. According to most experts, a successful fusion reactor would function as a near-unlimited source of energy.

In other words, if there's a working demonstration of an actual fusion reactor by 2025, we could see an end to the global energy crisis within a few decades.

TAE, one of the companies working on the fusion problem, says the big difference-maker now is machine learning. According to a report from Forbes, Google's been helping TAE come up with modern solutions to decades-old math problems by using novel AI systems to facilitate the discovery of new fusion techniques.

The CEO of TAE says his company will commercialize fusion technology within the decade. He's joined by executives from several other companies and academic institutions who believe we're finally within a decade or so of debuting the elusive energy technology; MIT researchers say they'll have theirs done before 2028.

But this level of optimism isn't reflected in the general scientific community. The promise of nuclear fusion has eluded the world's top researchers for so long now that, barring a major peer-reviewed eureka moment, most self-respecting physicists are taking these new developments with an industrial-sized grain of salt.

The problem's pretty simple: smash a couple of atoms together and soak up the resultant energy. But fusion is really, really difficult. It occurs naturally in stars such as our sun, but recreating the sun's conditions on Earth is simply not possible with our current technology.

First off, the sun is much more massive than the Earth, and that mass comes with the fusion-friendly benefit of increased gravity.

All that extra gravity smashes the sun's atoms into one another. The combination of pressure and heat (the sun's core rocks out at a spicy 27 million degrees Fahrenheit) forces hydrogen atoms to fuse together, thus becoming helium atoms. This results in the expulsion of energy.

What makes this type of energy so wonderful is the fact that fusion produces so much more energy than current methods. At least, it should. Unfortunately, all the current terrestrial attempts at fusion have come up short because, though many have succeeded in fusing atoms, producing the temperatures required to fuse atoms on Earth always takes more energy than the fused atoms release in the process. This is because, lacking the requisite gravity, our only choice is to turn up the heat. Instead of 27 million degrees, Earth-bound fusion occurs at several hundred million degrees.
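The energy balance described above is commonly summarized by the gain factor Q, the ratio of fusion energy released to the heating energy spent. The sketch below uses invented numbers purely to illustrate the bookkeeping; it is not based on any particular experiment's figures.

```python
# Back-of-the-envelope illustration of the fusion energy-balance problem.
# All numbers are illustrative, not measured values.
def fusion_gain(energy_out_mj, heating_energy_in_mj):
    """Q = energy released / energy spent heating the plasma.

    Q > 1 means net energy gain ("breakeven"); the article notes that
    terrestrial experiments so far have stayed below this threshold.
    """
    return energy_out_mj / heating_energy_in_mj

# A hypothetical experiment that spends 100 MJ heating plasma but
# releases only 70 MJ has Q = 0.7: a net energy loss.
q = fusion_gain(70, 100)
```

Framed this way, the role of AI in the article is indirect: faster data analysis means faster experimental iteration toward configurations that push Q above 1.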

But now we've harnessed the power of the machines, something previous researchers never had at their disposal. So just how, exactly, is AI supposed to be the difference-maker? Mostly in the area of data analysis. Physics experiments aren't exactly simple, and sifting through the figurative tons of data produced by a fusion experiment is an inhumane task best left to the machines.

By giving physicists superhuman analysis abilities, AI lets them turn around experiments faster. This enables quicker iterations and more meaningful results. Whether or not this is the game changer that'll finally put us over the fusion hump remains to be seen, but there's plenty of reason for optimism.

Published April 27, 2020 23:18 UTC

Continue reading here:

Scientists think we'll finally solve nuclear fusion thanks to cutting-edge AI - The Next Web

Artificial Intelligence in Agriculture Market Worth $4.0 Billion by 2026 – Exclusive Report by MarketsandMarkets – PRNewswire

CHICAGO, April 28, 2020 /PRNewswire/ -- According to the new market research report "Artificial Intelligence in Agriculture Market by Technology (Machine Learning, Computer Vision, and Predictive Analytics), Offering (Software, Hardware, AI-as-a-Service, and Services), Application, and Geography - Global Forecast to 2026", published by MarketsandMarkets, the Artificial Intelligence in Agriculture Market is estimated to be USD 1.0 billion in 2020 and is projected to reach USD 4.0 billion by 2026, at a CAGR of 25.5% between 2020 and 2026. The market growth is driven by the increasing implementation of data generation through sensors and aerial images for crops, increasing crop productivity through deep-learning technology, and government support for the adoption of modern agricultural techniques.
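As a quick arithmetic check, the report's headline figures are internally consistent: USD 1.0 billion growing at a 25.5% CAGR for the six years from 2020 to 2026 lands at roughly USD 3.9 billion, in line with the projected USD 4.0 billion.

```python
# Sanity check on the report's stated figures using the standard
# compound-growth formula: value * (1 + CAGR) ** years.
def project(value, cagr, years):
    return value * (1 + cagr) ** years

# USD 1.0B in 2020, 25.5% CAGR, 6 years -> about USD 3.9B by 2026.
projected_2026 = project(1.0, 0.255, 6)
```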

Request for PDF Brochure:

https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=159957009

By application, drone analytics segment projected to register highest CAGR during forecast period

The market for drone analytics is expected to grow at the highest rate due to its extensive use for diagnosing and mapping to evaluate crop health and to make real-time decisions. Favorable government mandates for the use of drones in agriculture are also expected to fuel the growth of the drone analytics market. Increasing awareness among farm owners regarding the advantages associated with AI technology is expected to further fuel the growth of the AI in agriculture market.

By technology, computer vision segment to register highest CAGR during forecast period

The increasing use of computer vision technology for agriculture applications, such as plant image recognition and continuous plant health monitoring and analysis, is one of the major factors contributing to the growth of the computer vision segment. The other factors include higher adoption of robots and drones in agriculture farms and increasing demand for improved crop yield due to the rising population. Computer vision allows farmers and agribusinesses alike to make better decisions in real-time.

Browse in-depth TOC on "Artificial Intelligence in Agriculture Market": 81 Tables, 40 Figures, 152 Pages

Request more details on:

https://www.marketsandmarkets.com/Enquiry_Before_BuyingNew.asp?id=159957009

AI in agriculture market in APAC projected to register highest CAGR from 2020 to 2026

The AI in agriculture market in Asia Pacific is expected to witness the highest growth during the forecast period. The wide-scale adoption of AI technologies in agriculture farms is the key factor supporting the growth of the market in this region. AI is increasingly applied in the agriculture sector in developing countries, such as India and China. The increasing adoption of deep learning and computer vision algorithms for agriculture applications is also expected to fuel the growth of the AI in agriculture market in the Asia Pacific region.

International Business Machines Corp. (IBM) (US), Deere & Company (John Deere) (US), Microsoft Corporation (Microsoft) (US), Farmers Edge Inc. (Farmers Edge) (Canada), The Climate Corporation (Climate Corp.) (US), ec2ce (ec2ce) (Spain), Descartes Labs, Inc. (Descartes Labs) (US), AgEagle Aerial Systems (AgEagle) (US), and aWhere Inc. (aWhere) (US) are the prominent players in the AI in agriculture market.

Related Reports:

Artificial Intelligence Market by Offering (Hardware, Software, Services), Technology (Machine Learning, Natural Language Processing, Context-Aware Computing, Computer Vision), End-User Industry, and Geography - Global Forecast to 2025

Artificial Intelligence in Manufacturing Market by Offering (Hardware, Software, and Services), Technology (Machine Learning, Computer Vision, Context-Aware Computing, and NLP), Application, Industry, and Geography - Global Forecast to 2025

About MarketsandMarkets

MarketsandMarkets provides quantified B2B research on 30,000 high-growth niche opportunities/threats that will impact 70% to 80% of worldwide companies' revenues. It currently serves 7,500 customers worldwide, including 80% of global Fortune 1000 companies. Almost 75,000 top officers across eight industries worldwide approach MarketsandMarkets for their pain points around revenue decisions.

Our 850 full-time analysts and SMEs at MarketsandMarkets are tracking global high-growth markets following the "Growth Engagement Model GEM". The GEM aims at proactive collaboration with clients to identify new opportunities, identify the most important customers, write "attack, avoid and defend" strategies, and identify sources of incremental revenue for both the company and its competitors. MarketsandMarkets is now coming up with 1,500 MicroQuadrants (positioning top players across leaders, emerging companies, innovators, and strategic players) annually in high-growth emerging segments. MarketsandMarkets is determined to benefit more than 10,000 companies this year with their revenue planning and to help them take their innovations/disruptions to market early by providing research ahead of the curve.

MarketsandMarkets' flagship competitive intelligence and market research platform, "Knowledge Store", connects over 200,000 markets and entire value chains for a deeper understanding of unmet insights, along with market sizing and forecasts of niche markets.

Contact: Mr. Sanjay Gupta
MarketsandMarkets INC.
630 Dundee Road, Suite 430
Northbrook, IL 60062
USA: +1-888-600-6441
Email: [emailprotected]
Visit Our Web Site: https://www.marketsandmarkets.com
Research Insight: https://www.marketsandmarkets.com/ResearchInsight/ai-in-agriculture-market.asp
Content Source: https://www.marketsandmarkets.com/PressReleases/ai-in-agriculture.asp

SOURCE MarketsandMarkets

Originally posted here:

Artificial Intelligence in Agriculture Market Worth $4.0 Billion by 2026 - Exclusive Report by MarketsandMarkets - PRNewswire

Doctors are using AI to triage covid-19 patients. The tools may be here to stay – MIT Technology Review

The pandemic, in other words, has turned into a gateway for AI adoption in health care, bringing both opportunity and risk. On the one hand, it is pushing doctors and hospitals to fast-track promising new technologies. On the other, this accelerated process could allow unvetted tools to bypass regulatory processes, putting patients in harm's way.

"At a high level, artificial intelligence in health care is very exciting," says Chris Longhurst, the chief information officer at UC San Diego Health. "But health care is one of those industries where there are a lot of factors that come into play. A change in the system can have potentially fatal unintended consequences."

Before the pandemic, health-care AI was already a booming area of research. Deep learning, in particular, has demonstrated impressive results for analyzing medical images to identify diseases like breast and lung cancer or glaucoma at least as accurately as human specialists. Studies have also shown the potential of using computer vision to monitor elderly people in their homes and patients in intensive care units.

But there have been significant obstacles to translating that research into real-world applications. Privacy concerns make it challenging to collect enough data for training algorithms; issues related to bias and generalizability make regulators cautious about granting approvals. Even for applications that do get certified, hospitals rightly have their own intensive vetting procedures and established protocols. "Physicians, like everybody else, we're all creatures of habit," says Albert Hsiao, a radiologist at UCSD Health who is now trialing his own covid detection algorithm based on chest x-rays. "We don't change unless we're forced to change."

As a result, AI has been slow to gain a foothold. "It feels like there's something there; there are a lot of papers that show a lot of promise," said Andrew Ng, a leading AI practitioner, in a recent webinar on its applications in medicine. "But it's not yet as widely deployed as we wish."

Pierre Durand, a physician and radiologist based in France, experienced the same difficulty when he cofounded the teleradiology firm Vizyon in 2018. The company operates as a middleman: it licenses software from firms like Qure.ai and a Seoul-based startup called Lunit and offers the package of options to hospitals. Before the pandemic, however, it struggled to gain traction. "Customers were interested in the artificial-intelligence application for imaging," Durand says, "but they could not find the right place for it in their clinical setup."

The onset of covid-19 changed that. In France, as caseloads began to overwhelm the health-care system and the government failed to ramp up testing capacity, triaging patients via chest x-ray, though less accurate than a PCR diagnostic, became a fallback solution. Even for patients who could get genetic tests, results could take at least 12 hours and sometimes days to return, too long for a doctor to wait before deciding whether to isolate someone. By comparison, Vizyon's system using Lunit's software, for example, takes only 10 minutes to scan a patient and calculate a probability of infection. (Lunit says its own preliminary study found that the tool was comparable to a human radiologist in its risk analysis, but this research has not been published.) "When there are a lot of patients coming," Durand says, "it's really an attractive solution."

Vizyon has since signed partnerships with two of the largest hospitals in the country and says it is in talks with hospitals in the Middle East and Africa. Qure.ai, meanwhile, has now expanded to Italy, the US, and Mexico on top of existing clients. Lunit is also now working with four new hospitals each in France, Italy, Mexico, and Portugal.

In addition to the speed of evaluation, Durand identifies something else that may have encouraged hospitals to adopt AI during the pandemic: they are thinking about how to prepare for the inevitable staff shortages that will arise after the crisis. Traumatic events like a pandemic are often followed by an exodus of doctors and nurses. "Some doctors may want to change their way of life," he says. "What's coming, we don't know."

Hospitals' new openness to AI tools hasn't gone unnoticed. Many companies have begun offering their products for a free trial period, hoping it will lead to a longer contract.

"It's a good way for us to demonstrate the utility of AI," says Brandon Suh, the CEO of Lunit. Prashant Warier, the CEO and cofounder of Qure.ai, echoes that sentiment. "In my experience outside of covid, once people start using our algorithms, they never stop," he says.

Both Qure.ai's and Lunit's lung screening products were certified by the European Union's health and safety agency before the crisis. In adapting the tools to covid, the companies repurposed the same functionalities that had already been approved.

Qure.ai's qXR, for example, uses a combination of deep-learning models to detect common types of lung abnormalities. To retool it, the firm worked with a panel of experts to review the latest medical literature and determine the typical features of covid-induced pneumonia, such as opaque patches in the image that have a "ground glass" pattern and dense regions on the sides of the lungs. It then encoded that knowledge into qXR, allowing the tool to calculate the risk of infection from the number of telltale characteristics present in a scan. A preliminary validation study the firm ran on over 11,000 patient images found that the tool was able to distinguish between covid and non-covid patients with 95% accuracy.
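The scoring idea described here, counting how many telltale characteristics a scan shows and turning that into a risk figure, can be sketched as a weighted feature tally. To be clear, the feature names, weights, and `covid_risk` function below are invented for illustration and are not Qure.ai's actual model, which relies on deep-learning detectors rather than a hand-written lookup.

```python
# Hypothetical weighted-feature risk score, sketching the idea of
# mapping detected radiographic features to an infection risk.
# Feature names and weights are illustrative assumptions only.
COVID_FEATURES = {
    "ground_glass_opacity": 0.4,
    "peripheral_distribution": 0.3,
    "bilateral_involvement": 0.2,
    "lower_lobe_predominance": 0.1,
}

def covid_risk(detected_features):
    """Sum the weights of the detected features, capped at 1.0."""
    score = sum(COVID_FEATURES.get(f, 0.0) for f in detected_features)
    return min(score, 1.0)

# A scan showing several telltale features scores higher than one
# showing a single minor feature.
high = covid_risk(["ground_glass_opacity", "peripheral_distribution",
                   "bilateral_involvement"])
low = covid_risk(["lower_lobe_predominance"])
```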

But not all firms have been as rigorous. In the early days of the crisis, Malik exchanged emails with 36 companies and spoke with 24, all pitching him AI-based covid screening tools. "Most of them were utter junk," he says. "They were trying to capitalize on the panic and anxiety." The trend makes him worry: hospitals in the thick of the crisis may not have time to perform due diligence. "When you're drowning so much," he says, "a thirsty man will reach out for any source of water."

Kay Firth-Butterfield, the head of AI and machine learning at the World Economic Forum, urges hospitals not to weaken their regulatory protocols or formalize long-term contracts without proper validation. "Using AI to help with this pandemic is obviously a great thing to be doing," she says. "But the problems that come with AI don't go away just because there is a pandemic."

UCSD's Longhurst also encourages hospitals to use this opportunity to partner with firms on clinical trials. "We need to have clear, hard evidence before we declare this as the standard of care," he says. "Anything less would be a disservice to patients."

Originally posted here:

Doctors are using AI to triage covid-19 patients. The tools may be here to stay - MIT Technology Review

AI-based Infectious Disease Surveillance System Sent First Warning of Novel Coronavirus – HospiMedica

Image: BlueDot's AI engine (Photo courtesy of BlueDot)

BlueDot's AI engine had earlier successfully predicted that the Zika virus would spread to Florida six months before it happened and that the 2014 Ebola outbreak would leave West Africa. Using artificial and human intelligence, BlueDot's outbreak risk platform tracks over 150 infectious diseases globally in 65 languages, around the clock, and anticipates their spread and impact. The company empowers national and international health agencies, hospitals, and businesses to better anticipate, and respond to, emerging threats. BlueDot was among the first in the world to identify the emerging risk from, and publish a scientific paper on, COVID-19, and it delivers regular critical insights to its partners and customers worldwide to mobilize timely, effective, efficient, coordinated, and measured responses.

BlueDot anticipates the impact of disease spread locally and globally using diverse datasets such as billions of flight itineraries, real-time climate conditions, health system capacity, and animal and insect populations. BlueDot disseminates bespoke, near-real-time insights to clients including governments, hospitals, and airlines, revealing COVID-19's movements. The company's intelligence is based on over 40 pathogen-specific datasets reflecting disease mobility and outbreak potential. BlueDot also delivers regular reporting to answer the most pressing questions, including which countries reported local cases, how severely cities outside of China were affected, and which cities risked transmitting COVID-19 despite having no official cases.



Codota raises $12 million for AI that suggests and autocompletes code – VentureBeat

Codota, a startup developing a platform that suggests and autocompletes Python, C, HTML, Java, Scala, Kotlin, and JavaScript code, today announced that it raised $12 million. The bulk of the capital will be spent on product R&D and sales growth, according to CEO and cofounder Dror Weiss.

Companies like Codota seem to be getting a lot of investor attention lately, and there's a reason. According to a study published by the University of Cambridge's Judge Business School, programmers spend 50.1% of their work time not programming; the other half is debugging. And the total estimated cost of debugging is $312 billion per year. AI-powered code suggestion and review tools, then, promise to cut development costs substantially while enabling coders to focus on more creative, less repetitive tasks.

Codota's cloud-based and on-premises solutions, which it claims are used by developers at Google, Alibaba, Amazon, Airbnb, Atlassian, and Netflix, complete lines of code based on millions of Java programs and individual context locally, without sending any sensitive data to remote servers. They surface relevant examples of Java API usage within integrated development environments (IDEs) including Android Studio, VSCode, IntelliJ, Webstorm, and Eclipse, and Codota's engineers vet the recommendations to ensure they've been debugged and tested.


Codota says the program analysis, natural language processing, and machine learning algorithms powering its platform learn individual best practices and warn of deviation, largely by extracting an anonymized summary of the current IDE scope (but not keystrokes or string contents) and sending it via an encrypted connection to Codota. The algorithms are trained to understand the semantic models of code, not just the source code itself, and trigger automatically whenever they identify useful suggestions. (Alternatively, suggestions can be manually triggered with a keyboard shortcut.)

Codota is free for individual users; the company makes money from Codota Enterprise, which learns the patterns and rules in a company's proprietary code. The free tier's algorithms are trained only on vetted open source code from GitHub, StackOverflow, and other sources.

Codota acquired competitor TabNine in December last year, and since then, its user base has grown by more than 1,000% to more than a million developers monthly. That positions it well against potential rivals like Kite, which raised $17 million last January for its free developer tool that leverages AI to autocomplete code, and DeepCode, whose product learns from GitHub project data to give developers AI-powered code reviews.

This latest funding round, which was led by e.ventures with the participation of existing investor Khosla Ventures and new investors TPY Capital and Hetz Ventures, came after seed rounds totaling just over $2.5 million. It brings Codota's total raised to over $16 million. As a part of it, e.ventures general partner Tom Gieselmann will join Codota's board of directors.

Codota is headquartered in Tel Aviv. It was founded in 2015 by Weiss and CTO Eran Yahav, a Technion professor and former IBM Watson Fellow.


Realeyes Announces The Development of Enhanced Emotion AI Technology Surpassing Industry Standards for Understanding People’s Attention and Emotions -…

NEW YORK, April 28, 2020 (GLOBE NEWSWIRE) -- Realeyes, a leading computer vision and emotion AI company, announced today the availability of its next-generation facial coding technology. Realeyes uses front-facing cameras and the latest in computer vision and machine learning technologies to detect attention and emotion among opt-in audiences as they watch video content. The enhanced classification system will provide customers with more sensitive, accurate insights into the emotional impact of their video content.

Realeyes continues to set the industry standard for facial coding accuracy. The improved classification system results in a 20% increase in emotion detection across all measured emotions from facial cues. It also reduces occasional false positive emotion readings by half. Realeyes is the most accurate emotion detection technology among leading API cloud providers, based on an internal benchmark study of thousands of videos.

"Our technology has reached a new level of sophistication, with the accuracy of our detection beginning to rival that of humans across certain emotions like happiness and surprise," said Elnar Hajiyev, Chief Technology Officer at Realeyes. "Realeyes is building transformational apps to enable companies to create more remarkable experiences for people, and it starts with a foundation of world-class core vision technology." Realeyes today holds 11 patents covering different aspects of building emotion AI technology, and has 29 pending.

The updated classifications are applied to all Realeyes products and improve on the platform's ability to accurately analyze a wider variety of viewers' faces and emotions. The new classifications allow for more nuanced reading through more sensitive emotional curves and bring greater value to the data collected through facial coding.

Said Hajiyev: "More accurate detection enables companies to better understand the pure attention and emotion response of their audiences. More accurate emotion detection also enables advertisers to better predict in-market outcomes such as video view-through rates, so they can create more engaging creative to maximize media spend."

Realeyes' upgraded classifiers allow for a greater range of facial measurements across ethnicities, especially those of Asian heritage. Combined with improvements to Realeyes' performance on mobile devices, the updates pave the way for an entirely new range of products and applications based on emotion AI, along with relevance in new markets around the world. Realeyes recently announced the appointment of its Japan country manager Kyoko Tanaka, following last year's strategic investment from notable international investors Draper Esprit and NTT DOCOMO Ventures, Inc., the VC arm of NTT Group, Japan's leading mobile operator.

Trained on the world's richest database of facial coding data, Realeyes' technology now incorporates more than 615 million emotional labels across more than 3.8 million video sessions to provide more nuanced insights into the emotional impact of video content. The recent update strengthens Realeyes' predictive modeling for behaviors like view-through rate and responses like interest and likability, while providing best-in-class results 8x faster than its previous version.

About Realeyes

Using front-facing cameras and the latest in computer vision and machine learning technologies, Realeyes measures how people feel as they watch video content online, enabling brands, agencies, and media companies to inform and optimize their content as well as target their videos at the right audiences. Realeyes' technology applies facial coding to predictive, big-data analytics, driving bottom-line business outcomes for brands and publishers.

Founded in 2007, Realeyes has offices in New York, London, Tokyo and Budapest. Customers include brands such as Mars Inc., AT&T, Hershey's and Coca-Cola, agencies Ipsos, MarketCast and Publicis, and media companies such as Warner Media and Teads.

Media Contact:

Ben Billingsley

Broadsheet Communications

(917) 826 - 1103

ben@broadsheetcomms.com


AI used to predict Covid-19 patients’ decline before proven to work – STAT

Dozens of hospitals across the country are using an artificial intelligence system created by Epic, the big electronic health record vendor, to predict which Covid-19 patients will become critically ill, even as many are struggling to validate the tool's effectiveness on those with the new disease.

The rapid uptake of Epic's deterioration index is a sign of the challenges imposed by the pandemic: Normally hospitals would take time to test the tool on hundreds of patients, refine the algorithm underlying it, and then adjust care practices to implement it in their clinics.

Covid-19 is not giving them that luxury. They need to be able to intervene to prevent patients from going downhill, or at least make sure a ventilator is available when they do. Because it is a new illness, doctors don't have enough experience to determine who is at highest risk, so they are turning to AI for help, in some cases cramming a validation process that often takes months or years into a couple of weeks.


"Nobody has amassed the numbers to do a statistically valid test of the AI," said Mark Pierce, a physician and chief medical informatics officer at Parkview Health, a nine-hospital health system in Indiana and Ohio that is using Epic's tool. "But in times like this that are unprecedented in U.S. health care, you really do the best you can with the numbers you have, and err on the side of patient care."

Epic's index uses machine learning, a type of artificial intelligence, to give clinicians a snapshot of the risks facing each patient. But hospitals are reaching different conclusions about how to apply the tool, which crunches data on patients' vital signs, lab results, and nursing assessments to assign a 0 to 100 score, with a higher score indicating an elevated risk of deterioration. It was already used by hundreds of hospitals before the outbreak to monitor hospitalized patients, and is now being applied to those with Covid-19.


At Parkview, doctors analyzed data on nearly 100 cases and found that 75% of hospitalized patients who received a score in a middle zone between 38 and 55 were eventually transferred to the intensive care unit. In the absence of a more precise measure, clinicians are using that zone to help determine who needs closer monitoring and whether a patient in an outlying facility needs to be transferred to a larger hospital with an ICU.
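Parkview's use of the middle zone amounts to a simple threshold rule. A minimal sketch in Python, using the 38-55 zone reported above; the function and tier names are hypothetical, and Epic's actual deterioration index and any hospital's real workflow are far more involved:

```python
def triage_tier(score: float) -> str:
    """Map a 0-100 deterioration score to a hypothetical monitoring tier,
    based on the middle zone (38-55) Parkview identified."""
    if score < 38:
        return "routine monitoring"
    if score <= 55:
        # At Parkview, ~75% of patients scoring in this zone were
        # eventually transferred to the ICU, so it flags closer watch.
        return "close monitoring / consider transfer"
    return "escalate: likely ICU-level care"

print(triage_tier(20))  # routine monitoring
print(triage_tier(45))  # close monitoring / consider transfer
print(triage_tier(70))  # escalate: likely ICU-level care
```

As the article stresses, such a score is a decision aid, not a replacement for physical exams and clinical judgment.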

Meanwhile, the University of Michigan, which has seen a larger volume of patients due to a cluster of cases in that state, found in an evaluation of 200 patients that the deterioration index is most helpful for those who scored on the margins of the scale.

For about 9% of patients whose scores remained on the low end during the first 48 hours of hospitalization, the health system determined they were unlikely to experience a life-threatening event and that physicians could consider moving them to a field hospital for lower-risk patients. On the opposite end of the spectrum, it found 10% to 12% of patients who scored on the higher end of the scale were much more likely to need ICU care and should be closely monitored. More precise data on the results will be published in coming days, although they have not yet been peer-reviewed.

Clinicians in the Michigan health system have been using the score thresholds established by the research to monitor the condition of patients during rounds and in a command center designed to help manage their care. But clinicians are also considering other factors, such as physical exams, to determine how they should be treated.

"This is not going to replace clinical judgment," said Karandeep Singh, a physician and health informaticist at the University of Michigan who participated in the evaluation of Epic's AI tool. "But it's the best thing we've got right now to help make decisions."

Stanford University has also been testing the deterioration index on Covid-19 patients, but a physician in charge of the work said the health system has not seen enough patients to fully evaluate its performance. "If we do experience a future surge, we hope that the foundation we have built with this work can be quickly adapted," said Ron Li, a clinical informaticist at Stanford.

Executives at Epic said the AI tool, which has been rolled out to monitor hospitalized patients over the past two years, is already being used to support care of Covid-19 patients in dozens of hospitals across the United States. They include Parkview, Confluence Health in Washington state, and ProMedica, a health system that operates in Ohio and Michigan.

"Our approach as Covid was ramping up over the last eight weeks has been to evaluate: does it look very similar to (other respiratory illnesses) from a machine learning perspective, and can we pick up that rapid deterioration?" said Seth Hain, a data scientist and senior vice president of research and development at Epic. "What we found is yes, and the result has been that organizations are rapidly using this model in that context."

Some hospitals that had already adopted the index are simply applying it to Covid-19 patients, while others are seeking to validate its ability to accurately assess patients with the new disease. It remains unclear how the use of the tool is affecting patient outcomes, or whether its scores accurately predict how Covid-19 patients are faring in hospitals. The AI system was initially designed to predict deterioration of hospitalized patients facing a wide array of illnesses. Epic trained and tested the index on more than 100,000 patient encounters at three hospital systems between 2012 and 2016, and found that it could accurately characterize the risks facing patients.

When the coronavirus began spreading in the United States, health systems raced to repurpose existing AI models to help keep tabs on patients and manage the supply of beds, ventilators and other equipment in their hospitals. Researchers have tried to develop AI models from scratch to focus on the unique effects of Covid-19, but many of those tools have struggled with bias and accuracy issues, according to a review published in the BMJ.

The biggest question hospitals face in implementing predictive AI tools, whether to help manage Covid-19 or advanced kidney disease, is how to act on the risk score it provides. Can clinicians take actions that will prevent the deterioration from happening? If not, does it give them enough warning to respond effectively?

In the case of Covid-19, the latter question is the most relevant, because researchers have not yet identified any effective treatments to counteract the effects of the illness. Instead, they are left to deliver supportive care, including mechanical ventilation if patients are no longer able to breathe on their own.

Knowing ahead of time whether mechanical ventilation might be necessary is helpful, because doctors can ensure that an ICU bed and a ventilator or other breathing assistance is available.

Singh, the informaticist at the University of Michigan, said the most difficult part about making predictions based on Epic's system, which calculates a score every 15 minutes, is that patients' ratings tend to bounce up and down in a sawtooth pattern. A change in heart rate could cause the score to suddenly rise or fall. He said his research team found that it was often difficult to detect, or act on, trends in the data.

"Because the score fluctuates from 70 to 30 to 40, we felt like it's hard to use it that way," he said. "A patient who's high risk right now might be low risk in 15 minutes."

In some cases, he said, patients bounced around in the middle zone for days but then suddenly needed to go to the ICU. In others, a patient with a similar trajectory of scores could be managed effectively without need for intensive care.
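The sawtooth problem Singh describes is a generic time-series issue. One standard way to damp it, offered here purely as an illustration and not as anything Epic or Michigan says they do, is to average the last few 15-minute readings:

```python
from collections import deque

def smoothed_scores(scores, window=4):
    """Yield a rolling mean of the last `window` readings;
    with 15-minute scoring, window=4 covers one hour."""
    recent = deque(maxlen=window)  # drops the oldest reading automatically
    for s in scores:
        recent.append(s)
        yield sum(recent) / len(recent)

# The 70 -> 30 -> 40 swing from the quote above flattens out:
print(list(smoothed_scores([70, 30, 40])))  # 70.0, 50.0, then 140/3 ≈ 46.7
```

The trade-off is lag: smoothing stabilizes the signal but delays detection of a genuine decline, which matters when lead time is the whole point of the tool.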

But Singh said that in about 20% of patients it was possible to identify threshold scores that could indicate whether a patient was likely to decline or recover. In the case of patients likely to decline, the researchers found that the system could give them up to 40 hours of warning before a life-threatening event would occur.

"That's significant lead time to help intervene for a very small percentage of patients," he said. As to whether the system is saving lives, or improving care in comparison to standard nursing practices, Singh said the answers will have to wait for another day. "You would need a trial to validate that question," he said. "The question of whether this is saving lives is unanswerable right now."


AI will transform our workflow and boost profits according to FreeAgent survey – TechRadar

FreeAgent, the cloud-accounting software specialist, has released the results of a new survey that underlines just how much automation can do for your work/life balance and business profits.

Key findings reveal that nearly 1 in 2 accountants (49%) believe an automated workflow will lead to a reduction in stress and/or boredom from dealing with data-entry tasks. A similar number (48%) believe it will help provide a better work/life balance.

The new research also highlights that 81% of accountants think they can save up to 2 hours a day (10 hours a week) by using Artificial Intelligence (AI) to automate simple tasks, unlocking up to £68,163 of additional revenue annually.

Nearly 1 in 4 (24%) Scottish accountants believe they could save a sizeable 3-4 hours a day through automation, potentially unlocking up to £119,285.25 of additional revenue annually.
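The two revenue figures imply a single billing rate, which can be backed out as a quick sanity check; the hourly rate below is inferred from the article's numbers, not stated in it:

```python
def annual_revenue(hours_per_day, rate_per_hour, days_per_week=5, weeks=52):
    """Extra annual revenue from billing the hours freed up by automation."""
    return hours_per_day * days_per_week * weeks * rate_per_hour

# Implied hourly rate from the 2-hours-a-day / £68,163 figure:
rate = 68163 / (2 * 5 * 52)  # roughly £131 per hour

# The Scottish 3-4 hours figure (midpoint 3.5h) matches the same rate:
print(annual_revenue(3.5, rate))  # ≈ 119285.25, the article's figure
```

That both figures reduce to the same rate suggests the survey simply multiplied hours saved by an assumed billing rate rather than measuring realized revenue.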

When asked to predict how quickly the industry will embrace technology to automate some or all accounting tasks, 59% of the 200 respondents think only some tasks will be automated within one year, 72% think some tasks will be automated within five years and 59% think the majority of tasks will be automated within 10 years.

Younger people have more faith in technology, with 77% of 18-35 year olds stating that some accounting tasks will be automated within five years, compared to 57% of over 55s. In contrast, nearly 1 in 10 (9%) of those aged 55+ believe no automation is coming to the accounting industry within 10 years, compared to one in 50 (2%) of 18-35 year olds.

The research also suggests that the Welsh are the most skeptical of any imminent change technology will bring to the sector, with just 33% believing some accounting tasks will be automated in one year's time, compared to 72% of those based in the South East.

Accountants see artificial intelligence as most useful for accurate auto-reconciling of data in client accounts (50%), preventing clients entering incorrect information (45%), and dealing with HMRC (44%), with 90% of those in larger firms (of over 300 people) being interested in using AI, compared to 76% of those in firms of up to 50 people.

Again, age seems to have an impact on willingness to embrace technology, with 40% of over 55s saying they are not interested in using AI in their practice, compared to just 14% of younger colleagues.

Unsurprisingly, the majority of accountants across the board (61%) believe that technology's biggest impact in a firm will be around increasing the automation of data. However, many accountants believe other aspects of the industry will be impacted by technology:

Over one in three (37%) think technology will lead to less face-to-face time spent with clients. Meanwhile, male accountants are twice as likely to believe technology will provide wider accessibility to challenger banks and new finance providers (42% vs 21%). Nearly 1 in 2 accountants (49%) believe automation will lead to a reduction in stress and/or boredom from dealing with data-entry tasks, and a similar number (48%) believe it will help provide a better work/life balance.

However, those working in the capital are clearly hoping technology will provide some relief from the day to day grind, with 68% of London-based accountants believing automation will help with work/life balance, compared to just 9% of those based in the East of England. Twice as many women (43%) as men (20%) believe automation will provide opportunities for growth.

Ed Molyneux, co-founder and CEO at FreeAgent, said: "When small, simple tasks require paperwork and endless data-entry, they are the things that often get left to the end of the day or week. Not only does this take up a significant amount of time when you eventually get around to them, but it can lead to unnecessary levels of stress and boredom."

"I am therefore not surprised that almost half the accountants we surveyed believe that automation will help provide a better work/life balance and reduce stress. Essentially automation gives people back time, which is not only beneficial to them but can save a company a huge amount in hours for just one employee over a year."

"By simply eliminating mundane, complicated processes around simple tasks, automation can bring explosive change that truly has the potential to revolutionize accountancy as a profession. In our survey, we see that accountants, and women in particular, believe automation can open the door to opportunities for growth in business and create the chance for them to excel."

"With less time spent on admin and logging data, accountants then have time to focus on other aspects of the job, including more consultative work, which will also bring significant benefits to their clients."


Google admits its diabetic blindness AI fell short in real-life tests – Engadget

The nurses in Thailand often had to scan dozens of patients as quickly as they could in poor lighting conditions. As a result, the AI rejected over a fifth of the images, and the patients were then told to come back. That's a lot to ask from people who may not be able to take another day off work or don't have an easy way to go back to the clinic.

In addition, the research team struggled with poor internet connections and internet outages. Under ideal conditions, the algorithm can come up with a result in seconds to minutes. But in Thailand, it took the team 60 to 90 seconds to upload each image, slowing down the process and limiting the number of patients that could be screened in a day.

Google admits in the study's announcement that it has a lot of work to do. It still has to study and incorporate real-life evaluations before the AI can be widely deployed. The company added in its paper:

Since this research, we have begun to hold participatory design workshops with nurses, potential camera operators, and retinal specialists (the doctor who would receive referred patients from the system) at future deployment sites. Clinicians are designing new workflows that involve the system and are proactively identifying potential barriers to implementation.


Nathan Tanner: Taking responsibility for the inequality facing the Navajo Nation – Salt Lake Tribune

While some news organizations claim that poverty in tribal communities created the conditions for coronavirus to thrive, these analyses fail to account for factors that created and presently maintain social stratification in native communities. The Navajo presently suffer from the effects of pandemic illness disproportionately to non-native populations for the same reasons they did historically: systemic inequality caused by colonialism, capitalism and racism.

In his study of the 1918-1919 influenza epidemic among the Navajo, Utah State historian Robert McPherson asserted that the Navajo experienced such a disproportionate influenza mortality rate in the early 20th century because of their spiritual practices and living conditions, e.g., their tendency to live close to one another, engagement in ceremony that required physical contact, and a perceived lack of access to medical attention. However, this historical interpretation neglects the complex system of social stratification the Navajo have persistently encountered since the arrival of the first Euro-American colonists.

In a major way, the Navajo Nation in 2020 is experiencing the prolonged effects of the dispossession of their land, the intentional result of centuries of Euro-American pathogenic genocide, corporate and military expansion and sociopolitical destabilization. It can be assumed that in the absence of the U.S. federal government's land theft, forcing America's indigenous peoples onto reservations (what could easily be construed as a form of sociopolitical apartheid), subverting and restructuring indigenous economies, complicating tribal authorization processes, battling tribal nations over sovereignty in court and severely limiting consumer networks (which force people to either live very near one another or travel great distances for essential resources and services), the Navajo would not be troubled by the current coronavirus.

While some may view this as an anachronistic reading of the causes of the current pandemic crisis, you'd be hard-pressed to convince indigenous folks, or any serious student of history or sociology, that this is not the case.

In her book, Roxanne Dunbar-Ortiz cites native historian Jack Forbes as having stressed that, "While living persons are not responsible for what their ancestors did, they are responsible for the society they live in, which is a product of the past." That said, descendants of settlers, like me, can assist Navajo Nation and other tribal communities by doing the following:

1. Urge political representatives to carefully reconsider the eligibility rules they create when crafting policy like the CARES stimulus package. Navajo Nation President Jonathan Nez has described the complications Navajo Nation has had accessing essential federal funds amidst this COVID-19 crisis.

2. Encourage government agencies to collect tribal affiliation in vital statistics. Desi Rodriguez-Lonebear and others have called for increased visibility for native peoples where they have historically been erased.

Nathan Tanner, Urbana, Ill., is a former Salt Lake City teacher pursuing a Ph.D. in education policy at the University of Illinois at Urbana-Champaign.


Why are white supremacists protesting to ‘reopen’ the US economy? – Thehour.com

(The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.)

Shannon Reid, University of North Carolina Charlotte and Matthew Valasik, Louisiana State University

(THE CONVERSATION) A series of protests, primarily in state capitals, are demanding the end of COVID-19 lockdown restrictions. Among the protesters are people who express concern about their jobs or the economy as a whole.

But there are also far-right conspiracy theorists, white supremacists like the Proud Boys, and citizen militia members at these protests. The exact number of each group that attends these protests is unknown, since police have not traditionally monitored these groups, but signs and symbols of far-right groups have been seen at many of these protests across the country.

These protests risk spreading the virus and have disrupted traffic, potentially delaying ambulances. But as researchers of street gangs and far-right groups' violence and recruitment, we believe these protests may become a way right-wingers expand the spread of anti-Semitic rhetoric and militant racism.

Proud Boys, and many other far-right activists, don't typically focus their concern on whether stores and businesses are open. They're usually more concerned about pro-white, pro-male rhetoric. They're attending these rallies as part of their longstanding search for any opportunity to make extremist groups look mainstream, and because they are always looking for potential recruits to further their cause.

Exploiting an opportunity

While not all far-right groups agree on everything, many of them now subscribe to the idea that Western government is corrupt and its demise needs to be accelerated through a race war.

For far-right groups, almost any interaction is an opportunity to connect with people with social or economic insecurities, or their children. Even if some of the protesters have genuine concerns, they're in protest lines near people looking to offer them targets to blame for society's problems.

Once they're standing side by side at a protest, members of far-right hate groups begin to share their ideas. That lures some people deeper into online groups and forums where they can be radicalized against immigrants, Jews or other stereotypical scapegoats.

It's true that only a few will go to that extreme, but they represent potential sparks for future far-right violence.

Official responses

President Donald Trump, a favorite of far-right activists, has tweeted encouragement to the protesters. Police responses have been uneven. Some protesters have been charged with violating emergency government orders against public gatherings.

Other events, however, have gone undisturbed by officials, similar to how far-right free speech rallies in 2018 often were treated gently by police.

Police have tended to be hesitant to deal with far-right groups at these protests. As a result, the risk of right-wing militants spreading the coronavirus is growing, either unintentionally at rallies or in intentional efforts: Federal authorities have warned that some right-wingers are talking about specifically sending infected people to target communities of color.

One thing police could do, which they often do when facing criminal groups, is to track the level of coordination between different protests. Identifying far-right activists who attend multiple events or travel across state borders to attend a rally may indicate that they are using these events as part of a connected public relations campaign.


This article is republished from The Conversation under a Creative Commons license. Read the original article here: https://theconversation.com/why-are-white-supremacists-protesting-to-reopen-the-us-economy-137044.
