Artificial intelligence: The rising star of education – Daily Sabah

We now know that technology continues to change our lives daily, and these changes have transformed not only the cornerstones of the economy and society but also our professions and education. For this reason, countries need to prepare for the future to keep pace with these changes and to make radical transformations in how they teach young people, the locomotives of the future. The World Innovation Summit for Education (WISE) in Qatar, now in its 10th year, put this transformation front and center this year by asking what it now means to be human.

The event focused on how to prepare children for educational and technological transformations and what needs to be done to ensure that the basic educational needs of children from all income levels and migrant camps in the region are met.

Under the theme "Unlearn Relearn: What it means to be human," the two main topics of this year's WISE event, held in Qatar's capital Doha on Nov. 19-21, were ensuring equality in education through technology and entrepreneurship.

It was not surprising to see technology firms, and artificial intelligence experts especially, showing the most interest in the event, which was held in line with the demands of the 21st century and drew more than 3,000 experts and celebrities from around the world.

Mind over matter

While the event touched on issues regarding educational technology, speakers also addressed how developments related to the mind, such as neuroscience, affect education and how methods of education change based on personality and creativity. In one of the most noteworthy presentations of the three-day forum, Armenian President Armen Sargsyan, who is also a physicist, described the relationship between quantum physics and human development: "Our communications are moving at the speed of light. We now live in the quantum world. Startup culture and personal creativity will be greatly strengthened." Meanwhile, Max Tegmark, a Swedish-American cosmologist and the president of the U.S.-based Future of Life Institute, which counts Elon Musk among its backers, delivered a remarkable speech on artificial intelligence.

How will children be happy?

Tegmark stressed the need to identify acceptable and unacceptable uses of artificial intelligence as soon as possible and, turning to global justice and equality in education, asked: "How will artificial intelligence affect the interests of the weak and the powerful?" He emphasized that artificial intelligence could solve or create problems and suggested: "We should ask the students what they want to happen in the future instead of asking what will happen in the future." Another issue Tegmark highlighted was how happy students would be in the digital world of the future. "In a world where artificial intelligence will reign, we need to make sure that students are learning the vital skills necessary for their happiness and development," Tegmark noted. He also told the students about the acceptable and beneficial aspects of artificial intelligence and said that the future would be brighter.

In many of WISE 2019's sessions, topics such as how technology will change education, as well as the scale-up processes of ventures and startups that have contributed to education, were also discussed. Speakers and participants, especially from Africa and Asia, talked about how they worked miracles in areas where children's educational opportunities were limited by combining entrepreneurial spirit with technology.

Who wrote 'Henry VIII'?

Artificial intelligence has put an end to a long-running discussion regarding British playwright William Shakespeare, one of the greatest names in literary history. Scholars have long known that part of Shakespeare's 1613 play "Henry VIII" was written by John Fletcher, but it had not been possible to determine exactly which parts belonged to which author. Czech artificial intelligence expert Petr Plechac ran the works of Shakespeare, Fletcher and other contemporary writers through an algorithm trained to recognize the differences in the authors' styles. Plechac then applied the algorithm to "Henry VIII" and identified the scenes Fletcher contributed. Fletcher went on to succeed Shakespeare as house playwright for the King's Men, the acting company to which both men belonged.
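
Plechac's exact pipeline combines versification features with word frequencies and is not reproduced here. The following Python sketch only illustrates the general stylometric idea: fit a classifier on function-word frequencies in texts of known authorship, then score each scene of the disputed play. The corpus variables are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Common function words: authorship signals that are largely topic-independent.
FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "is",
                  "you", "it", "for", "with", "his", "not", "be", "your"]

def train_attributor(shakespeare_texts, fletcher_texts):
    """Fit a linear classifier separating the two authors' styles."""
    texts = shakespeare_texts + fletcher_texts
    labels = (["Shakespeare"] * len(shakespeare_texts)
              + ["Fletcher"] * len(fletcher_texts))
    vectorizer = TfidfVectorizer(vocabulary=FUNCTION_WORDS)
    features = vectorizer.fit_transform(texts)
    return vectorizer, LinearSVC().fit(features, labels)

def attribute_scenes(disputed_scenes, vectorizer, classifier):
    """Predict the likelier author of each scene of the disputed play."""
    predictions = classifier.predict(vectorizer.transform(disputed_scenes))
    return dict(enumerate(predictions, start=1))
```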

Asia-Pacific Digital Transformation Markets 2019-2024: Focus on 5G, Artificial Intelligence, Internet of Things, and Smart Cities -…

DUBLIN--(BUSINESS WIRE)--The "Digital Transformation Asia Pacific: 5G, Artificial Intelligence, Internet of Things, and Smart Cities in APAC 2019 - 2024" report has been added to ResearchAndMarkets.com's offering.

This report identifies market opportunities for deployment and operations of key technologies within the Asia Pac region.

While the biggest markets, China, Korea, and Japan, often get the most attention, it is important to also consider the fast-growing ASEAN region, including Indonesia, Malaysia, the Philippines, Singapore, Thailand, Brunei, Laos, Myanmar, Cambodia, and Vietnam. In fact, many lessons learned in leading Asia Pac countries will be applied to the ASEAN region.

By way of example, H3C Technologies Co. is planning to offer a comprehensive digital transformation platform within Thailand that includes core cloud and edge computing, big data, interconnectivity, information security, IoT, AI, and 5G solutions.

From predicting what will happen with 5G technology in the next few years to identifying how 5G will transform business, Digital Transformation Asia Pacific: 5G, Artificial Intelligence, Internet of Things, and Smart Cities in APAC 2019 - 2024 is must-have research for any ICT company looking to expand business within the region. This report represents the most comprehensive research available focused on the role and impact of 5G, AI, and IoT technologies in Asia Pac. It also provides analysis about how these technologies will have a positive feedback loop effect with smart cities.

The AI segment is currently very fragmented, characterized by companies focusing on siloed approaches to solutions. Longer term, researchers see many solutions involving multiple AI types as well as integration across other key areas such as the Internet of Things (IoT) and data analytics. AI is expected to have a big impact on data management. However, the impact goes well beyond data management, as the researchers anticipate that these technologies will increasingly become part of every network, device, application, and service.

Data analytics at the edge of networks differs substantially from centralized cloud computing: data is contextual (e.g., collected and computed at a specific location) and may be processed in real time (e.g., streaming data) via big data analytics technologies. Edge computing represents an important ICT trend in which computational infrastructure moves ever closer to the source of data processing needs. This movement to the edge does not diminish the importance of centralized computing, such as is found in many cloud-based services. Instead, computing at the edge offers many complementary advantages, including reduced latency for time-sensitive data and lower capital and operational expenditures due to efficiency improvements.

For both core cloud infrastructure and edge computing equipment, the use of AI in IoT and data analytics will be crucial for efficient and effective decision-making, especially for the streaming data and real-time analytics associated with edge computing networks. Real-time data will be a key value proposition for all use cases, segments, and solutions. The ability to capture streaming data, determine valuable attributes, and make decisions in real time, as in the sketch below, will add an entirely new dimension to service logic. In many cases, the data itself, and the actionable information derived from it, will be the service.
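
A minimal sketch of this edge-side, real-time pattern: hold a bounded window of recent readings in memory, score each new value as it streams in, and act locally rather than shipping raw data to a central cloud. The sensor values and threshold below are illustrative.

```python
from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    """Bounded-memory streaming detector suitable for an edge device."""

    def __init__(self, window_size=100, threshold=3.0):
        self.window = deque(maxlen=window_size)  # rolling context
        self.threshold = threshold               # z-score alarm level

    def observe(self, value):
        """Score one streaming reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.window) >= 2:
            mu, sigma = mean(self.window), stdev(self.window)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.window.append(value)
        return anomalous

# Decisions happen at the edge, in real time; only summaries need uploading.
detector = EdgeAnomalyDetector()
for reading in [20.1, 20.3, 19.9, 20.0, 55.7]:
    if detector.observe(reading):
        print("anomaly detected:", reading)
```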

Many industry verticals will be transformed through AI integration with enterprise, industrial, and consumer product and service ecosystems. AI is destined to become an integral component of business operations, including supply chains, sales and marketing processes, and product and service delivery and support models. The term for AI support of IoT, AIoT, is just beginning to enter the ICT lexicon, as the possibilities for the former adding value to the latter are limited only by the imagination.

AI, IoT, and 5G will provide the intelligence, communications, connectivity, and bandwidth necessary for highly functional and sustainable smart city solutions. The combination of these technologies is poised to produce solutions that will dramatically transform all aspects of ICT and virtually all industry verticals. The convergence of these technologies will attract innovation and create further advancements in various industry verticals and in other technologies such as robotics and virtual reality.

In addition, these technologies are destined to become an integral component of business operations, including supply chains, sales and marketing processes, and product and service delivery and support models. A positive feedback loop will be created and sustained by leveraging the interdependent capabilities of AI, IoT, and 5G (a combination sometimes termed AIoT5G). For example, AI will work in conjunction with IoT to substantially improve smart city supply chains. Metropolitan-area supply chains are complex systems of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer.

Smart cities in particular represent a huge market for Asia Pac digital transformation, through a combination of solutions deployed in urban environments that are poised to transform the administration and support of living and working environments. Information and communications technologies (ICT) are accordingly transforming at a rapid rate, driven by urbanization, the industrialization of emerging economies, and the specific needs of various smart city initiatives. Smart city development is emerging as a focal point for growth in several key ICT areas, including 5G, AI, IoT, and the convergence of AI and IoT known as the Artificial Intelligence of Things, or simply AIoT.

Sustainable smart city technology deployments depend upon careful planning and execution, as well as monitoring and adjustment as necessary. For example, features and functionality must be blended to work efficiently across many different industry verticals, as smart cities address the needs of disparate market segments with multiple overlapping and sometimes mutually exclusive requirements. This will stimulate the need for cross-industry coordination as well as orchestration of many different capabilities across several important technologies.

For more information about this report visit https://www.researchandmarkets.com/r/w67926

Opinion | The artificial intelligence frontier of economic theory – Livemint

Until recently, two big impediments limited what research economists could learn about the world with the powerful methods that mathematicians and statisticians, starting in the early 19th century, developed to recognize and interpret patterns in noisy data: Data sets were small and costly, and computers were slow and expensive. So it is natural that as gains in computing power have dramatically reduced these impediments, economists have rushed to use big data and artificial intelligence to help them spot patterns in all sorts of activities and outcomes.

Data summary and pattern recognition are big parts of the physical sciences as well. The physicist Richard Feynman once likened the natural world to a game played by the gods: "You don't know the rules of the game, but you're allowed to look at the board from time to time, in a little corner, perhaps. And from these observations, you try to figure out what the rules are."

Feynman's metaphor is a literal description of what many economists do. Like astrophysicists, we typically acquire non-experimental data generated by processes we want to understand. The mathematician John von Neumann defined a game as (1) a list of players; (2) a list of actions available to each player; (3) a list of how pay-offs accruing to each player depend on the actions of all players; and (4) a timing protocol that tells who chooses what when. This elegant definition includes what we mean by a "constitution" or an "economic system": a social understanding about who chooses what when.
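
Von Neumann's four ingredients can be written compactly. The notation below is a paraphrase of that definition, not a formula from the column itself:

```latex
G = \bigl( N,\ \{A_i\}_{i \in N},\ \{u_i\}_{i \in N},\ \tau \bigr),
\qquad u_i : A_1 \times \cdots \times A_n \to \mathbb{R},
```

where $N$ is the list of players, $A_i$ the actions available to player $i$, $u_i$ the payoff to player $i$ as a function of everyone's actions, and $\tau$ the timing protocol saying who chooses what when.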

Like Feynman's metaphorical physicist, our task is to infer a "game" (which for economists is the structure of a market or system of markets) from observed data.

But then we want to do something that physicists don't: think about how different "games" might produce improved outcomes. That is, we want to conduct experiments to study how a hypothetical change in the rules of the game, or in a pattern of observed behaviour by some "players" (say, government regulators or a central bank), might affect patterns of behaviour by the remaining players.

Thus, "structural model builders" in economics seek to infer from historical patterns of behaviour a set of invariant parameters for hypothetical (often historically unprecedented) situations in which a government or regulator follows a new set of rules. "The government has strategies, and the people have counter-strategies," according to a Chinese proverb.

"Structural models" seek such invariant parameters in order to help regulators and market designers understand and predict data patterns under historically unprecedented situations. The challenging task of building structural models will benefit from rapidly developing branches of artificial intelligence (AI) that involve more than pattern recognition. A great example is AlphaGo. The team of computer scientists that created the algorithm to play the Chinese game Go combined a suite of tools that had been developed by specialists in the statistics, simulation, decision theory, and game theory communities.

Many of the tools used in just the right proportions to make an outstanding artificial Go player are also economists' bread-and-butter tools for building structural models to study macroeconomics and industrial organization.

Of course, economics differs from physics in a crucial respect. Whereas Pierre-Simon Laplace regarded "the present state of the universe as the effect of its past and the cause of its future," the reverse is true in economics: what we expect other people to do later causes what we do now.

We typically use personal theories about what other people want to forecast what they will do. When we have good theories of other people, what they are likely to do determines what we expect them to do. This line of reasoning, sometimes called "rational expectations", reflects a sense in which "the future causes the present" in economic systems. Taking this into account is at the core of building "structural" economic models.

For example, I will join a run on a bank if I expect that other people will. Without deposit insurance, customers have incentives to avoid banks vulnerable to runs. With deposit insurance, customers don't care and won't run. On the other hand, if governments insure deposits, bank owners will want their assets to become as big and as risky as possible, while depositors won't care.
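
The bank-run logic can be made concrete with a toy coordination game; the payoffs below are invented for illustration. Each depositor's best response depends on what they expect the other to do, and deposit insurance removes the loss from staying while others run:

```python
def payoff(me, other, insured=False):
    """Payoff to one depositor in a two-depositor bank, with toy numbers."""
    if me == "stay" and other == "stay":
        return 10                      # bank survives; deposits earn interest
    if me == "stay" and other == "run":
        return 10 if insured else -50  # without insurance, I lose my deposit
    return 0                           # I run: withdraw early, forgo interest

def best_response(other, insured=False):
    return max(["stay", "run"], key=lambda me: payoff(me, other, insured))

for insured in (False, True):
    label = "insured" if insured else "uninsured"
    print(label, "best response to an expected run:", best_response("run", insured))
# uninsured: "run" is best, so expectations of a run are self-fulfilling.
# insured:   "stay" is best, and the run equilibrium disappears.
```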

There are similar trade-offs with unemployment and disability insurance (insuring people against bad luck may weaken their incentive to provide for themselves) and with official bailouts of governments and firms.

More broadly, my reputation is what others expect me to do. I face choices about whether to confirm or disappoint those expectations. Those choices will affect how others behave in the future. Central bankers think about that a lot.

Like physicists, we economists use models and data to learn. We don't learn new things until we appreciate that our old models cannot explain new data. We then construct new models in light of how their predecessors failed.

This explains how we have learned from past depressions and financial crises. And with big data, faster computers and better algorithms, we might see patterns where once we heard only noise.

*Thomas J. Sargent is professor of economics at New York University and senior fellow at the Hoover Institution

©2019/Project Syndicate

Seminar – Artificial Intelligence and its Impact on Young People – Council of Europe

Through its present and future impact on social life and organisation, and through its reliance on young people to programme and fine-tune AI technologies, AI is very closely related to young people. Yet there is relatively little research and information about how AI will affect young people, as young citizens in transition to autonomy, regarding their well-being, their possibilities to participate in and shape society, and their access to rights, including social rights.

The seminar's programme will explore the issues, role and possible contributions of the youth sector to ensure that AI is responsibly used in democratic societies and that young people have a say about matters that concern their present and future. The seminar will bring together some 50 youth leaders and experts in AI from the business and academic sectors.

More info, including the programme, at https://www.coe.int/en/web/youth/artificial-intelligence.

Triple your CX impact with artificial intelligence and these five tactics – CXNetwork

A practical guide with guest presenters and case studies from Microsoft and Sonos.

It is not enough to simply claim to be customer-obsessed. In a climate where a moment of inconvenience can be enough to push customers to a competitor, brands have no choice but to deliver what customers want. To do this with accuracy, brands need to consistently plug themselves into various sources of customer feedback.

But the reality is that 91 per cent of customer feedback is not properly used today, with many businesses overwhelmed by the task of processing high volumes of insights and by the soaring costs of doing so at scale.

This webinar, featuring case studies from the likes of Microsoft and Sonos, is a step-by-step guide on what it takes to drive value from unstructured CX feedback, providing insights on the set-up needed to allow text analytics to thrive.

Attend this webinar for practical insights to apply in your business on:

Frank Buckler, PhD, CX Pioneer, Book Author, Keynote Speaker and Founder & CEO, Success Drivers

Rajul Jain, PhD, Senior Research Manager, Microsoft

David Feick, PhD, Former Head of Customer Insights, Sonos

BioSig Technologies Announces New Collaboration on Development of Artificial Intelligence Solutions in Healthcare – GlobeNewswire

Westport, CT, Dec. 03, 2019 (GLOBE NEWSWIRE) --

The Company partners with Reified Capital, a provider of advanced artificial intelligence-focused technical advisory services to the private sector

Collaboration to focus on machine learning and artificial intelligence solutions for healthcare

Initial solutions to be centered on BioSigs core competencies in electrophysiology

BioSig Technologies, Inc. (NASDAQ: BSGM) ("BioSig" or the "Company"), a medical technology company developing a proprietary biomedical signal processing platform designed to improve signal fidelity and uncover the full range of ECG and intra-cardiac signals, today announced that the Company has entered into a technical collaboration with Reified Capital, a provider of advanced artificial intelligence-focused technical advisory services to the private sector. Reified was co-founded by Dr. Alexander D. Wissner-Gross and Timothy M. Sullivan, the founders of Gemedy.

The new collaboration with Cambridge, Massachusetts-based Reified will focus on developing a foundational artificial intelligence platform on the basis of integrated healthcare datasets, beginning with ECG and EEG data acquired by BioSig's first product, the PURE EP™ System, a novel real-time signal processing platform engineered to provide electrophysiologists with high-fidelity cardiac signals. Electrophysiology-focused technological solutions developed under the terms of this collaboration will be integrated into the PURE EP™ technology platform. Reified is led by Dr. Wissner-Gross, a Harvard- and MIT-trained, award-winning computer scientist, physicist, entrepreneur and author. The technical expertise that the Reified team plans to bring to the project includes data analysis and algorithmic modeling and development.
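
PURE EP's signal chain is proprietary and not described in the release; as a generic illustration of the "filtering" step in cardiac signal acquisition, here is a standard zero-phase Butterworth band-pass applied to a synthetic trace. The sampling rate and pass band are typical textbook values, not BioSig's.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # sampling rate in Hz, a common order of magnitude for EP recordings

def bandpass(signal, low_hz=30.0, high_hz=300.0, order=4):
    """Keep the band where bipolar intracardiac activity typically lives."""
    nyquist = FS / 2
    b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="band")
    return filtfilt(b, a, signal)  # zero-phase: no phase distortion

t = np.arange(0, 1.0, 1 / FS)
raw = (np.sin(2 * np.pi * 100 * t)        # component of interest
       + 0.5 * np.sin(2 * np.pi * 1 * t)  # baseline wander
       + 0.2 * np.random.randn(t.size))   # broadband noise
clean = bandpass(raw)
```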

"Integration of AI can open new avenues for improved diagnosis and more effective therapy delivery for bioelectronic medicine in particular and healthcare in general. We are thrilled to partner with Alex and his outstanding team, and look forward to working with them on developing world-class artificial intelligence and machine learning solutions, which, we believe, will benefit a worldwide physician audience," commented Kenneth L. Londoner, Chairman and CEO of BioSig Technologies, Inc.

"The application of modern AI and machine learning techniques to electrophysiology presents one of the most promising healthcare opportunities of our time," said Dr. Wissner-Gross. "We look forward to our forthcoming collaboration with BioSig Technologies."

On November 21, 2019, the Company announced that it had commenced patient enrollment in its first clinical trial for the PURE EP™ System.

About Reified Capital
Reified Capital, LLC is a provider of advanced artificial intelligence-focused technical advisory services to the private sector. Reified's areas of expertise include machine learning, data analysis, modeling and simulation, cybersecurity, knowledge management, cyber-physical systems, and autonomous systems.

About BioSig Technologies
BioSig Technologies is a medical technology company developing a proprietary biomedical signal processing platform designed to improve the electrophysiology (EP) marketplace (www.biosig.com). Led by a proven management team and a veteran Board of Directors, BioSig Technologies is preparing to commercialize its PURE EP™ System. The technology has been developed to address an unmet need in a large and growing market. The Company's first product, the PURE EP™ System, is a computerized system intended for acquiring, digitizing, amplifying, filtering, measuring and calculating, displaying, recording and storing of electrocardiographic and intracardiac signals for patients undergoing electrophysiology (EP) procedures in an EP laboratory. The system is indicated for use under the supervision of licensed healthcare practitioners who are responsible for interpreting the data. This novel cardiac signal acquisition and display system is engineered to assist electrophysiologists in clinical decision-making during electrophysiology procedures in patients with abnormal heart rates and rhythms. BioSig's ultimate goal is to deliver technology to improve upon catheter ablation treatments for the prevalent and potentially deadly arrhythmias atrial fibrillation and ventricular tachycardia. BioSig has partnered with Minnetronix on technology development and received FDA 510(k) clearance for the PURE EP™ System in August 2018.

Forward-looking Statements
This press release contains forward-looking statements. Such statements may be preceded by the words "intends," "may," "will," "plans," "expects," "anticipates," "projects," "predicts," "estimates," "aims," "believes," "hopes," "potential" or similar words. Forward-looking statements are not guarantees of future performance, are based on certain assumptions, and are subject to various known and unknown risks and uncertainties, many of which are beyond the Company's control and cannot be predicted or quantified; consequently, actual results may differ materially from those expressed or implied by such forward-looking statements. Such risks and uncertainties include, without limitation, risks and uncertainties associated with (i) our inability to manufacture our products and product candidates on a commercial scale on our own, or in collaboration with third parties; (ii) difficulties in obtaining financing on commercially reasonable terms; (iii) changes in the size and nature of our competition; (iv) loss of one or more key executives or scientists; and (v) difficulties in securing regulatory approval to market our products and product candidates. More detailed information about the Company and the risk factors that may affect the realization of forward-looking statements is set forth in the Company's filings with the Securities and Exchange Commission (SEC), including the Company's Annual Report on Form 10-K and its Quarterly Reports on Form 10-Q. Investors and security holders are urged to read these documents free of charge on the SEC's website at http://www.sec.gov. The Company assumes no obligation to publicly update or revise its forward-looking statements as a result of new information, future events or otherwise.

Amazon makes three major AI announcements during re:Invent 2019 – AI News

Amazon has kicked off its annual re:Invent conference in Las Vegas and made three major AI announcements.

During a midnight keynote, Amazon unveiled Transcribe Medical, SageMaker Operators for Kubernetes, and DeepComposer.

Transcribe Medical

The first announcement we'll be talking about is likely to have the biggest impact on people's lives soonest.

Transcribe Medical is designed to transcribe medical speech for primary care. The feature is aware of medical speech in addition to standard conversational diction.

Amazon says Transcribe Medical can be deployed across thousands of healthcare facilities to provide clinicians with secure note-taking abilities.

Transcribe Medical offers an API and can work with most microphone-equipped smart devices. The service is fully managed and sends back a stream of text in real-time.

Furthermore, and most importantly, Transcribe Medical is covered under AWS HIPAA eligibility and business associate addendum (BAA). This means that any customer that enters into a BAA with AWS can use Transcribe Medical to process and store personal health information legally.
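
For the batch (non-streaming) side of the service, a call looks roughly like the boto3 sketch below; the job, bucket, and file names are placeholders, and the real-time streaming mode described above goes through a separate streaming SDK.

```python
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Start a medical transcription job on an audio file already in S3.
transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="primary-care-visit-001",
    LanguageCode="en-US",
    Specialty="PRIMARYCARE",   # matches the primary-care focus noted above
    Type="CONVERSATION",       # clinician-patient dialogue, not dictation
    Media={"MediaFileUri": "s3://example-bucket/visit-001.wav"},
    OutputBucketName="example-transcripts-bucket",
)

# Poll for completion; the transcript lands in the output bucket.
job = transcribe.get_medical_transcription_job(
    MedicalTranscriptionJobName="primary-care-visit-001"
)
print(job["MedicalTranscriptionJob"]["TranscriptionJobStatus"])
```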

SoundLines and Amgen are two partners that Amazon says are already using Transcribe Medical.

Vadim Khazan, president of technology at SoundLines, said in a statement:

"For the 3,500 health care partners relying on our care team optimisation strategies for the past 15 years, we've significantly decreased the time and effort required to get to insightful data."

SageMaker Operators for Kubernetes

The next announcement is Amazon SageMaker Operators for Kubernetes.

Amazon's SageMaker is a machine learning development platform, and this new feature lets data scientists using Kubernetes train, tune, and deploy AI models.

SageMaker Operators can be installed on Kubernetes clusters, and jobs can be created using Amazon's machine learning platform through the Kubernetes API and command-line tools.
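
In practice, "creating jobs through the Kubernetes API" means submitting a TrainingJob custom resource to a cluster where the operator is installed. The sketch below uses the official Kubernetes Python client; the resource fields follow the operator's CRD conventions, and the image, role, and bucket values are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # cluster with the SageMaker operator installed

training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "xgboost-demo"},
    "spec": {
        "roleArn": "arn:aws:iam::111122223333:role/sagemaker-role",
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "EXAMPLE.dkr.ecr.us-east-1.amazonaws.com/xgboost:1",
            "trainingInputMode": "File",
        },
        "outputDataConfig": {"s3OutputPath": "s3://example-bucket/output"},
        "resourceConfig": {
            "instanceCount": 1,
            "instanceType": "ml.m5.large",
            "volumeSizeInGB": 5,
        },
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

# The operator watches for this object and provisions the SageMaker job.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com", version="v1",
    namespace="default", plural="trainingjobs", body=training_job,
)
```

Afterwards the job can be inspected like any Kubernetes object, for example with kubectl get trainingjobs.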

In a blog post, AWS deep learning senior product manager Aditya Bindal wrote:

"Customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale."

Amazon says that compute resources are pre-configured and optimised, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete.

SageMaker Operators for Kubernetes is generally available in AWS server regions including US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland).

DeepComposer

Finally, we have DeepComposer. This one is a bit more fun for those who enjoy playing with hardware toys.

Amazon calls DeepComposer "the world's first machine learning-enabled musical keyboard." The keyboard features 32 keys spanning two octaves and is designed for developers to experiment with pretrained or custom AI models.

In a blog post, AWS AI and machine learning evangelist Julien Simon explains how DeepComposer taps a Generative Adversarial Network (GAN) to fill in gaps in songs.

After recording a short tune, a model for the composer's favourite genre is selected, along with the model's parameters. Hyperparameters are then set, along with a validation sample.

Once this process is complete, DeepComposer generates a composition that can be played in the AWS console or even shared to SoundCloud (then it's really just a waiting game for a call from Jay-Z).
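
Amazon has not published DeepComposer's internals in detail, so the following is only a generic, minimal GAN training step in PyTorch on toy tensors, to illustrate the generator-versus-discriminator mechanic the article refers to:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))  # generator
D = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 16)  # stand-in for encoded real accompaniment bars

for step in range(200):
    # Discriminator: score real data high and generated data low.
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: produce data the discriminator scores as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```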

Developers itching to get started with DeepComposer can apply for a physical keyboard ahead of availability, or get started now with a virtual keyboard in the AWS console.

The impact of artificial intelligence on humans – Bangkok Post

Will the machines take control? Not if we focus on developing the skills that AI cannot replicate

From Siri, the virtual assistant in Apple mobile devices, to self-driving cars, artificial intelligence (AI) is progressing rapidly, outperforming humans at some tasks. As with the majority of the changes happening globally, there will be positive and negative impacts as AI continues to shape the world we live in. Every single one of us will have to reckon with our ability to balance the human way of life and the transition to the AI cosmos.

According to a report by the technology research group IDC, spending on AI is expected to reach US$46 billion by 2020 with no signs of slowing down. AI is definitely on the rise in both business and life in general. The question is, will humans eventually lose control as machines become super-intelligent? Unforeseen consequences are likely whenever a new technology is introduced, and AI is no exception.

It is obvious that AI is a disruptive technology, revolutionising businesses and bringing new approaches to decision-making based on measurable outcomes. It can enhance efficiency and production volume, while cultivating new opportunities for revenue to flourish.

We have to face the fact that humans aren't always the best at tedious and repetitive tasks, whereas machines don't get tired or complain. This is where AI is starting to play an important role: freeing humans from drudgery so that we can focus on interpersonal relations and more creative work.

Is it true that robots and AI will destroy jobs? That is something we hear quite often. Everyone has their own opinions about the pluses and minuses of the technology. However, if you think about it in a positive way, AI is actually encouraging evolution in the job market, as candidates come to realise they need to develop new types of skills in order to secure fulfilling work amid rapid technological advancements.

The truth is, people will still work, but they will work better with the assistance of AI. In other words, the unparalleled duo of humans and machines coming together will soon become the new normal in the workforce. Already there are many routine white-collar tasks, such as answering emails and data entry, that can be handled by intelligent assistants if businesses are prepared to recognise the potential.

Away from the office, we can see that more and more people are living in smart homes or equipping their residences with hardware and software that can reduce energy usage and provide better security, among other benefits. AI is also having a profound impact on healthcare, improving the diagnosis and treatment of many conditions and leading to healthier citizens and healthier economies.

The ability of technology to answer more questions, solve more problems and innovate in previously unimaginable ways goes beyond the capacity of the human brain, for better or worse, depending on how one perceives this subject. The elevation of technology will allow individuals to focus on higher functions, with improved living standards.

Challenges will continue to come and go, but the biggest one will be for humans to find their place in this new world, by staking a claim to all the activities that call for their unique human abilities.

A study by PwC forecast that 7 million existing jobs will be replaced by AI in the UK from 2017 to 2037. However, 7.2 million new jobs could be created as well. Yes, many humans are wondering whether they will be part of the 7 million or part of the 7.2 million. Living with this uncertainty is a struggle for many given the transformative impact of AI on our society and the economic, political, legal and regulatory implications that need to be prepared for.

At its core, AI is about imitating human thought processes. Human beings essentially have to teach AI the how-to of practically everything, but AI cannot be taught how to be empathic, something only humans can do. It is one thing to allow machines to predict and help solve problems; it is another to purposely make them control the ways in which people will be made redundant.

Therefore, it is vital for us to be more sceptical of AI and recognise its shortcomings together with its potential. By focusing more on training people in soft skills, starting in school, we can help produce a greater number of employable humans who will be able to work alongside machines to deliver the best of both worlds.

Arinya Talerngsri is Chief Capability Officer and Managing Director at SEAC - Southeast Asia's Lifelong Learning Center. She can be reached by email at arinya_t@seasiacenter.com or https://www.linkedin.com/in/arinya-talerngsri-53b81aa. Explore and experience our lifelong learning ecosystem today at https://www.yournextu.com

What Jobs Will Artificial Intelligence Affect? – EHS Today

It's impossible to ignore the fact that advances in artificial intelligence (AI) are changing how we do our current jobs. But what has captured even more interest is how the increasing capability of this technology will affect future jobs.

In trying to determine the specific effects on particular jobs and sectors, many studies have been undertaken, but this information has proven hard to capture.

To add further research to this topic, the Brookings Institution issued a report on Nov. 20 presenting a new method of analyzing this issue.

"By employing a novel technique developed by Stanford University Ph.D. candidate Michael Webb, the new report establishes job 'exposure levels' by analyzing the overlap between AI-related patents and job descriptions," the report said. In this way, the research homes in on the impacts of AI specifically, and does so by studying empirical statistical associations as opposed to expert forecasting.

The technique Webb used quantifies the overlap between the text of AI patents and the text of job descriptions, identifying the kinds of tasks and occupations likely to be affected by particular AI capabilities.
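
Webb's actual method matches verb-object pairs from patent titles against occupational task descriptions; as a simplified sketch of the same overlap idea, one can vectorize both corpora and use cosine similarity as a crude exposure score. All texts below are invented stand-ins.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ai_patents = [
    "method for detecting anomalies in medical images using deep learning",
    "system for predicting loan default from transaction data",
]
occupations = {
    "radiologic technologist": "operate imaging equipment and review images",
    "loan officer": "evaluate credit applications and assess default risk",
    "landscaping worker": "mow lawns, plant shrubs and operate mowers",
}

vectorizer = TfidfVectorizer(stop_words="english")
patent_matrix = vectorizer.fit_transform(ai_patents)
occupation_matrix = vectorizer.transform(occupations.values())

# Exposure score: mean similarity of a job description to the patent corpus.
for job, sims in zip(occupations,
                     cosine_similarity(occupation_matrix, patent_matrix)):
    print(f"{job}: {sims.mean():.3f}")
```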

"We find that Webb's AI measures depict a very different range of impacts on the workforce than those from robotics and software. Where the robotics and software that dominate the automation field seem mostly to involve routine or rule-based tasks, and thus lower- or middle-pay roles, AI's distinctive capabilities suggest that high-wage occupations will be some of the most exposed," the report noted.

"Patents are useful here because they provide timely predictions of the commercial relevance of specific technological applications. Occupational descriptions are also useful because they provide detailed insight into economic activities at the scale of the whole economy."

Findings

Based on these conclusions, the report says that we have a lot to learn about AI and that these are extremely early days in our inquiries. What's coming may not resemble what we have experienced or expect to experience.

Society should get ready for a very different pattern of impact than those that accompanied the broad adoption of robotics and software. While the last waves of automation led to increases in inequity and wage polarization, it's not clear that AI will have the same effects.

India wants to be on the cusp of artificial intelligence but lacks the laws to back it up – Business Insider India

Machines are only set to get smarter as more applications of AI come to light. Research around AI has grown sevenfold since 1996, according to a study by Tata Consultancy Services (TCS). Yet India lags far behind.

"We have one of the best technology talent pools in the world. If we fast-track and balance our progress on innovation, IP management, and entrepreneurship, we can realize the potential to become a global AI powerhouse," said Santosh Mohanty, the Global Head for Components Engineering Group at TCS.

It's hardly surprising since out of 22,000 PhD researchers around the world, only 386 are from India, according to the Global AI Talent Report 2018.

Lacking in laws

Until 2002, computer-related inventions were deemed ineligible for patents in India. Even though that has now changed, the existing laws pose their own challenges, such as being too ambiguous and vague.

For instance, an algorithm can't be patented unless it has a practical application or use case, even if it's solving a problem behind the scenes.

All of the leading patent holders, except for IBM, are focused on computer vision, a form of machine perception that uses deep learning to identify objects, videos, and images. IBM, on the other hand, is focused on natural language processing applications like chatbots.

Alphabet, Google's parent company, is second only to the Chinese tech giant Baidu in owning portfolios of patents related to deep learning.

See also: India is all set to deploy facial recognition but there is no law in place to keep a check

Here's what global tech CEOs have to say about India's data protection laws

Artificial intelligence use ‘must be transparent and accountable’ – The Irish News

Companies planning on using artificial intelligence (AI) in their work should ensure it is transparent and accountable, the Information Commissioner's Office (ICO) has said.

The UK's data watchdog has published its first draft regulatory guidance into the use of AI, in collaboration with the Alan Turing Institute.

It warned that the public are still uneasy over the use of computer software to make decisions previously made by humans, so any systems must be transparent and provide clear explanations of decisions made.

The guidance identified four key principles for AI: transparency, accountability, consideration of context and reflection on impacts.

The ICO said it had found that more than half of people remain concerned about machines making complex, automated decisions about them.

"The potential for AI is huge, but its implementation is often complex, which makes it difficult for people to understand how it works," said Simon McDougall, the ICO's executive director of technology and innovation.

"And when people don't understand a technology, it can lead to doubt, uncertainty and mistrust."

Last year, ministers published the AI Sector Deal, a joint venture between the Government and industry to try to push the UK to the forefront of emerging technology such as AI.

The ICO and the Alan Turing Institute's draft guidance follows an independent review by Professor Dame Wendy Hall, after which the Government urged both parties to provide input on the subject.

The guidance said the four main principles are rooted in the General Data Protection Regulation (GDPR), the EU-wide law introduced last year to hand greater control over personal data to individuals.

The principles say organisations should ensure decisions made by AI are obvious and appropriately explained to people in a meaningful way.

On accountability, it says firms should ensure appropriate oversight of AI decision systems, and be answerable to others.

It also called for companies to reflect on the impact their AI use would have by ensuring they "ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome".

The ICO said it will consult on its guidance until January 24, and Mr McDougall encouraged industry experts to respond to its draft before then.

"The decisions made using AI need to be properly understood by the people they impact," he said.

"This is no easy feat and involves navigating the ethical and legal pitfalls around the decision-making process built into AI systems."

It Pays To Break Artificial Intelligence Out Of The Lab, Study Confirms – Forbes

Yes, artificial intelligence (AI) is proving itself to be a worthwhile tool in the business arena, at least in focused, preliminary projects. Intelligent chatbots are a classic example. Now it's a question of how quickly it can be expanded to deliver on a wider basis across the business, to automate decisions around inventory or investments, for example.

There's progress on this front, as shown in McKinsey's latest survey of 2,360 executives, which shows a nearly 25 percent year-over-year increase in the use of AI in various business processes, along with a sizable jump in companies spreading AI across multiple processes.

"A majority of executives in companies that have adopted AI report that it has increased revenues in areas where it is used, and 44 percent say it has reduced costs," the survey's authors, Arif Cam, Michael Chui, and Bryce Hall, all with McKinsey, state.

The results also show that a small share of companies, which the authors call "AI high performers," are attaining outsize business results from AI. Close to two in three companies, 63 percent, report revenue increases from AI adoption in the business units where it is used. Respondents from high performers are nearly three times likelier than their lagging counterparts to report revenue gains of more than 10 percent, the survey shows.

The leading AI use cases include marketing and sales, product and service development, and supply-chain management. "In marketing and sales, respondents most often report revenue increases from AI use in pricing, prediction of likelihood to buy, and customer-service analytics," the survey's authors report. In product and service development, revenue-producing use cases include the creation of new AI-based products and new AI-based enhancements. And in supply-chain management, respondents often cite sales and demand forecasting and spend analytics as use cases that generate revenue.

What are these high performers doing differently? Strategy is a key area. For example, 72 percent of respondents from AI high performers say their companies' AI strategy aligns with their corporate strategy, compared with 29 percent of respondents from other companies. Similarly, 65 percent from the high performers report having a clear data strategy that supports and enables AI, compared with 20 percent from other companies. The application of standardized tools across the enterprise is also more likely to be seen at high performers.

Retraining workers is also a key differentiator, the survey shows. One-third of high performers, 33 percent, indicate the majority of their workforce has received AI-related training over the past year, compared with 5 percent of lagging organizations. Over the next three years, 42 percent of high performers intend to extend such training to most of their workers, versus only 17 percent of their lagging counterparts.

For AI to take hold, the McKinsey authors urge ramping up workforce retraining. Even the AI high performers have work to do in several key areas, the survey's authors point out. Only 36 percent of respondents from these companies say their frontline employees use AI insights in real time for daily decision-making. A minority, 42 percent, report they systematically track a comprehensive set of well-defined key performance indicators for AI. Likewise, only 35 percent of respondents from AI high performers report having an active continuous learning program on AI for employees.

Dyno Therapeutics Announces Research Published in Science Enabling Artificial Intelligence Approach to Create New AAV Capsids for Gene Therapies -…

CAMBRIDGE, Mass.--(BUSINESS WIRE)--Dyno Therapeutics, a biotechnology company pioneering the use of artificial intelligence in gene therapy, today announced a publication in the journal Science that demonstrates the power of a comprehensive machine-guided approach to engineering improved capsids for gene therapy delivery. The research was conducted by Dyno co-founders Eric D. Kelsic, Ph.D. and Sam Sinai, Ph.D., together with colleague Pierce Ogden, Ph.D., at Harvard's Wyss Institute for Biologically Inspired Engineering and the Harvard Medical School laboratory of George M. Church, Ph.D., a Dyno scientific co-founder. The publication, entitled "Comprehensive AAV capsid fitness landscape reveals a viral gene and enables machine-guided design," is available here.

AAV capsids are presently the most commonly used vector for gene therapy because of their established ability to deliver genetic material to patient organs with a proven safety profile. However, there are only a few naturally occurring AAV capsids, and they are deficient in essential properties for optimal gene therapy, such as targeted delivery, evasion of the immune system, higher levels of viral production, and greater transduction efficiency. Starting at Harvard in 2015, the authors set out to overcome the limitations of current capsids by developing new machine-guided technologies to rapidly and systematically engineer a suite of new, improved capsids for widespread therapeutic use.

In the research published in Science, the authors demonstrate the advance of their unique machine-guided approach to AAV engineering. Previous approaches have been limited by the difficulty of altering a complex capsid protein without breaking its function and by the general lack of knowledge regarding how AAV capsids interact with the body. Historically, rather than addressing this challenge directly, the most popular approaches to capsid engineering have taken a roundabout solution: generating libraries of new capsids by making random changes to the protein. However, since most random changes to the capsid actually result in decreased function, such random libraries contain few viable capsids, much less improved ones. Recognizing the limitation of conventionally generated capsid libraries, the authors implemented a machine-guided approach that gathered a vast amount of data using new high-throughput measurement technologies to teach them how to build better libraries and, ultimately, lead to synthetic capsids with optimized delivery properties.

Focusing on the AAV2 capsid, the authors generated a complete landscape of all single codon substitutions, insertions and deletions, then measured the functional properties important for in vivo delivery. They then used a machine-guided approach, leveraging these data to efficiently generate diverse libraries of AAV capsids with multiple changes that targeted the mouse liver and that outperformed AAVs generated by conventional random mutagenesis approaches. In the process, the authors' systematic efforts unexpectedly revealed a previously unrecognized protein encoded within the sequence of all the most popular AAV capsids, which they termed the membrane-associated accessory protein (MAAP). The authors believe that the protein plays a role in the natural life cycle of AAV.

"This is just the beginning of machine-guided engineering of AAV capsids to transform gene therapy," underscores co-author Sam Sinai, Ph.D., Lead Machine Learning Scientist and co-founder of Dyno Therapeutics. "The success of the simple linear models used in this study has led us to pursue more data and higher-capacity machine learning models, where the potential for improvement in capsid designs feels boundless."
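
The paper's models and data are not reproduced here; the sketch below only illustrates the "simple linear model" idea under toy assumptions: one-hot encode each variant's amino-acid sequence and fit an additive model of measured fitness. The variants and fitness values are placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(sequence):
    """Flatten a sequence into a (positions x 20) indicator vector."""
    x = np.zeros((len(sequence), len(AMINO_ACIDS)))
    for position, aa in enumerate(sequence):
        x[position, AA_INDEX[aa]] = 1.0
    return x.ravel()

variants = ["ACDK", "ACEK", "GCDK", "ACDW"]  # toy capsid fragments
fitness = [0.9, 0.4, 0.1, 0.7]               # toy viability measurements

model = Ridge(alpha=1.0).fit([one_hot(v) for v in variants], fitness)

# Score an unseen variant; high predictions nominate candidates for the
# next designed library.
print(model.predict([one_hot("GCEK")]))
```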

"The results in the Science publication demonstrate, for the first time, the power of linking a comprehensive set of advanced techniques (large-scale DNA synthesis, pooled in vitro and in vivo screens, next-generation sequencing readouts, and iterative machine-guided capsid design) to generate optimized synthetic AAV capsids," explains co-first and co-corresponding author Eric D. Kelsic, Ph.D., CEO and co-founder of Dyno Therapeutics. "At Dyno, our team is committed to advancing these technologies to identify capsids that meet the urgent needs of patients who can benefit from gene therapies."

About Dyno Therapeutics
Dyno Therapeutics is a pioneer in applying artificial intelligence to gene therapy. The company's powerful and proprietary genetic engineering platform is designed to rapidly and systematically develop improved AAV capsids that redefine the gene therapy landscape. Dyno was founded by experienced biotech entrepreneurs and leading scientists in the fields of synthetic biology, gene therapy, and machine learning. The company is located in Cambridge, Massachusetts. For additional information, please visit the company website at http://www.dynotx.com

Artificial Intelligence, climate change and the U.S military – The Red (Team) Analysis Society

AI, AI Everywhere

The artificial intelligence (AI) field is creating a continuity that encompasses climate change science and the preparedness of the U.S. military for climate risks. This continuity appears through the central role of AI in two apparently disconnected foresight uses, one civilian and one military.

Climate Central published in Nature a new assessment of climate change effects. It establishes that 300 million people will be threatened by sea-level rise and coastal flooding by 2050. By 2100, the land where 200 million people live today could be submerged daily (Climate Central, Report: Flooded Future: Global vulnerability to sea level rise worse than previously understood, October 29, 2019). This estimate triples previous assessments. It is the result of the use of AI to correct a series of datasets.

AI predicts sea-level rise and coastal flooding will threaten 300 million people by 2050.

Previously we thought 80 million people would be at risk by 2100.

During the same period, the Centre for Climate and Security published an article about a recent publication by the U.S. Army War College. The document, "Implications of Climate Change for the U.S. Army," however, can no longer be found on the publications page of the U.S. Army War College. A rapid internet search finds the report cited in a few articles and posted in PDF form by internet journals such as Vice and Popular Mechanics. Yet it cannot be found on official Department of Defense websites.

Nonetheless, this document establishes that adapting to the violent ecological, military, political, economic and social consequences of climate change is a dire and imperative necessity for the Army and for the entire U.S. military. Some parts of this report are centred on the use of artificial intelligence for force enhancement and energy use. It also calls for the modernization of training through better and more systematic use of virtual training and simulation.

In other words, artificial intelligence is creating a cognitive bridge between climate science and the U.S. military. It also creates new possibilities for adapting to the short- and long-term consequences of climate change.

In this article, we are going to study the strategic consequences of these scientific and military uses of AI in the climate change field. We are also going to see how the introduction of AI into both climate change and military affairs defines the emergence of a new political and planetary era.

Between now and 2100, a total of 360 (310-420) million people living on coastlines will be put at risk by flooding induced by climate change-driven sea-level rise (Climate Central, ibid). Compared with the current global population of 7.5 billion people, this means that roughly one person in 21 is going to be put at risk by this planetary trend with, at least, an annual flood, while the rise of the ocean could reach almost two metres. Those results are in sharp contrast with a former assessment establishing that 80 million people would be at risk at the end of the century.

On the lowest and most densely populated coastlines, as in Bangladesh, Vietnam, China, Indonesia, Thailand, the Netherlands, and Louisiana, among others, 237 to 300 million people will be threatened by annual flooding in 2050. Those humongous numbers are the result of a new calculation. This new approach rests upon an AI neural-network system cleaning the dataset previously used by scientists (Climate Central report in Nature, Scott A. Kulp and Benjamin H. Strauss, New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding, 29 October 2019).

This dataset is a compilation of NASA and other satellite observations and air-based lidar observations (Kulp and Strauss, ibid). The AI system corrected various errors. For example, it corrected the way some space- or air-based sensors could confuse coastal ground altitude with the altitude of city skylines. Those errors implied that those coastlines were higher, and thus safer, than they really are. This new neural-network digital elevation model generates new results. It also generates an interactive visualization that warns about the shape of things soon to come.
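
The study's actual elevation model is not reproduced here; the sketch below shows only the general correction idea, with invented features: learn a mapping from biased satellite elevations (plus proxies for vegetation and buildings, the surfaces that pull radar readings upward) toward accurate lidar ground truth, then apply it where only satellite data exist.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 1000
# Toy per-pixel features: raw satellite elevation, vegetation index,
# built-up density.
X = np.column_stack([
    rng.uniform(0, 10, n),  # satellite-derived elevation (m)
    rng.uniform(0, 1, n),   # vegetation index
    rng.uniform(0, 1, n),   # built-up density
])
# Synthetic lidar "truth": the radar reading overestimates more where
# vegetation and buildings are dense, mimicking the bias described above.
y = X[:, 0] - 2.0 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 0.1, n)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)

# Corrected elevations then feed the downstream flood-exposure counts.
print(model.predict(X[:5]))
```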

This study also establishes that, very likely, the amplitude of the sea-level rise will overwhelm the ability and resources of countries and cities to build coastal flood defences, such as levees and seawalls. It clearly appears that developing countries as well as old industrialized countries are at risk, from the coasts of Vietnam to those of Florida.

However, the authors of the study are careful to note that their study does not factor in several variables. Among them are future coastal population densities and the geomorphological consequences of wetland submersion and accelerated ground erosion. The authors also note that they have not yet integrated the socioeconomic consequences of this climate-ocean trend. Neither have they developed scenarios about the mass migrations, social unrest and conflicts that this AI-based research implies.

In a previous article, we saw how the U.S. Army research branch makes use of climate change research in order to define and propose a massive military adaptation effort (Jean-Michel Valantin, The U.S Army versus a Warming Planet, The Red (Team) Analysis Society, November 12, 2019).

In this report, the authors promote the use of artificial intelligence in order to develop smart electrical and distributed grids, because "The automated, A.I.-enhanced force of the Army's future is one that runs on electricity, not JP-8 (fuel). More efficient or resilient production of electricity through micro-nuclear power generation or improved solar arrays can fundamentally alter the mobility and the logistical challenges of a mechanized force" (p. 22).

So, these recommendations aim at developing the robustness and resiliency of U.S. Army operations in an energy-constrained and climate-sensitive near future. This development will depend upon the interactions between AI and robotization, that is to say, the military integration of actuators (Hélène Lavoix, Sensor and Actuator for AI: Inserting Artificial Intelligence in Reality, The Red (Team) Analysis Society, 14 January 2019). Those are the AI extension into physical reality. So, in military terms, AI will support and optimize the deployment of mechanical ground forces on theatres of operations (Hélène Lavoix, Sensor and actuator (4): Artificial Intelligence, the Long March towards Advanced Robots and Geopolitics, The Red (Team) Analysis Society, May 13, 2019).

In order to better prepare military actors for these new realities, the report also advocates a massive use of virtual reality. Indeed, training through virtual reality simulations could help better prepare officers and operators (Hélène Lavoix, "How to Win a War with Artificial Intelligence and Few Casualties", The Red (Team) Analysis Society, May 27, 2019). As it happens, they will have to handle future semi-automated military capabilities in a world brutalized by climate change. AI would also support the responses of the U.S. military to massive foreign and domestic cyber attacks, and it would drive the development of the U.S. military in the current technological race.

It is difficult not to think that, in the parts about the use of artificial intelligence, the authors are alluding to the current massive militarization of AI by the Chinese military, both in training and at the operational and decision-making levels (Jean-Michel Valantin, "Militarizing Artificial Intelligence: China (1) and (2)", The Red (Team) Analysis Society, April 23, 2018).

It must be kept in mind that these recommendations are part of a U.S. Army advocacy for climate change adaptation. What motivates these military recommendations is the rapid multiplication of multidimensional risks (Jean-Michel Valantin, "The Midwest, the Trade War and the Swine Flu Pandemic: the Agricultural and Food Super Storm is Here", The Red (Team) Analysis Society, June 3, 2019), such as those the Climate Central report identifies for sea-level rise.

As we can see, AI becomes a central feature of the new reality landscape. As such, it becomes a climate science tool as well as a military tool for transformation and adaptation to our warming and riskier planet.

In other words, AI is entering the fray of the hyper siege, i.e. the cascade of consequences that interlock social, infrastructural and biological vulnerabilities with climate-driven events. Those cascades are becoming an entity that is besieging contemporary societies (Jean-Michel Valantin, "Hyper Siege: Climate Change and U.S. National Security", The Red (Team) Analysis Society, March 17, 2014, and "The U.S. Navy vs Climate and Ocean Change", The Red (Team) Analysis Society, June 11, 2018; and David Wallace-Wells, The Uninhabitable Earth: Life After Warming, 2019).

So, AI power unveils itself (Hélène Lavoix, "When Artificial Intelligence will Power Geopolitics – Presenting AI", The Red (Team) Analysis Society, November 27, 2017), through scientific research and military preparedness, as a tool and a possible ally in the face of the rapidly approaching perfect climate and social super storm.

In this ecological and strategic context, AI power becomes an artificial continuum, both technological and cognitive. It actuates itself through climate research and military adaptation to the very climate change that it helps foresee. This creates an unexpected alliance between AI power, climate science and military foresight and warning. This new AI power will be useful for adapting to the planetary crisis and its cascade of hyper violent consequences (Jean-Michel Valantin, The Planetary Crisis Rules, part 1, 2, 3, 4, 5, The Red (Team) Analysis Society).

In strategic terms, the convergence of AI power and the will and capabilities to adapt to the Long Emergency is going to define who will be the winners and losers of the planetary crisis.

And the race is already on.


The Best Artificial Intelligence Stocks of 2019 — and The Top AI Stock for 2020 – The Motley Fool

Artificial intelligence (AI) -- the capability of a machine to mimic human thinking and behavior -- is one of the biggest growth trends today. Spending on AI systems will increase by more than two and a half times between 2019 and 2023, from $37.5 billion to $97.9 billion, for a compound annual growth rate of 28.4%, according to estimates by research firm IDC. Other sources are projecting even more torrid growth rates.
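As a quick check on the arithmetic, a compound annual growth rate is simply the constant yearly rate that carries the starting figure to the ending figure. A minimal sketch using the rounded numbers quoted above (the small gap versus IDC's published 28.4% comes from rounding and from IDC's exact base-year inputs, which are not given here):

```python
# Compound annual growth rate (CAGR) implied by the spending figures above.
start, end = 37.5e9, 97.9e9   # AI systems spending: 2019 and 2023 (IDC estimates)
years = 2023 - 2019

cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")    # ~27.1% with these rounded figures; IDC quotes 28.4%
```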

There are two broad ways you can get exposure to the AI space: investing in companies that produce and sell AI-related products and/or services, and investing in companies that primarily use AI to improve their own products and services.

With this background in mind, let's look at which AI stocks are performing the best so far this year (through Nov. 25) and which one is my choice for best AI stock for 2020.


The following chart isn't meant to be all-inclusive, as that would be impossible, and the chart has limits on the number of metrics. Notable among the companies missing are Advanced Micro Devices and Intel. They were left out largely because NVIDIA is currently the leader in supplying AI chips. While there are things to like about shares of both of these companies, NVIDIA stock is the better play on AI, in my view.

[Chart: year-to-date stock performance of the AI companies discussed, through Nov. 25. Data by YCharts.]

Graphics processing unit (GPU) specialist NVIDIA (NASDAQ:NVDA), e-commerce and cloud computing titan Amazon, computer software and cloud computing giant Microsoft, Google parent and cloud computing provider Alphabet, old-technology-guard and multifaceted AI player IBM, and Micron Technology, which makes computer memory chips and related storage products, would best be put in the first category above. They produce and sell AI-related products and/or services. They're all also probably using AI internally, with Amazon and Alphabet being notably heavy users of the tech to improve their products.

iPhone maker Apple (NASDAQ:AAPL), social media leader Facebook (NASDAQ:FB), video-streaming king Netflix, and Stitch Fix, an online personal styling service provider, would best be categorized in the second group, since they're either primarily or solely using AI to improve their products and services.

Now let's look at some basic stats for the three best performers of this group.

[Table: market cap, forward P/E, Wall Street's 5-year estimated average annual EPS growth, and 5-year stock return for Apple, NVIDIA, and Facebook, with the S&P 500 as a benchmark; the figures themselves did not survive in this text version.]

Data sources: YCharts (returns) and Yahoo! Finance (all else). P/E = price-to-earnings ratio. EPS = earnings per share. Data as of Nov. 25, 2019.

On a valuation basis alone, Facebook stock looks the most compelling when we take earnings growth estimates into account. Then would come Apple and then NVIDIA. However, there are other factors to consider, with the biggie being that projected earnings growth is just that, projected.

There's a good argument to be made that NVIDIA has a great shot at exceeding analysts' earnings estimates. Why? Because it has a fantastic record of doing so, and all one needs to do is listen to enough quarterly earnings calls with Wall Street analysts to realize why this is so: A fair number of them don't seem to have a strong grasp of the company's operations and products. (I'm not knocking, as most analysts don't have technical backgrounds, and they cover a lot of companies.)

Facebook stock probably has the potential to continue to be a long-term winner. But its relatively high regulatory risk profile means it isn't a good fit for all investors. Moreover, it will likely have to keep spending a ton of money to help prevent "bad actors" from using its site for various nefarious purposes. Indeed, this is one of the major internal functions for which the company is using AI. It also uses the tech to recognize and tag uploaded images, among other things.

Apple uses AI internally in various ways, the most consumer-facing being its voice assistant, Siri. It's the best of these three stocks for more conservative investors, as it has a great long-term track record and pays a modest dividend. NVIDIA, however, is probably the better choice for growth-oriented investors who are comfortable with a moderate risk level.


NVIDIA is the leading supplier of graphics cards for computer gaming, with AMD a relatively distant second. In the last several years, it has transformed itself into a major AI player, or more specifically, a force to be reckoned with in the fast-growing deep-learning category of AI. Its GPUs are the gold standard for AI training in data centers, and it's now making inroads into AI inferencing. (Inferencing involves a machine or device applying what it's learned in its training to new data. It can be done in data centers or "at the edge" -- meaning at the location of the machine or device that's collecting the data.)
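To make the training-versus-inferencing distinction concrete, here is a minimal sketch in PyTorch (an illustrative framework choice; the tiny model and random data are assumptions, and nothing here is NVIDIA-specific beyond using a CUDA GPU when one is available):

```python
# Training vs. inferencing, as described above. Training adjusts model
# weights from labeled data (typically on data-center GPUs); inferencing
# applies the trained model to new data, with gradients disabled.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training phase: illustrative random data standing in for a real dataset.
x = torch.randn(256, 16, device=device)
y = torch.randint(0, 2, (256,), device=device)
for _ in range(100):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# Inferencing phase: could run in the data center or "at the edge".
model.eval()
with torch.no_grad():
    new_sample = torch.randn(1, 16, device=device)
    print(model(new_sample).argmax(dim=1).item())
```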

NVIDIA is in the relatively early stages of profiting from many gigantic growth trends, including AI, esports, driverless vehicles, virtual reality (VR), smart cities, drones, and more. (There is some overlap in these categories, as AI is involved to some degree in most of NVIDIA's products.) There are no pure plays on AI, to my knowledge, but NVIDIA would probably come the closest.


Verata Health Named Leading Innovative Artificial Intelligence Company by the American Hospital Association – PR Web

"This recognition from AHA is another step in our mission to improve patient care while saving time and money for health care providers." - Verata CEO Dr. Jeremy Friese

CHICAGO (PRWEB) December 03, 2019

Verata Health, the AI-powered platform developed to automate and streamline prior authorization for health care procedures, tests, and drugs, was selected by the American Hospital Association (AHA) as one of the most innovative artificial intelligence companies of 2019. Verata will be recognized for its innovation in health care AI at the AHA Executive Forum December 3rd in Chicago.

Errors in prior authorizations account for nearly a third of all provider write-offs and contribute to surprise medical bills received by patients. Verata uses artificial intelligence to automate both simple and clinically complex prior authorizations, a process insurance companies use to screen and approve procedures, tests (including imaging), and drugs before they can be delivered to a patient. Verata automatically identifies when clinical criteria are met, reducing the burden on doctors and hospital staff, preventing errors that lead to expensive write-offs, and helping patients get faster care.

"Verata's ultimate goal is removing the obstacles patients face in order to get the care they need and deserve," said Verata CEO Dr. Jeremy Friese. "The financial benefit to providers is a bonus. It's a win-win."

The AHA is a national organization that represents and advocates for hospitals and health systems, their surrounding communities and the patients they serve. This year's executive forum, "Medicine + Machines," will focus on how AI is driving positive patient experiences and outcomes and transforming current processes within health care organizations.

The event on December 3rd will kick off with a breakfast presentation featuring Dr. Friese and Verata's Chief Medical Officer, Dr. YiDing Yu. Panelists will discuss how AI and automation are being used in hospital systems across the country, including for prior authorizations, to improve patient care.

"I'm pleased Verata is proving to make a difference in patients' lives, and excited to continue spreading awareness of our platform," said Friese. "This recognition from AHA is another step in our mission to improve patient care while saving time and money for health care providers."

About Verata Health

Verata Health is a Minnesota-based, physician-led company changing the way medical practices, hospitals, and payers tackle the challenges of prior authorization. By leveraging powerful artificial intelligence, Verata Health automates both simple and complex prior authorizations, delivering immediate financial value and helping patients get the care they deserve. For more information please visit http://www.veratahealth.com.


How To Get Your Résumé Past The Artificial Intelligence Gatekeepers – Forbes


By Jeff Mills, Director, Solution Marketing at SAP SuccessFactors

It's no longer a secret that getting your résumé past the robot readers to a human, let alone landing an interview, can seem like trying to get in to see the Wizard of Oz. As the résumés of highly qualified applicants are rejected by the initial automated screening, job seekers suddenly find themselves having to learn résumé submission optimization to please the algorithms and beat the bots for a meeting with the Wizard.

Many enterprise businesses use artificial intelligence (AI) and machine learning tools to screen résumés when recruiting and hiring new employees. Even small to midsize companies that use recruiting services are subject to whatever algorithm or search-driven automated résumé screening those services utilize.

Why don't human beings read résumés anymore? Well, they do, but usually later in the process, after the bots produce the initial shortlist. Unfortunately, desirable soft skills and unquantifiable experience can go unnoticed by even the best-trained algorithms. So far, the only solution is human interaction.

Despite the view from outside the organization, HR has good reason for using automated processes to screen résumés. To efficiently manage the hundreds or even thousands of applications submitted for a single position, companies have adopted automated AI screening tools, not only to save time and human effort but also to find qualified and desirable candidates before they move on or someone else gets to them first.
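For a rough sense of what such a first-pass screen can look like, here is a minimal keyword-scoring sketch. Real applicant tracking systems are proprietary and considerably more sophisticated, so the keyword list, threshold, and scoring rule below are illustrative assumptions, not any vendor's actual logic:

```python
# Minimal sketch of a keyword-based resume screen. The keywords and the
# pass threshold are invented for illustration; real AI screeners use far
# richer signals than simple keyword matching.
import re

JOB_KEYWORDS = {"python", "machine learning", "sql", "aws", "agile"}
THRESHOLD = 0.6  # fraction of keywords that must appear to shortlist

def screen(resume_text: str) -> bool:
    text = resume_text.lower()
    hits = [kw for kw in JOB_KEYWORDS
            if re.search(r"\b" + re.escape(kw) + r"\b", text)]
    return len(hits) / len(JOB_KEYWORDS) >= THRESHOLD

resume = "Built machine learning pipelines in Python on AWS; led an agile team."
print(screen(resume))  # True: 4 of the 5 keywords matched
```

A screen of this kind is exactly why qualified candidates get dropped: soft skills and unquantifiable experience never register as "hits."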

Nobody's ever seen the Great Oz!

The wealth of impressive time-saving and turnover-reduction metrics equates to success and big ROI for organizations that automate recruiting and hiring processes. The tales of headache and frustration mostly go untold, among the many thousands of qualified applicants whose résumés somehow failed to tickle the algorithm just right.

This trend is changing, however, as the bias built into AI and machine learning algorithms, unintentional or otherwise, becomes more glaringly apparent and undeniable. Sure, any new technology will have its early adopters, zealous promoters, and apologists, as well as its naysayers and skeptics. But when that technology shows promise to change industry and increase profit, criticism can be drowned out and ignored.

The problem of bias in AI is not a new concern. For several years, scientists and engineers have warned that because AI is created and developed by humans, the likelihood of bias finding its way into the program code is high if not certain. And the time to think about that and address it as much as possible is during the design, development, and testing process. Blind spots are inevitable. Once buy-in is achieved and business ecosystems integrate that technology, the recursive and reciprocal influences of technology, commerce, and society can make changing course slow and/or costly.

Consider the recent trouble Amazon found itself in when it was determined that its AI recruiting tool was biased against women. AI in itself is not biased; it performs only as it is instructed and adapts to new information. Rather, the bias comes from the way human beings program and develop the way machines learn and execute commands. And if the outputs of the AI are taken at face value and never corrected through ongoing human interaction, the system can never adapt.

Bias enters in a few ways. One source is rooted in the data sets used to train algorithms for screening candidates. Others enter when certain criteria are privileged, such as growing up in a certain area, attending a top university, or falling within a preferred age range. By using the data on existing employees as a model for qualified candidates, the screening process can become a kind of feedback loop of biased criteria.

A few methods and practices can help correct or avoid this problem. One is to use broad swaths of data, including data from outside your company and even your industry. Also, train algorithms on a continual basis, incorporating new data and monitoring algorithm function and results. Set benchmarks for measuring data quality, and have humans screen résumés as well. Careful management of automated recruiting and screening solutions can go a long way toward minimizing bias, as well as reducing the number of qualified candidates whose résumés get rejected.

Bell out of order, please knock

As mentioned earlier, change takes time once these processes are in place and embedded. Until it is widely accepted that problems exist, and steps are taken to address them, the best job seekers can do is adapt.

With all of the possible ways that programmers' biases influence the bots screening résumés, what can people applying for jobs do to improve their chances of getting past the AI gatekeepers?

The good news is that these moves will not only help eliminate false negatives and keep your résumé out of the abyss, but they are likely to make things easier for the human beings it reaches.

Well, why didn't you say so? That's a horse of a different color!

So, what are they looking for? How do you beat the bots?

In the big picture, AI is still young, and we are working out the kinks and bugs not only at a basic code and function level, but also on the human level. We are still learning how to navigate and account for our roles and responsibilities in the overall ecosystem of human-computer interaction.

The bottom line is that AI, machine learning, and automation can either eliminate bias or reinforce it. The separation of the two may never be complete, but it's an ideal that is not only worth striving for, it is absolutely necessary to work toward. The impact and consequences of our choices today will leave long-lasting effects on every area of human life.

And the bright side is that we're already beginning to see how those theoretical concerns can play out in the real world, and we have an opportunity to improve a life-changing technological development whose reach and impact we can still only dimly imagine. In the meantime, job seekers looking to beat the bots are not entirely powerless, but can do what human beings have done well for ages: adapt.



Highlights: Addressing fairness in the context of artificial intelligence – Brookings Institution

When society uses artificial intelligence (AI) to help build judgments about individuals, fairness and equity are critical considerations. On Nov. 12, Brookings Fellow Nicol Turner-Lee sat down with Solon Barocas of Cornell University, Natasha Duarte of the Center for Democracy & Technology, and Karl Ricanek of the University of North Carolina Wilmington to discuss artificial intelligence in the context of societal bias, technological testing, and the legal system.

Artificial intelligence is an element of many everyday services and applications, including electronic devices, online search engines, and social media platforms. In most cases, AI provides positive utility for consumers, such as when machines automatically detect credit card fraud or help doctors assess health care risks. However, there is a smaller percentage of cases, such as when AI helps inform decisions on credit limits or mortgage lending, where technology has a higher potential to augment historical biases.

Policing is another area where artificial intelligence has seen heightened debate, especially when facial recognition technologies are employed. When it comes to facial recognition and policing, there are two major points of contention: the accuracy of these technologies and the potential for misuse. The first problem is that facial recognition algorithms could reflect biased input data, which means that their accuracy rates may vary across racial and demographic groups. The second challenge is that individuals can use facial recognition products in ways other than their intended use, meaning that even if these products receive high accuracy ratings in lab testing, any misapplication in real-life police work could wrongly incriminate members of historically marginalized groups.

Technologists have narrowed down this issue by creating a distinction between facial detection and facial analysis. Facial detection describes the act of identifying and matching faces in a database, along the lines of what is traditionally known as facial recognition. Facial analysis goes further to assess physical features such as nose shape (or "facial attributes") and emotions (or "affective computing"). In particular, facial analysis has raised civil rights and equity concerns: an algorithm may correctly determine that somebody is angry or scared but might incorrectly guess why.

When considering algorithmic bias, an important legal question is whether an AI product causes a disproportionate disadvantage, or "disparate impact," on protected groups of individuals. However, plaintiffs often face broad challenges in bringing anti-discrimination lawsuits in AI cases. First, disparate impact is difficult to detect; second, it is difficult to prove. Plaintiffs often bear the burden of gathering evidence of discrimination, a challenging endeavor for an individual when disparate impact often requires aggregate data from a large pool of people.
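To make the statistical side of that question concrete, here is a minimal sketch of one widely used screening test, the "four-fifths rule" from U.S. equal employment guidance; the applicant counts below are invented for illustration:

```python
# Four-fifths rule sketch: a selection rate for one group below 80% of the
# highest group's rate is commonly treated as evidence of adverse impact.
# The counts below are illustrative, not real data.
selected = {"group_a": 48, "group_b": 21}
applicants = {"group_a": 100, "group_b": 100}

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

The sketch also shows why plaintiffs need aggregate data: no single rejection reveals a selection-rate gap.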

Because algorithmic bias is largely untested in court, many legal questions remain about the application of current anti-discrimination laws to AI products. For example, under Title VII of the 1964 Civil Rights Act, private employers can contest disparate impact claims by demonstrating that their practices are a "business necessity." However, what constitutes a business necessity in the context of automated software? Should a statistical correlation be enough to assert disparate impact by an automated system? And how, in the context of algorithmic bias, can a plaintiff feasibly identify and prove disparate impact?

Algorithmic bias is a multi-layered problem that requires a multi-layered solution, which may include accountability mechanisms, industry self-regulation, civil rights litigation, or original legislation. Earlier this year, Sen. Ron Wyden (D-OR), Sen. Cory Booker (D-NJ), and Rep. Yvette Clarke (D-NY) introduced the Algorithmic Accountability Act, which would require companies to conduct algorithmic risk assessments but allow them to choose whether or not to publicize the results. In addition, Rep. Mark Takano (D-CA) introduced the Justice in Forensic Algorithms Act, which addresses the transparency of algorithms used in criminal court cases.

However, this multi-layered solution may require stakeholders to first address a more fundamental question: what is the goal we're trying to achieve? For example, to some individuals, the possibility of inaccuracy is the biggest challenge when using AI in criminal justice. But to others, there are certain use cases where AI does not belong, such as the criminal justice or national security contexts, regardless of whether or not it is accurate. Or, as Barocas describes these competing goals: "when the systems work well, they're Orwellian, and when they work poorly, they're Kafkaesque."


Newsrooms have five years to embrace artificial intelligence or they risk becoming irrelevant – Journalism.co.uk

A new report published this week (18 November 2019) looking at the intersection of AI and journalism has issued a warning to global newsrooms: collaborate with your competitors or face extinction.

The study, 'New powers, new responsibilities. A global survey of journalism and artificial intelligence', is a joint project between Polis, the international journalism think-tank at the London School of Economics and Political Science, and the Google News Initiative, which funded the research.

It surveyed 71 international news organisations on their use of artificial intelligence for editorial purposes over a seven-month period, showing that just 37 per cent of them have a dedicated AI strategy.

Charlie Beckett, director, Polis, London School of Economics and Political Science, said that newsrooms have between two and five years to develop a meaningful strategy, or risk fading out of the digital landscape.

"This is a marathon, not a sprint - but theyve got to start running now," he said.

"Youve got two years to start running and at least working out your route and if youre not active within five years, youre going to lose the window of opportunity. If you miss that, youll be too late."

Even on the lowest possible trajectory, the rate at which natural language processing, translation, text generation and deepfakes are developing means that newsrooms cannot afford to drag their heels, as the knowledge gap will only widen.

"Deepfakes have already accelerated in the last six months from a something in lab to something kids in Macedonia can churn out. Im not trying to panic people - the report stresses the positivity of AI - but there is a real sense of urgency here," he explained.

"Its really clear if you look at other industries that AI is shaping customer behaviour. People expect personalisation, be that in retail or housing, for production, supply or content creation. They use AI because of the efficiencies that it generates and how it enhances the services or products it offers.

"So if we, as journalists, are going to be living in that world, journalism is going to look very dumb if it doesnt have those capabilities. If journalism doesnt get its act together, worse than looking antiquated, it won't be looked at at all."

Despite these alarm bells, integrating AI into the editorial process can have a range of benefits, including taking the burden out of long-winded tasks and sifting through large databases to produce local stories.

While many global newsrooms like Reuters News are quite advanced in this field, integrating significant cultural and operational changes is challenging for cash-strapped local and regional newsrooms.

The report details that many newsrooms struggle because of those financial limitations, but also because of lack of expertise, managerial strategy and time to prioritise AI, as well as scepticism around the technology. Some of these concerns touch on established arguments around algorithmic bias, filter bubbles and the influence of machine learning over editorial decisions.

The report also offers an eight-step pathway to integrating AI in newsrooms, even for those starting from scratch, running from assessing AI readiness right through to creating task-specific roles with AI resources.

But dialogue and exchanging best practices from those in similar circumstances are also important.

Networking is not just a nice idea though, it can be a commercial arrangement.

"It could be a developing a good machine learning program, saying 'Can we benefit by selling this onto other newsrooms, so others can also benefit?' Or sharing data so you can train data better to develop better newsroom tools.

"When you train natural language processing you need a big dataset of images. One newsroom may not have that - are there opportunities for newsrooms to get together and share these tools and benefit? Its not altruism, its called benign self-interest."

Not co-operating, he argues, will lead to mutual self-destruction. But he is already seeing early signs of co-operation, recognising that competitors face common issues. Only by tackling the problem together can they resume their rivalry.

"To have healthy competition, you need healthy business," said Beckett.

"Get the tech right, then that will allow you to focus much harder on what you do differently and best, and how your editorial product is different to your competitors."

Ultimately, he said that AI will be neither the saviour nor the demise of journalism. But it will at least allow local journalists to leave their newsdesk more often and do more field work.

"If youve ever worked in a local newsroom, you know there are people who cant leave the office because they are churning out stories.

"They are losing the very idea of being local, which is going out into the streets, to meet people and to interact with your community. This is to augment and to power-up journalism, its not about journalism turning into a bland robotic product, its about getting back to distinctive journalism."



4 Reasons to Use Artificial Intelligence in Your Next Embedded Design – DesignNews

For many, just mentioning artificial intelligence brings up mental images of sentient robots at war with mankind, and man's struggle to avoid the endangered species list. While this may one day be a real scenario if (a big if) mankind ever creates an artificial general intelligence (AGI), the more pressing matter is whether embedded software developers should be embracing or fearing the use of artificial intelligence in their systems. Here are four reasons why you may want to include machine learning in your next project.

Reason #1 – Marketing Buzz

From an engineering perspective, including a technology or methodology in a design simply because it has marketing buzz is something that every engineer should fight. The fact, though, is that if there is buzz around something, odds are it will in the end help sell the product better. Technology marketing seems to come in cycles, but there are always underlying themes driving those cycles that, at the end of the day, turn out to be real.

Artificial intelligence has progressed through the years, with deep learning on the way. (Image source: Oracle)

Machine learning has a ton of buzz around it right now. I'm finding this year that at industry events, machine learning typically makes up at least 25% of the talks. I've had several clients tell me that they need machine learning in their product, and when I ask about their use case and why they need it, the answer is simply that they need it. I've heard the same story from dozens of colleagues; the push for machine learning seems relentless right now. The driver is not necessarily engineering, but simply leveraging industry buzz to sell product.

Reason #2 – The Hardware Can Support It

It's truly amazing how much microcontrollers and application processors have changed in just the last few years. Microcontrollers, which I have always considered to be resource-constrained devices, now support megabytes of flash and RAM, carry on-board caches, and reach system clock rates of 1 GHz and beyond! These little controllers even support DSP instructions, which means they can efficiently execute inferences.

With the amount of computing power available on these processors, it may not require much additional cost on the BOM to be able to support machine learning. If there's no added cost, and the marketing department is pushing for it, then leveraging machine learning might make sense simply because the hardware can support it!

Reason #3 – It May Simplify Development

Machine learning has risen on the buzz charts for a reason: it has become a nearly indispensable tool for the IoT and the cloud, and it can dramatically simplify software development. For example, have you ever tried to code up an application that can recognize gestures or handwriting, or classify objects? These are really simple problems for a human brain to solve, but extremely difficult to write a program for. In certain problem domains, such as voice recognition, image classification and predictive maintenance, machine learning can dramatically simplify the development process and speed up development.

With an ever-expanding IoT and more data than one could ever hope for, it's becoming far easier to classify large datasets and then train a model to use that information to generate the desired outcome for the system. In the past, developers may have had configuration values or acceptable-operation bounds that were constantly checked during runtime; these often involved lots of testing and a fair amount of guessing. Through machine learning, this can all be avoided by providing the data, developing a model, and then deploying the inference on an embedded system.
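As a concrete illustration of that provide-the-data, develop-a-model, deploy-the-inference flow, here is a minimal sketch that trains a tiny gesture classifier and converts it for an embedded runtime such as TensorFlow Lite for Microcontrollers. The synthetic accelerometer data, model shape, and file name are illustrative assumptions:

```python
# Train a tiny gesture classifier on the desktop, then convert it into a
# compact TensorFlow Lite flatbuffer for on-device inference. The random
# data below stands in for real labeled accelerometer recordings.
import numpy as np
import tensorflow as tf

# Pretend dataset: 128-sample, 3-axis accelerometer windows, 3 gesture classes.
x = np.random.randn(1000, 128, 3).astype("float32")
y = np.random.randint(0, 3, 1000)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(128, 3)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=3, verbose=0)

# Convert for the embedded runtime; quantization shrinks the model so it
# fits in a microcontroller's flash.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("gesture_model.tflite", "wb") as f:
    f.write(converter.convert())
```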

Reason #4 – To Expand Your Solution Toolbox

One aspect of engineering that I absolutely love is that the tools and technologies we use to solve problems and develop products are always changing. Just look at how you developed an embedded system one, three, and five years ago! While some of your approaches have undoubtedly stayed constant, there should have been considerable improvements and additions to your processes that have improved your efficiency and the way that you solve problems.

Machine learning is yet another tool to add to the toolbox, one that, in time, will prove indispensable for developing embedded systems. That tool will never be sharpened, however, if developers don't start to learn about, evaluate, and use it. While it may not make sense to deploy a machine learning solution for a product today or even next year, understanding how it applies to your product and customers, along with its advantages and disadvantages, can help ensure that when the technology is more mature, it will be easier to leverage for product development.

Real Value Will Follow the Marketing Buzz

There are a lot of reasons to start using machine learning in your next design cycle. While I believe marketing buzz is one of the biggest driving forces for tinyML right now, I also believe that real applications are not far behind and that developers need to start experimenting today if they are going to be successful tomorrow. While machine learning for embedded holds great promise, there are several issues that I think should strike a little bit of fear into the cautious developer.

These are concerns for a later time, though, once we've mastered just getting our new tool to work the way we expect it to.

Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost and time to market. He has published more than 200 articles on embedded software development techniques, is a sought-after speaker and technical trainer, and holds three degrees, including a Master of Engineering from the University of Michigan. Feel free to contact him at [email protected] or at his website, and sign up for his monthly Embedded Bytes Newsletter.

