Daily Archives: July 2, 2020

AI Models Say These Are The Best Stocks To Short Today – Forbes

Posted: July 2, 2020 at 4:46 pm

The June jobs report would have had a more significant effect on markets had it not been overshadowed by the spike in coronavirus cases

Markets were up big on the opening bell today after a massive jobs report beat for June from the Labor Department. Nonfarm payrolls soared by 4.8 million jobs, a record increase and far better than the 2.9 million increase that was expected. The unemployment rate also fell to 11.1% versus the expected 12.4%. This was a massive improvement on the 2.7 million jobs gained in May and shows the economic rebound that many economists had predicted. However, markets soon gave back some of those gains as a surge in coronavirus cases was reported across the country, which added 50,600 cases in a single day, with a 10,000-case spike in Florida alone. If you're looking for a place to hedge your exposure, or bet against the market, our deep learning algorithms paired with Artificial Intelligence (AI) technology have you covered with the Top Shorts today.

First on the list is Alpine Immune Sciences Inc, a clinical-stage biopharmaceutical company engaged in discovering and developing protein-based immunotherapies targeting a range of diseases. Our AI systems have identified factor scores of A in Technical, F in Momentum Volatility, and F in Quality Value for the stock, which is up 162.47% for the year. The recent gap higher is giving short sellers an opportunity to bet against the stock here. The financials are mixed: revenue grew by 62.7% in the last fiscal year to $1.74M, a growth of 63.54% over the last three fiscal years from the $1.73M level. Operating Income grew by 16.3% in the last fiscal year but was negative $(43.57)M, much worse than the $(13.47)M from three years ago. EPS grew by 17.54% in the last fiscal year but was negative $(2.28), compared to $(1.20) three years ago. ROE was (113%) in the last year compared to (17.5%) three years ago. Forward 12M Revenue is expected to grow by 66.48% and the stock trades with a forward 12M P/E of 17.29.

Price of Alpine Immune Sciences Inc compared to its Simple Moving Average

Concert Pharmaceuticals Inc is the next Top Short today, with factor scores identified by our deep learning algorithms of C in Technical, C in Momentum Volatility, and D in Quality Value. The stock is up just slightly on the year, at 4.35%. The company aims to discover and develop small-molecule drugs by incorporating new elements into molecules or by leveraging currently approved drugs for new treatments. Revenue grew by 27.42% over the last three fiscal years, to $1.08M from $0.06M. Operating Income grew by 2.21% in the last fiscal year but was negative $(79.02)M, compared to $(51.18)M three years ago. EPS grew by 6.99% in the last fiscal year but came in at negative $(3.29), compared to $4.06 three years ago. ROE was (58.1%) in the last year, much worse than the 67.8% three years ago.

Price of Concert Pharmaceuticals Inc compared to its Simple Moving Average

Another Top Short today is Everspin Technologies Inc (MRAM), a semiconductor company involved in providing Magnetoresistive Random Access Memory (MRAM) products. Factor scores of F in Technical, D in Momentum Volatility, and C in Quality Value have been identified by our AI technology for the stock, which is up 25.73% for the year. Revenue grew by 0.22% in the last fiscal year to $37.5M, a growth of 4.59% over the last three fiscal years from $35.94M. Operating Income was negative, however, at $(13.63)M in the last fiscal year, a growth of 19%. Over the last three fiscal years, Operating Income grew by 45.37% from the negative $(20.21)M level. EPS was also negative at $(0.85) in the last fiscal year, a growth of 18.82%, from $(1.69) three years ago. That was a growth of 59.17% over the last three fiscal years. ROE was (67.6%) in the last year compared to (111.8%) three years ago. Forward 12M Revenue is expected to grow by 7.18% and the stock trades with an expensive forward 12M P/E of 31.32.

Price of Everspin Technologies Inc compared to its Simple Moving Average

Shiftpixy Inc makes the Top Short list today after receiving factor scores of D in Technical, D in Momentum Volatility, and C in Quality Value from our AI technology. The company is primarily a staffing enterprise, specializing in human capital management with an initial focus on the restaurant industry in Southern California. The small-cap company has had a tough year, already down 33.69%, and the longer-term downtrend does not appear to be changing anytime soon. Revenue grew by 940.74% in the last fiscal year to $5.4M from a level of $20.2M three years ago. Operating Income grew by 29.94% in the last fiscal year but was negative $(15.7)M, from $(7.5)M three years ago. EPS was negative $(22.9) in the last fiscal year, growing by 70.52%, compared to $(11.19) three years ago. ROE was (261.1%) three years ago.

Price of Shiftpixy Inc compared to its Simple Moving Average

Our final Top Short today is SG Blocks Inc, a US-based design and construction services company using code-engineered cargo shipping containers for safe and sustainable construction. It is engaged in the redesign, re-purposing and conversion of heavy-gauge steel cargo shipping containers into SG Blocks: safe, green building blocks for commercial, industrial, and residential construction. Factor scores have been identified as C in Technical, F in Momentum Volatility, and C in Quality Value for the stock, which has already lost 32.75% for the year. The financials are a bit messy: while EPS grew by 41.92% in the last fiscal year and by 66.03% over the last three fiscal years, it came in at negative $(22.85) in the last fiscal year versus negative $(39.07) three years ago. Revenue was $2.99M in the last fiscal year compared to $5.06M three years ago. Operating Income was negative $(3.77)M in the last fiscal year, worse than the $(3.26)M from three years ago. ROE was (121%) in the last year, much worse than the reading of (53.2%) three years ago.

Price of SG Blocks Inc compared to its Simple Moving Average


Data Quantity, Complexity Drives Use of AI in Drug Discovery and Testing – Xconomy

Posted: at 4:46 pm

Xconomy San Francisco

The quantity of data about medicines, diseases, and biology is growing. So, too, is the number of companies that employ artificial intelligence in drug discovery. Most of the low-hanging fruit in drug research has already been picked, and the industry is clamoring to make sense of the new data, according to Jeffrey Lu, CEO and co-founder of Engine Biosciences.

"There's only one way to do that: use machines to process the complexity," Lu said.

Lu was one of the speakers featured last week during Xconomy's Xcelerating Life Sciences San Francisco event. His startup's technology platform uses AI to uncover gene interactions underlying diseases. The company also uses AI to test therapies that target these interactions, an approach it contends is faster, less expensive, and more precise than conventional drug discovery techniques. The company is discovering drugs for its own internal R&D, as well as for the pipelines of its pharmaceutical partners.

AI and machine learning techniques are also finding applications in clinical trials. San Francisco-based startup Unlearn.AI is developing technology that uses historical clinical trial data to create a virtual version of a real person, called a digital twin. The digital version predicts what would happen if the patient had received a placebo. This approach is intended to reduce the number of patients needed to test an experimental drug in a clinical trial, while at the same time increasing its statistical power, according to co-founder and CEO Charles Fisher.
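The article doesn't detail Unlearn's method, but the statistical intuition behind comparing each treated patient with a model-predicted placebo outcome can be sketched with a toy simulation. Every number below, and the assumption that the twin predicts a patient's untreated course perfectly, is invented for illustration:

```python
import random
import statistics

random.seed(42)
TRUE_EFFECT = -2.0  # the (made-up) treatment lowers the outcome by 2 points

def run_trial(n_per_arm, use_twins):
    """Estimate the treatment effect from one simulated trial."""
    if use_twins:
        # Each treated patient is compared with their own predicted placebo
        # outcome; here the twin is assumed to capture the baseline-driven
        # course exactly, so only measurement noise remains in each diff.
        diffs = []
        for _ in range(n_per_arm):
            baseline = random.gauss(10, 3)          # between-patient variation
            twin_prediction = baseline              # predicted untreated outcome
            treated_outcome = baseline + TRUE_EFFECT + random.gauss(0, 1)
            diffs.append(treated_outcome - twin_prediction)
        return statistics.mean(diffs)
    else:
        # Conventional design: a separate placebo arm of real patients.
        treated = [random.gauss(10, 3) + TRUE_EFFECT + random.gauss(0, 1)
                   for _ in range(n_per_arm)]
        placebo = [random.gauss(10, 3) + random.gauss(0, 1)
                   for _ in range(n_per_arm)]
        return statistics.mean(treated) - statistics.mean(placebo)

# Repeat each design many times and compare the spread of the estimates.
conventional = [run_trial(50, use_twins=False) for _ in range(500)]
twin_based = [run_trial(50, use_twins=True) for _ in range(500)]

print(round(statistics.stdev(conventional), 2))  # wide spread of estimates
print(round(statistics.stdev(twin_based), 2))    # much tighter spread
```

Because the twin removes between-patient variability from each comparison, the twin-based estimates cluster far more tightly around the true effect, which is the power gain Fisher describes, and in this idealized setup with no placebo arm at all.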

Speaking on the panel, Fisher said that the duration and expense of clinical development cries out for new approaches that will save on both fronts. He added that new technologies can also reduce the risks faced by patient volunteers.

"We really owe it not only to those patient volunteers, but [also] to all of the patients that are waiting for new treatments, to do this more efficiently," Fisher said.

Atomwise has been applying its AI technology to drug discovery research since 2012. Excitement about AI is driven in part by the potential to address hard-to-reach drug targets: the notion of "drugging the undruggable," said Abraham Heifets, the startup's CEO and another speaker on the panel. AI is offering a way to unlock biology in ways scientists couldn't before, he said.

AI technologies are helping the pharmaceutical industry create a billion molecules a month that can be tested in three to six weeks, Heifets said. But those molecules are useful only if scientists have sophisticated, powerful tools to evaluate them. Tied to all these new molecules are vast amounts of data points, which Heifets characterized as both good and bad. While having more data can help researchers find the answers they seek, bad data can lead them astray.

"We still live in a world of garbage in, garbage out," Heifets said.

Atomwise puts a great amount of effort into data cleaning and data care. Those efforts ensure that a prediction will be borne out by experiment, Heifets said. Even tiny errors, such as a misplaced decimal point or a mistyped word, can lead to problems because the algorithms learn the bad data, Fisher said. Because the pharmaceutical industry is heavily regulated, Unlearn's data processing is done in a way that every step can be traced back. Those measures ensure there is transparency about the data, Fisher said.

Image: iStock/metamorworks

Frank Vinluan is an Xconomy editor based in Research Triangle Park. You can reach him at fvinluan@xconomy.com.


Council of Europe participating in the panel on Accountable AI solutions of the AI for Good Global Summit 2020 – Council of Europe

Posted: at 4:46 pm

On 2 July 2020, Jan Kleijssen, Director of Information Society - Action against Crime, participated in an online panel on accountable AI solutions at the AI for Good Global Summit 2020.

Alongside Aimee Van Wynsberghe (TU Delft), Frits Bussemaker and Arthur Van Der Wees (Institute for Accountability in the Digital Age, I4ADA), Jan Kleijssen drew attention to the need for a clear, proportionate and preferably binding legal framework for artificial intelligence. This legal framework should be based on existing human rights and rule of law standards, as well as on the various ethical guidelines on AI already elaborated.

A joint effort by all stakeholders (international and national, public, private, civil society and academic) will be essential in order to establish such a legal instrument, which could become a cornerstone of the global framework for digital cooperation called for by the High-Level Panel on Digital Cooperation established by the Secretary-General of the United Nations. At the Council of Europe, a specific body of national experts, the Ad Hoc Committee on Artificial Intelligence (CAHAI), has been set up to work on this. In addition, within the Council of Europe, sector-specific and complementary standards are being drawn up in the fields of justice, criminal law, data protection, bio-ethics, education, culture and others.


The DOD needs to define AI and protect its data better, watchdog says – FedScoop

Posted: at 4:45 pm

Written by Jackson Barnett Jul 2, 2020 | FEDSCOOP

What is artificial intelligence, anyway?

It's a question that the Department of Defense should answer, according to a new report by DOD's inspector general. The watchdog says that while parts of the DOD have their own definitions, the department must settle on a standard, establish strong governance structures for the technology, and develop more consistent security controls so as not to put the military's AI technology and other systems at risk.

"Without consistent application of security controls, malicious actors can exploit vulnerabilities on the networks and systems of DoD Components and contractors and steal information related to some of the Nation's most valuable AI technologies," the report states.

The desired security controls appear to be basic, like using strong passwords and monitoring for unusual network activity. Many of the security updates need to happen at service-level offices working on AI, but contractors must also be included in the uniform standards, the IG says.

The report commends the DOD's early work to adopt goals and initiatives, and to incorporate ethics principles into its AI development. But more standardization of that work needs to happen for it to mean something, the IG says. Much of the department-wide standardization and coordination needs to happen in the Joint AI Center (JAIC), the DOD's AI hub.

"As of March 2020, while the JAIC has taken some steps, additional actions are needed to develop and implement an AI governance framework and standards," the report said.

Much of the IG report echoes criticism from a RAND Corporation report on the JAIC. The RAND report detailed a lack of structure in the new office and recommended better coordination across the department, as does the IG report.

Responding to the report, the DOD CIO said that the JAIC has already taken several steps that the IG recommended. They include plans for an AI Executive Steering Group and several other working groups and subcommittees to coordinate work in specific areas, like workforce recruitment and standards, across the department.

"The final report does not completely reflect a number of actions the JAIC took over the past year to enhance DoD-wide AI governance and to accelerate scaling AI and its impact across the DoD," the CIO wrote to the IG.

Data, the fuel that all AI runs on, is still in short (usable) supply. The IG recommended the DOD CIO set up more data-sharing mechanisms. While data sharing will help data-driven projects flourish, the JAIC also needs better visibility into how many AI initiatives are underway across the department.

Currently, the DOD doesn't know how many AI projects its many components have under way. That's a problem if offices like the JAIC are to be a central hub for both AI policy and fielding.

"Without a reliable baseline of active AI projects within the DoD, the JAIC will not be able to effectively execute its mission to maintain an accounting of DoD AI initiatives," the report stated.


3 AI Healthcare Stocks to Ride Out the Coronavirus Resurgence – Yahoo Finance

Posted: at 4:45 pm

The implementation of artificial intelligence and machine learning in the healthcare sector has been rather diverse and extensive over the past few months, owing to the coronavirus pandemic. The reliance on these technologies is growing, primarily because of their ability to simplify the assessment of complicated data and cases. Therefore, now would be the ideal time to look at a few AI stocks from an investment perspective.

Prospects Are Bright for AI in Healthcare

Looking closely at the growth prospects of AI in the healthcare sector, one can find many. According to a report by Markets and Markets, artificial intelligence in the global healthcare market is expected to grow from $4.9 billion in 2020 to $45.2 billion by 2026, at a CAGR of 44.9%.
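Those two figures are consistent with the quoted growth rate; compounding $4.9 billion at 44.9% a year over the six years from 2020 to 2026 lands very close to the projected $45.2 billion:

```python
# Sanity-check the Markets and Markets projection: value * (1 + CAGR)^years
start, cagr, years = 4.9, 0.449, 6   # $4.9B in 2020, 44.9% CAGR, 2020 -> 2026
projected = start * (1 + cagr) ** years
print(round(projected, 1))  # 45.4, in line with the cited $45.2B
```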

AI in Healthcare: Factors Driving it Amid COVID-19

The major reasons for using artificial intelligence in healthcare are the rising volume of healthcare data and increasing complexity of datasets, the need to bring healthcare costs down, improving computing power, and falling hardware costs.

In addition, more cross-industry partnerships and collaborations, and the ever-growing gap between the healthcare workforce and the patient population, are fueling the need for improved healthcare services.

Above all, these factors are driving the need for AI applications in healthcare right now owing to the technology's crucial role in speeding up the development of a vaccine or drug for the coronavirus.

To understand just how impactful the use of AI in healthcare is, let's consider the data reported by Toronto-based AI platform BlueDot on New Year's Eve. According to Wired, the platform picked up a cluster of unusual pneumonia cases in Wuhan, China.

The platform uses natural language processing and ML to track, locate and report on the spread of infectious diseases. This cluster of unusual pneumonia cases was reported as COVID-19 by the World Health Organization nine days after it was spotted by BlueDot. The data clearly helped warn the world of the emerging global health crisis.

Over the past few months, AI has been used for predicting, screening, alerting, faster diagnosis, automated deliveries and laboratory drug discovery. With the pandemic still rampant around the world as new cases emerge every day, several new implementations of AI and ML are being deployed by companies and countries to bring the pandemic under control by exploring every possible use of the technology.

3 Stocks to Buy

We have, therefore, chosen three stocks that have extensive AI operations in the healthcare sector. All these stocks carry a Zacks Rank #1 (Strong Buy) or 2 (Buy). You can see the complete list of today's Zacks #1 Rank stocks here.

QIAGEN N.V. QGEN is a provider of insight solutions that transform biological materials into molecular insights. QIAGEN has an expected earnings growth rate of 21.7% for the current year. The company, which carries a Zacks Rank #1, belongs to the Zacks Medical - Biomedical and Genetics industry. The Zacks Consensus Estimate for the company's current-year earnings has moved 26.1% north in the past 60 days.

Thermo Fisher Scientific Inc. TMO is a provider of analytical instruments, laboratory equipment, software, reagents, instrument systems, consumables, chemicals, supplies, and services. Thermo Fisher Scientific has an expected earnings growth rate of 15.9% for next year. The company, which carries a Zacks Rank #2, belongs to the Zacks Medical - Instruments industry. The Zacks Consensus Estimate for the company's current-year earnings has moved 0.2% north in the past 60 days.

Teladoc Health, Inc. TDOC is a provider of virtual healthcare services on a business-to-business basis. Teladoc Health has an expected earnings growth rate of 35.9% for next quarter. The company, which carries a Zacks Rank #2, belongs to the Zacks Medical Services industry. The Zacks Consensus Estimate for the company's current-year earnings has moved 1.8% north in the past 60 days.

More Stock News: This Is Bigger than the iPhone!

It could become the mother of all technological revolutions. Apple sold a mere 1 billion iPhones in 10 years but a new breakthrough is expected to generate more than 27 billion devices in just 3 years, creating a $1.7 trillion market.



MIT removes huge dataset that teaches AI systems to use racist, misogynistic slurs – The Next Web

Posted: at 4:45 pm

MIT has taken offline a massive and highly cited dataset that trained AI systems to use racist and misogynistic terms to describe people, The Register reports.

The training set, called 80 Million Tiny Images (as that's how many labeled images it scraped from Google Images), was created in 2008 to develop advanced object detection techniques. It has been used to teach machine-learning models to identify the people and objects in still images.

As The Register's Katyanna Quach wrote: "Thanks to MIT's cavalier approach when assembling its training set, though, these systems may also label women as whores or bitches, and Black and Asian people with derogatory language." The database also contained close-up pictures of female genitalia labeled with the C-word.

The Register managed to get a screenshot of the dataset before it was taken offline:

Credit: The Register

80 Million Tiny Images' serious ethical shortcomings were discovered by Vinay Prabhu, chief scientist at privacy startup UnifyID, and Abeba Birhane, a PhD candidate at University College Dublin. They revealed their findings in the paper "Large image datasets: A pyrrhic win for computer vision?", which is currently under peer review for the 2021 Workshop on Applications of Computer Vision conference.

The damage from such ethically dubious datasets reaches far beyond bad taste; the dataset has been fed into neural networks, teaching them to associate images with words. This means any AI model that uses the dataset is learning racism and sexism, which could result in sexist or racist chatbots, racially biased software, and worse. Earlier this year, Robert Williams was wrongfully arrested in Detroit after a facial recognition system mistook him for another Black man. Garbage in, garbage out.

As Quach wrote: "The key problem is that the dataset includes, for example, pictures of Black people and monkeys labeled with the N-word; women in bikinis, or holding their children, labeled whores; parts of the anatomy labeled with crude terms; and so on, needlessly linking everyday imagery to slurs and offensive language, and baking prejudice and bias into future AI models."

After The Register alerted the university about the training set this week, MIT removed the dataset and urged researchers and developers to stop using the training library and to delete all copies of it. The university also published an official statement and apology on its site:

It has been brought to our attention that the Tiny Images dataset contains some derogatory terms as categories and offensive images. This was a consequence of the automated data collection procedure that relied on nouns from WordNet. We are greatly concerned by this and apologize to those who may have been affected.

The dataset is too large (80 million images) and the images are so small (32 x 32 pixels) that it can be difficult for people to visually recognize its content. Therefore, manual inspection, even if feasible, will not guarantee that offensive images can be completely removed.

We therefore have decided to formally withdraw the dataset. It has been taken offline and it will not be put back online. We ask the community to refrain from using it in future and also delete any existing copies of the dataset that may have been downloaded.
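MIT's scale argument is easy to sanity-check: even at an assumed (and optimistic) review rate of two seconds per image, one person screening all 80 million images would need over five years of uninterrupted work.

```python
# Back-of-envelope check on the infeasibility of manual inspection.
images = 80_000_000
seconds_per_image = 2                       # assumed review rate
hours = images * seconds_per_image / 3600   # total review time in hours
years_nonstop = hours / 24 / 365            # years with no sleep or breaks
print(round(years_nonstop, 1))  # 5.1 years of continuous screening
```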

Examples of AI showing racial and gender bias and discrimination are numerous. As TNW's Tristan Greene wrote last week: "All AI is racist. Most people just don't notice it unless it's blatant and obvious."

"But AI isn't a racist being like a person. It doesn't deserve the benefit of the doubt, it deserves rigorous and constant investigation. When it recommends higher prison sentences for Black males than whites, or when it can't tell the difference between two completely different Black men, it demonstrates that AI systems are racist," Greene wrote. "And, yet, we still use these systems."

Published July 1, 2020 18:36 UTC


Remdesivir's controversial cost, early vaccine data, and AI at the end of life – STAT

Posted: at 4:45 pm

What's a fair price for remdesivir? How do we know whether vaccines work? And does AI have a place in end-of-life care?

We discuss all that and more this week on The Readout LOUD, STAT's biotech podcast. First, we dig into the long-awaited price for Gilead Sciences' Covid-19 treatment and break down the disparate reactions from lawmakers, activists, and Wall Street analysts. Then, STAT's Matthew Herper joins us to discuss some of the first detailed data on a potential vaccine for the novel coronavirus. Finally, we talk about a new use for AI: nudging clinicians to broach delicate conversations with patients about their end-of-life goals and wishes.

For more on what we cover, here's the remdesivir news; here's more on the vaccine data; here's the story on AI; and here's the latest in STAT's coronavirus coverage.


We'll be back next Thursday evening, and every Thursday evening, so be sure to sign up on Apple Podcasts, Stitcher, Google Play, or wherever you get your podcasts.

And if you have any feedback for us (topics to cover, guests to invite, vocal tics to cease) you can email readoutloud@statnews.com.


Interested in sponsoring a future episode of The Readout LOUD? Email us at marketing@statnews.com.


Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling? – Spend Matters

Posted: at 4:45 pm

Just as there are eight levels to analytics, as mentioned in a recent Spend Matters PRO brief, artificial intelligence (AI) comes in various stages today, even though there is no such thing as true AI by any standard worth its technical weight.

But just because we don't yet have true AI doesn't mean today's AI can't help procurement improve its performance. We just need enough computational intelligence to allow software to do the tactical and non-value-added tasks that software should be able to perform with all of the modern computational power available to us. As long as the software can do the tasks as well as an average human expert the vast majority of the time (and kick up a request for help when it doesn't have enough information, or when a human expert is more likely to perform the task correctly), that's more than good enough.
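The escalation behavior described above, where software acts only when it is confident enough and otherwise hands the task to a human expert, is a standard human-in-the-loop pattern. A minimal sketch, with a hypothetical toy classifier and an arbitrary 0.9 threshold (both made up for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float
    escalated: bool

def route(predict: Callable[[str], tuple[str, float]], item: str,
          threshold: float = 0.9) -> Decision:
    """Accept the model's answer only when it beats the threshold;
    otherwise flag the item for review by a human expert."""
    label, confidence = predict(item)
    if confidence >= threshold:
        return Decision(label, confidence, escalated=False)
    return Decision("needs-human-review", confidence, escalated=True)

# Hypothetical invoice classifier standing in for a real model.
def toy_invoice_model(text: str) -> tuple[str, float]:
    if "invoice" in text.lower():
        return ("invoice", 0.97)
    return ("unknown", 0.55)

print(route(toy_invoice_model, "Invoice #1234 due July 31").escalated)  # False
print(route(toy_invoice_model, "hello?").escalated)                     # True
```

In practice the threshold would be tuned against the expert's own error rate, so the software only acts when it is at least as reliable as the human.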

The reality is, for some basic tactical tasks, there are plenty of software options today (e.g., intelligent invoice processing). And even for some highly specialized tasks that we thought could never be done by a computer, we have software that can do it better, like early cancerous growth detection in MRIs and X-rays.

That being said, we also have a lot of software on the market that claims to be artificial intelligence but is not even remotely close to what AI is today, let alone what useful AI software should be. For software to be classified as AI today, it must be capable of artificial learning: evolving its models or code to improve over time.

So, in this PRO article, we are going to define the levels of AI that exist today and that may exist tomorrow. This will allow you to identify what truth there is to the claims a vendor is making and whether the software will actually be capable of doing what you expect it to.

Not counting true AI, there are five levels of AI that are available today or will likely be available tomorrow:

Let's take a look at each group.


China and AI: What the World Can Learn and What It Should Be Wary of – Nextgov

Posted: at 4:45 pm

China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.

The move has led, at least in the West, to warnings of a global AI arms race and concerns about the growing reach of China's authoritarian surveillance state. But treating China as a villain in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government's approach to AI that are highly concerning and rightly should be condemned, it's important that this does not cloud all analysis of China's AI innovation.

The world needs to engage seriously with China's AI development and take a closer look at what's really going on. The story is complex, and it's important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.

Nesta has explored the broad spectrum of AI activity in China: the good, the bad and the unexpected.

The Good

China's approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications that can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its ageing population.

Applications include AI doctor chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.

Since the outbreak of COVID-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to try and combat the virus by speeding up screening, diagnosis and new drug development. AI tools used in Wuhan, China, to tackle COVID-19 by helping accelerate CT scan diagnosis are now being used in Italy and have also been offered to the NHS in the UK.

The Bad

But there are also elements of China's use of AI that are seriously concerning. Positive advances in practical AI applications that benefit citizens and society don't detract from the fact that China's authoritarian government is also using AI and citizens' data in ways that violate privacy and civil liberties.

Most disturbingly, reports and leaked documents have revealed the government's use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China's Xinjiang province.

The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.

In Shanghai's smart court system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool's potential biases, the quality of the data and the soundness of the algorithm, making it hard for them to challenge the decisions made.

China's experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens' digital rights.

The Unexpected

Commentators have often interpreted the State Council's 2017 Artificial Intelligence Development Plan as an indication that China's AI mobilisation is a top-down, centrally planned strategy.

But a closer look at the dynamics of China's AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.

Beyond the thriving major cities of Beijing, Shanghai and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an AI Town, clustering together the tech company Alibaba, Zhejiang University and local businesses to work collaboratively on AI development. China's local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.

China's accelerating AI innovation deserves the world's full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand and learn from the nuances of what's really happening.

Hessy Elliott is a researcher at Nesta.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Visit link:

China and AI: What the World Can Learn and What It Should Be Wary of - Nextgov


Machines That Can Understand Human Speech: The Conversational Pattern Of AI – Forbes

Posted: at 4:45 pm

Early on in the evolution of artificial intelligence, researchers realized the power and possibility of machines that can understand the meaning and nuances of human speech. Conversation and human language are particularly challenging areas for computers, since words and communication are not precise. Human language is filled with nuance, context, cultural and societal depth, and imprecision that can lead to a wide range of interpretations. If computers can understand what we mean when we talk, and then communicate back to us in a way we can understand, then clearly we've accomplished a goal of artificial intelligence.

Conversational interaction as a pattern of AI


This particular application of AI is so profound that it makes up one of the fundamental seven patterns of AI: the conversation and human interaction pattern. The fundamental goal of the conversational pattern is to enable machines to communicate with humans in natural human language, and for machines to communicate back to humans in the language they understand. Instead of requiring humans to conform to machine modes of interaction such as typing, swiping, clicking, or using computer programming languages, the power of the conversational pattern is that we can interact with machines the way we interact with each other: by speaking, writing, and communicating in a way that our brains have already been wired to understand.

Many of today's narrow applications of AI are focused on human communication. If a computer can understand what a human means when they communicate, we can create all manner of applications of practical value: from chatbots and conversational agents, to systems that can read what we write in our documents and emails, and even systems that can accurately translate from one human language to another without losing meaning and context.

Machine-to-human, machine-to-machine, and human-to-machine interactions are all examples of how AI communicates and understands human communication. Some real-life examples include voice assistants, content generation, chatbots, sentiment analysis, mood analysis, intent analysis, and machine-powered translation. The applications of the conversational pattern are so broad that entire market sectors are focused on the use of AI-enabled conversational systems, from conversational finance to telemedicine and beyond. Beyond simply understanding written or spoken language, the power of the conversational pattern of AI can be seen in a machine's ability to understand sentiment, mood, and intent, or to take visual gestures and translate them into machine-understandable forms.
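To make sentiment analysis concrete, here is a minimal lexicon-based sketch in Python. The word lists are invented for illustration and are nothing like a real sentiment lexicon; production systems use trained statistical or neural models rather than word counting:

```python
# Toy sentiment scorer: count positive vs. negative words and label
# the text. The lexicons below are illustrative, not from a real model.
POSITIVE = {"great", "love", "helpful", "excellent", "good"}
NEGATIVE = {"bad", "hate", "confusing", "poor", "terrible"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this helpful assistant"))   # positive
print(sentiment("the menu was confusing and bad"))  # negative
```

Even this crude approach shows the shape of the task: map free-form text onto a small set of machine-usable labels.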

Natural Language Processing: evolving over the past few decades

Accurately processing and generating human language is particularly complicated, with constant technology evolution happening over the past sixty years. One of the easier problems to solve is the conversion of audio waveforms into machine-readable text, known as Automatic Speech Recognition (ASR). While ASR is somewhat complicated to implement, it generally doesn't need machine learning or AI capabilities, and some fairly accurate speech-to-text technologies have been around for decades. Speech-to-text is not natural language understanding: while the computer is transcribing what the human is saying, it is only taking waveforms it recognizes and converting them to words. It is not interpreting the data it is hearing.

The inverse capability, text-to-speech, also doesn't require much in the way of machine learning or AI. Text-to-speech is simply the generation of waveforms by the computer to speak words that are already known; there is no understanding of the meaning of those words. The technology behind text-to-speech has been around for years: you can hear it in the movie WarGames (1983): "Shall we play a game?"

However, speech-to-text and text-to-speech aren't where AI and machine learning are needed, even though machine learning has helped text-to-speech become more human-sounding and speech-to-text more accurate. Natural language processing (NLP) involves more than transcription of waveforms and generation of audio. Just because you have text doesn't mean that machines can understand it. To gain that understanding, machines need to be able to identify parts of speech, extract and understand entities, determine the meanings of words, and use much more complicated processing to connect words, phrases, concepts, and grammar into the larger picture of intent and meaning.
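The first steps named above, splitting text into tokens and labeling parts of speech, can be sketched with a toy rule-based tagger. The tag dictionary here is invented for illustration; real taggers are statistical models trained on annotated corpora:

```python
# Toy tokenizer plus dictionary-based part-of-speech tagger.
# The TAGS dictionary is illustrative only; real taggers learn from data.
import re

TAGS = {"the": "DET", "a": "DET", "dog": "NOUN", "cat": "NOUN",
        "sees": "VERB", "chases": "VERB"}

def tokenize(text):
    # Lowercase and pull out alphabetic word tokens.
    return re.findall(r"[a-z]+", text.lower())

def tag(tokens):
    # Label each token, falling back to UNK for unknown words.
    return [(t, TAGS.get(t, "UNK")) for t in tokens]

print(tag(tokenize("The dog chases a cat")))
# [('the', 'DET'), ('dog', 'NOUN'), ('chases', 'VERB'), ('a', 'DET'), ('cat', 'NOUN')]
```

Tokens and tags like these are the raw material the later semantic steps operate on.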

Natural language processing consists of two parts: natural language understanding and natural language generation. Natural language understanding is where a computer interprets human input, such as voice or text, and translates it into something the machine can use in the intended manner. It consists of many subdomains aimed at extracting intent from text, whether transcribed from audio or typed by humans in text-mode interactions such as chatbots or messaging interfaces. Regardless of the approach used, most natural language understanding systems share some common components. AI is applied to lexical parsing to apply grammar rules and break sentences into structural components. Once the components are identified, each piece can be semantically analyzed to interpret words based on context and word order. Further logical analysis and deduction can then determine meaning based on what the various parts refer to, using knowledge graphs and other methods.
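The intent-extraction step of natural language understanding can be illustrated with a minimal keyword-overlap recognizer. The intent names and keyword sets below are invented for the sketch; real systems use trained classifiers rather than keyword matching:

```python
# Minimal keyword-based intent recognizer: pick the intent whose
# keyword set overlaps the utterance most. Intents are invented here.
INTENTS = {
    "check_balance": {"balance", "account"},
    "book_flight": {"flight", "fly", "ticket"},
    "greeting": {"hello", "hi", "hey"},
}

def detect_intent(utterance: str) -> str:
    words = set(utterance.lower().split())
    best, best_overlap = "unknown", 0
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

print(detect_intent("I want to book a flight to Paris"))  # book_flight
print(detect_intent("hello there"))                       # greeting
```

The "unknown" fallback matters in practice: a conversational system must recognize when an utterance maps to none of its intents.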

Natural language generation is the process by which the AI prepares communication for humans in a form that is natural and does not sound like it was made by a computer. For a computer process to be considered natural language generation, the computer actually has to interpret content and understand its meaning in order to communicate effectively. This involves the reverse of many of the steps in natural language understanding: taking concepts and generating human-understandable conversation from how the machine understands the way humans communicate.
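The simplest form of the generation side is template filling: turning a structured meaning representation back into a natural-sounding sentence. The frame types and field names below are invented for illustration; modern systems increasingly generate text with neural language models instead of templates:

```python
# Sketch of template-based natural language generation: a structured
# "meaning frame" is rendered into a sentence. Frame fields are invented.
def generate(frame: dict) -> str:
    templates = {
        "weather_report": "It is {temp} degrees and {condition} in {city}.",
        "confirmation": "Your {item} has been {action}.",
    }
    return templates[frame["type"]].format(**frame)

print(generate({"type": "weather_report", "temp": 21,
                "condition": "sunny", "city": "London"}))
# It is 21 degrees and sunny in London.
```

Templates are rigid but predictable, which is why they still appear in production assistants where wording must be exact.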

Why is machine-facilitated conversation so important?

The pattern of human and computer communication is receiving so much focus because our interactions with systems can be very difficult at times. Typing or swiping can take time and fail to communicate our needs properly, while reading static content like an FAQ might not be helpful for most customers. People want to interact with machines efficiently and effectively. Many user interfaces are quite suboptimal for human interaction, requiring confusing menu navigation, interactive voice response systems that are too simplistic, or rules-based chatbots that fail to satisfy user needs.

Development of more intelligent conversational systems goes back decades, with the ELIZA chatbot first developed in 1966 as an illustration of the possibilities of machine-mediated conversation. Nowadays, users are more familiar with voice assistants such as Alexa, Google Assistant, Apple Siri, and Microsoft Cortana, and with web-based chatbots. However, if you've interacted with any of them recently, they are still lacking in understanding in many significant ways. There's no doubt that much of the work of AI researchers is going into improving the ways that machines can understand and generate human language, and thus reinforcing the power of those applications that leverage the conversational pattern of AI.
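The rule-based approach ELIZA pioneered can be sketched in a few lines: match the input against patterns and reflect it back. The rules below are invented in the spirit of the 1966 original, not taken from it; modern assistants replace such rules with learned models:

```python
# ELIZA-style responder: pattern-match the input and echo it back in
# question form. The rule list is a tiny illustrative subset.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
]

def respond(text: str) -> str:
    t = text.lower().strip()
    for pattern, template in RULES:
        m = re.fullmatch(pattern, t)
        if m:
            return template.format(*m.groups())
    # Catch-all keeps the conversation moving when no rule fires.
    return "Please tell me more."

print(respond("I need a vacation"))     # Why do you need a vacation?
print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
```

The brittleness is obvious: any phrasing outside the rule list falls through to the catch-all, which is exactly the limitation that drives today's machine-learning-based conversational research.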

Follow this link:

Machines That Can Understand Human Speech: The Conversational Pattern Of AI - Forbes
