Artificial intelligence software improves accuracy, doubles speed in evaluating CT scans of advanced cancer – UAB News

An O'Neal Comprehensive Cancer Center scientist presents at a major oncology meeting about a novel artificial intelligence software tool to assist in evaluating tumor response in advanced cancers.

In a multi-institution study presented this week at the annual meeting of the American Society of Clinical Oncology, researchers from the O'Neal Comprehensive Cancer Center at the University of Alabama at Birmingham compared the current practice of evaluating tumor response in advanced cancer with an artificial intelligence software tool designed to assist radiologists.

"It turns out that human-guided AI is more accurate, more reproducible and faster," said Andrew Smith, M.D., Ph.D., associate professor, vice chair of Clinical Research and co-director of AI in the UAB Department of Radiology, and director of the Tumor Metrics Lab, a component of the O'Neal Cancer Center's Human Imaging shared resource, who presented the findings today at ASCO.

The Tumor Metrics Lab conducts image interpretation for all cancer patients on clinical trials in the O'Neal Cancer Center who have tumor imaging to determine their response to treatment. The lab does more than 1,000 tumor metric reads per year.

Doctors track the progress of tumors using computed tomography scans. Radiologists measure the tumors manually on digital images of those scans and usually dictate their findings into text-based reports. But the group of researchers behind the new study hypothesized that this traditional system could be improved with some assistance from artificial intelligence. They used AI Mass, a cancer-specific implementation of the medical AI software platform AI Metrics, trained with more than 15,000 expert-labeled images.

"AI Mass uses AI to, one, measure tumors after a single mouse click; two, automatically label the anatomic location of tumors; and three, track tumors over time," Smith said. AI Metrics is a product of a startup company of the same name, with Smith as CEO, that was spun off from UAB in 2019 and is now raising a seed round of capital.

In the study presented at ASCO by Smith, body CT images from 120 consecutive patients with advanced cancer were independently evaluated by 24 radiologists. The patients all had multiple serial imaging exams and had been treated with systemic therapy. Each radiologist categorized treatment responses and dictated text-based reports. Meanwhile, the AI-assisted software automatically calculated percent changes in tumor burden and categorized treatment response using standardized methods commonly found in clinical trials. A team of researchers looked for major errors, such as incorrect measurements, erroneous language in reports or misidentification of the tumor location; time spent in image interpretation; and inter-reader agreement about the final tumor response. Twenty oncologic providers then evaluated the accuracy of the manually dictated text reports versus AI-assisted reports that included a graph, table and key images.
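The "standardized methods" used to categorize treatment response in clinical trials are typically the RECIST 1.1 criteria, which classify response from the percent change in the sum of target-lesion diameters. The sketch below illustrates only that classification step; it is a simplified, hypothetical example (function and variable names invented), not the AI Metrics implementation, and it ignores new lesions and non-target disease that RECIST also considers.

```python
def categorize_response(baseline_sum_mm, nadir_sum_mm, current_sum_mm):
    """Classify tumor response per RECIST 1.1 from sums of target-lesion
    diameters (mm). Simplified: ignores new lesions and non-target disease."""
    if current_sum_mm == 0:
        return "Complete Response"
    change_from_baseline = (current_sum_mm - baseline_sum_mm) / baseline_sum_mm
    change_from_nadir = (current_sum_mm - nadir_sum_mm) / nadir_sum_mm
    # Progression: >=20% increase from the smallest prior sum (nadir),
    # plus an absolute increase of at least 5 mm.
    if change_from_nadir >= 0.20 and (current_sum_mm - nadir_sum_mm) >= 5:
        return "Progressive Disease"
    # Partial response: >=30% decrease from baseline.
    if change_from_baseline <= -0.30:
        return "Partial Response"
    return "Stable Disease"

print(categorize_response(baseline_sum_mm=82, nadir_sum_mm=82, current_sum_mm=51))
# -> "Partial Response" (a 37.8% decrease from baseline)
```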

"The AI-assisted approach increased accuracy by 25 percent, reduced major errors by 99 percent, was nearly two times faster than current practice methods, and improved inter-reader agreement by 45 percent," Smith said. "The only error by the AI-assisted software was a freeform text note that we could not interpret," he noted. "All of the tumor measurements, percent changes, etc., were correct." Smith, who is owner of AI Metrics as well as its CEO, did not directly participate in data gathering, have access to study data or conduct any of the statistical analysis.

"It is gratifying to see such a practical application of artificial intelligence," said Cheri Canon, M.D., professor, chair and Witten-Stanley Endowed Chair of the Department of Radiology. "Seldom do we hear of such overwhelmingly positive results from a study: 99 percent reduction in major errors. The impact this will have on patients, specifically cancer patients, will be far-reaching."

The work has other benefits, Canon noted. "These include improved workflow for the radiologists, who are now more than ever burdened with complex imaging studies and increased incidence of burnout, and, for our clinical colleagues, a clear and concise longitudinal report. This is a monumental improvement to the current standard of care and will in fact set a new standard."

The project involves collaborators from 21 institutions and three small businesses, including AI Metrics. The AI behind the software was trained using carefully annotated images from UAB and the National Institutes of Health by a team of UAB clinical research scientists, Smith says.

"We drew freeform shapes around the edge of each tumor to train the AI to do the same," he explained. "We also labelled the anatomic location of all kinds of tumors located across different parts of the body. We were able to train the AI to provide an anatomic location of the tumor. That had never been done before."

"In practice, the user guides the AI, but the AI does the measuring and labeling," Smith said. "We can extract the measurements in a digital form. Because the data is digital, we can generate a graph or table, and we can even save key images of all image findings. The AI-assisted reports are a major leap beyond a text-based report."

Importantly, radiologists work with the AI throughout the process, Smith says.

"Let's say there is a tumor in the liver that needs to be followed over time," he said. "In our AI Metrics software, the user simply clicks on a lesion and a first AI algorithm measures it. The user can change it within two seconds if something is wrong. As you can imagine, the AI is more reliable than having different radiologists do this manually. This is what we call transparent AI, where the user both directs the AI and can check it. Then a second AI algorithm provides the anatomic location. The user can easily check and correct this as well within about two seconds."

As part of the study, radiologists and 20 oncologic providers were asked to rate the experience of using the AI-assisted software and the value of the AI-assisted reports.

"The AI-assisted software was preferred by 96 percent of radiologists, and the AI-assisted reports were preferred by 100 percent of oncologic providers," Smith said. "We have established a new standard of care with AI. I think that having this software could save lives, though we don't yet have that kind of data."

Smith says the team is not done.

"Since the study, we re-trained the AI on 55,000 tumors, and we hope to get closer to 100,000 in the coming months," he said. "That is more tumors than an average radiologist measures in a lifetime. This is how we leverage the power of AI." The researchers are now writing an NIH grant to take their work into cancer screening and early cancer detection and management, Smith says. "Most cancer therapies apply to only a few cancers or even a subtype of a single cancer. This technology applies to all solid cancers imaged on CT and MRI. We can apply this technology to many other stages of cancer."

Read an abstract of the study, "Multi-institutional comparative effectiveness of advanced cancer longitudinal imaging response evaluation methods: Current practice versus artificial intelligence-assisted," on ASCO's website.

Coronavirus tests the value of artificial intelligence in medicine – FierceHealthcare

Albert Hsiao, M.D., Ph.D., and his colleagues at the University of California San Diego (UCSD) health system had been working for 18 months on an artificial intelligence program designed to help doctors identify pneumonia on a chest X-ray.

When the coronavirus hit the U.S., they decided to see what it could do.

The researchers quickly deployed the application, which dots X-ray images with spots of color where there may be lung damage or other signs of pneumonia. It has now been applied to more than 6,000 chest X-rays, and it's providing some value in diagnosis, said Hsiao, director of UCSD's augmented imaging and artificial intelligence data analytics laboratory.

His team is one of several around the country that have pushed AI programs developed in a calmer time into the COVID-19 crisis to perform tasks like deciding which patients face the greatest risk of complications and which can be safely channeled into lower-intensity care.

The machine-learning programs scroll through millions of pieces of data to detect patterns that may be hard for clinicians to discern. Yet few of the algorithms have been rigorously tested against standard procedures. So while they often appear helpful, rolling out the programs in the midst of a pandemic could be confusing to doctors or even dangerous for patients, some AI experts warn.

"AI is being used for things that are questionable right now," said Eric Topol, M.D., director of the Scripps Research Translational Institute and author of several books on health IT.

Topol singled out a system created by Epic, a major vendor of electronic health records software, that predicts which coronavirus patients may become critically ill. Using the tool before it has been validated is "pandemic exceptionalism," he said.

Epic said the company's model had been validated with data from more than 16,000 hospitalized COVID-19 patients in 21 healthcare organizations. No research on the tool has been published, but, in any case, it was developed to help clinicians make treatment decisions and is not a substitute for their judgment, said James Hickman, a software developer on Epic's cognitive computing team.

Others see the COVID-19 crisis as an opportunity to learn about the value of AI tools.

"My intuition is it's a little bit of the good, bad and ugly," said Eric Perakslis, Ph.D., a data science fellow at Duke University and former chief information officer at the Food and Drug Administration. "Research in this setting is important."

Nearly $2 billion poured into companies touting advancements in healthcare AI in 2019. Investments in the first quarter of 2020 totaled $635 million, up from $155 million in the first quarter of 2019, according to digital health technology funder Rock Health.

At least three healthcare AI technology companies have made funding deals specific to the COVID-19 crisis, including Vida Diagnostics, an AI-powered lung-imaging analysis company, according to Rock Health.

Overall, AI's implementation in everyday clinical care is less common than hype over the technology would suggest. Yet the coronavirus crisis has inspired some hospital systems to accelerate promising applications.

UCSD sped up its AI imaging project, rolling it out in only two weeks.

Hsiao's project, with research funding from Amazon Web Services, the University of California and the National Science Foundation, runs every chest X-ray taken at its hospital through an AI algorithm. While no data on the implementation has been published yet, doctors report that the tool influences their clinical decision-making about a third of the time, said Christopher Longhurst, M.D., UCSD Health's chief information officer.

"The results to date are very encouraging, and we're not seeing any unintended consequences," he said. "Anecdotally, we're feeling like it's helpful, not hurtful."

AI has advanced further in imaging than other areas of clinical medicine because radiological images have tons of data for algorithms to process, and more data makes the programs more effective, said Longhurst.

But while AI specialists have tried to get AI to do things like predict sepsis and acute respiratory distress (researchers at Johns Hopkins University recently won a National Science Foundation grant to use it to predict heart damage in COVID-19 patients), it has been easier to plug it into less risky areas such as hospital logistics.

In New York City, two major hospital systems are using AI-enabled algorithms to help them decide when and how patients should move into another phase of care or be sent home.

At Mount Sinai Health System, an artificial intelligence algorithm pinpoints which patients might be ready to be discharged from the hospital within 72 hours, said Robbie Freeman, vice president of clinical innovation at Mount Sinai. Freeman described the AI's suggestion as a "conversation starter," meant to help assist clinicians working on patient cases decide what to do. AI isn't making the decisions.

NYU Langone Health has developed a similar AI model. It predicts whether a COVID-19 patient entering the hospital will suffer adverse events within the next four days, said Yindalon Aphinyanaphongs, M.D., Ph.D., who leads NYU Langones predictive analytics team.

The model will be run in a four- to six-week trial with patients randomized into two groups: one whose doctors will receive the alerts, and another whose doctors will not. The algorithm should help doctors "generate a list of things that may predict whether patients are at risk for complications after they're admitted to the hospital," Aphinyanaphongs said.

Some health systems are leery of rolling out a technology that requires clinical validation in the middle of a pandemic. Others say they didn't need AI to deal with the coronavirus.

Stanford Health Care is not using AI to manage hospitalized patients with COVID-19, said Ron Li, M.D., the center's medical informatics director for AI clinical integration. The San Francisco Bay Area hasn't seen the expected surge of patients who would have provided the mass of data needed to make sure AI works on a population, he said.

Outside the hospital, AI-enabled risk factor modeling is being used to help health systems track patients who arent infected with the coronavirus but might be susceptible to complications if they contract COVID-19.

At Scripps Health in San Diego, clinicians are stratifying patients to assess their risk of getting COVID-19 and experiencing severe symptoms using a risk-scoring model that considers factors like age, chronic conditions and recent hospital visits. When a patient scores 7 or higher, a triage nurse reaches out with information about the coronavirus and may schedule an appointment.
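The article does not give Scripps' actual formula, but a threshold-based score of the kind described can be sketched as a simple additive function of the named factors. Everything below, the weights especially, is invented for illustration; only the score-of-7 outreach threshold comes from the article.

```python
def covid_risk_score(age, chronic_conditions, recent_hospital_visits):
    """Hypothetical additive risk score; weights are invented for
    illustration and are NOT the Scripps Health model."""
    score = 0
    if age >= 65:
        score += 3
    elif age >= 50:
        score += 2
    score += 2 * min(chronic_conditions, 3)   # cap the contribution
    score += min(recent_hospital_visits, 2)
    return score

patient_score = covid_risk_score(age=72, chronic_conditions=2, recent_hospital_visits=1)
if patient_score >= 7:  # the outreach threshold mentioned in the article
    print("Flag for triage-nurse outreach, score =", patient_score)
```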

Though emergencies provide unique opportunities to try out advanced tools, it's essential for health systems to ensure doctors are comfortable with them, and to use the tools cautiously, with extensive testing and validation, Topol said.

"When people are in the heat of battle and overstretched, it would be great to have an algorithm to support them," he said. "We just have to make sure the algorithm and the AI tool isn't misleading, because lives are at stake here."

This KHN story first published on California Healthline, a service of the California Health Care Foundation. Kaiser Health News is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation, which is not affiliated with Kaiser Permanente.

The Use of Artificial Intelligence by Investment Advisers: Considerations Based on an Adviser's Fiduciary Duties – JD Supra

Artificial intelligence (AI) is an increasingly important technology within the investment management industry.[1] AI has been used in a variety of ways, including as the newest strategy for attempts to "beat the market" by outperforming passive index funds that are benchmarked against the S&P 500, despite the long-standing finding that index funds consistently win that contest.[2]

Investment advisers who use AI should consider the unique issues the technology raises in light of an adviser's fiduciary duty to its clients. In this client alert, we provide an overview of how AI is being used by investment advisers, the fiduciary duties applicable to investment advisers, and particular issues advisers should consider in designing AI-based programs, to ensure they are acting in the best interests of their clients.[3]

How Artificial Intelligence Is Being Adopted by Investment Advisers

AI is currently used by investment advisers in a variety of innovative ways:

Issues Raised by an Investment Adviser's Fiduciary Duties

Under federal law, an investment adviser is a fiduciary to its clients.[8] An adviser's fiduciary duty involves a duty of care and a duty of loyalty, which, although not defined specifically in the Investment Advisers Act of 1940 (Advisers Act), have been addressed and developed through U.S. Securities and Exchange Commission (SEC) interpretive releases and guidance, as well as case law.[9] As discussed below, these duties have implications for an adviser's use of AI. The specific obligations required by an adviser's fiduciary duty will depend upon what functions the adviser has agreed to assume for the client.[10] While the SEC has not provided specific guidance for advisers using AI, current guidance raises unique considerations for advisers to weigh.[11]

Duty of Loyalty

The duty of loyalty requires investment advisers not to place their own interest ahead of their clients' interests.[12] An adviser must make full and fair disclosure to its client of all material facts relating to the advisory relationship and employ reasonable care to avoid misleading clients. Information provided to clients must be sufficiently specific so that a client is able to understand the investment adviser's business practices and conflicts of interest.

An adviser's duty of loyalty raises, among others, the following issues with respect to AI-based investment management programs:

What facts does an adviser need to disclose about its use of AI?

Advisers should consider disclosing information such as the following:

How should advisers think about the tension between disclosure obligations and confidentiality regarding proprietary technologies?

It is important for advisers to disclose enough information for investors to make an informed decision about engaging, and then managing the relationship with, the investment adviser. Advisers should be careful to not mislead clients, and information provided to clients should be sufficiently specific so that a client is able to understand the investment adviser's business practices. However, highly technical information about the process behind the AI's decisions might not be beneficial to a client's understanding of the adviser's platform.

Does an adviser need to disclose the historical success rate of returns from using artificial intelligence?

Historically, funds that employ AI have not outperformed the S&P 500.[13] Investment advisers might therefore be expected to provide disclosures indicating that an adviser has not conclusively proven AI's ability to predict securities prices and may not "beat the market."[14]

Duty of Care

The duty of care includes, among other things, the duty to provide advice appropriate for the client and the duty to monitor a client's investments, and the ongoing suitability of those investments, over the course of the relationship.[15] An adviser must develop a reasonable understanding of the client's objectives and have a reasonable belief that the advice it provides is in the best interest of the client, based on the client's portfolio and objectives.

An adviser's duty of care raises, among others, the following issues with respect to AI-based investment management programs:

Can an adviser replace traditional suitability assessments with alternative data or other AI-based tools?

If an AI-based system makes investment choices on behalf of clients using deep or machine learning that develops on its own, by tracking client behavior, or by using alternative data, the adviser should pay particularly close attention to how the recommendations generated by those data might differ or be in conflict with a client's explicit preferences and investment objectives. It is possible that an adviser using AI-based tools will make different assessments of what is best or appropriate for the client than if the adviser uses more traditional tools like suitability questionnaires that ask a client about her risk profile, investment objectives, and other characteristics. As a result, an adviser using AI-based systems to generate an investment management system may be at cross-purposes with the client, which would raise issues based on the adviser's duty of care.

How frequently should an investment adviser evaluate its AI program?

Because AI programs create their own rules based on the data they analyze, and autonomously make trading decisions, advisers should develop internal procedures for ensuring their programs are operating correctly.[16] For example, advisers should adequately test their AI before and periodically after it is integrated into the investment platform. In addition, advisers should develop strategies and procedures they can implement to adjust their AI programs if they do not produce favorable results. Advisers should also monitor for possible cybersecurity threats.

How should an adviser review investment decisions directed by AI to ensure the decisions still fit within a client's investment goals?

Advisers using AI should adopt and implement procedures that periodically review the performance of their AI, to ensure that performance is within expected parameters and that decisions are not being made to the detriment of clients' investment goals. Ultimately, the adviser is responsible for all decisions made by its AI-based program and therefore cannot let an AI-based program simply run without the adviser's active monitoring.[17]
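As a concrete illustration of what such a periodic review might automate, the sketch below compares a model's live metrics against pre-set tolerance bands and escalates breaches for human review. The metric names and thresholds are hypothetical, invented for illustration; they are not regulatory guidance or any particular adviser's procedure.

```python
# Hypothetical periodic performance check for an AI-driven advice model.
# Metrics and limits are illustrative only.
EXPECTED = {
    "tracking_error": 0.02,   # max acceptable deviation vs. the mandate
    "max_drawdown": 0.15,     # max peak-to-trough loss
    "turnover": 2.0,          # max annualized portfolio turnover
}

def review_model(live_metrics: dict) -> list:
    """Return a list of breaches that should trigger human review."""
    breaches = []
    for metric, limit in EXPECTED.items():
        value = live_metrics.get(metric)
        if value is not None and value > limit:
            breaches.append(f"{metric}={value:.3f} exceeds limit {limit:.3f}")
    return breaches

alerts = review_model({"tracking_error": 0.035, "max_drawdown": 0.08, "turnover": 1.4})
for alert in alerts:
    print("ESCALATE:", alert)  # e.g. tracking_error=0.035 exceeds limit 0.020
```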

[1] For example, on May 26, 2020, HSBC announced the launch of its AI Powered US Equity Index family, the first equity index product powered by artificial intelligence and big data, using data insights from IBM Watson to guide equity trading decisions. HSBC Launches First Equity Index Products Powered by AI and Big Data, Business Wire (May 26, 2020), https://www.businesswire.com/news/home/20200526005556/en/HSBC-Launches-Equity-Index-Products-Powered-AI.

[2] In his 1972 book A Random Walk Down Wall Street, Burton Malkiel maintains that share prices evolve similarly to a random walk and have no relationship with historic values or other variables. Because future share prices are not based on any pattern, they cannot be predicted, and it is unlikely any random fund adviser will be able to out-perform a passive investment fund. Burton Malkiel, A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing. W.W. Norton, 1972. Since the 1972 publication, research continues to show that beating the market is unlikely. For example, a study released at the end of 2019 indicated that over the past 10 years, 89 percent of large-cap funds, 84 percent of mid-cap funds, and 89 percent of small-cap funds failed to outperform their S&P 500 benchmark on a relative basis. SPIVA U.S. Year-End 2019 Scorecard, S&P Dow Jones Indices (2019), https://us.spindices.com/indexology/core/spiva-us-year-end-2019.

[3] Of course, the federal securities laws raise issues that are important for investment advisers who employ AI to consider even outside their fiduciary duties. For example, advisers using AI may want to rely on Rule 3a-4 under the Investment Company Act of 1940, which provides a safe harbor from registration as an investment company for advisory programs that are reasonably customized to particular investors, with respect to their investment programs. If an AI-based program provides similar advice to a variety of clients, however, the adviser may not be able to rely on Rule 3a-4. Thus, advisers who manage client accounts based on AI may need to consider the specific contours of their AI programs in light of the requirements of Rule 3a-4. This alert by no means covers the range of issues AI models raise for investment advisers.

[4] Adam Satariano and Nishant Kumar, The Massive Hedge Fund Betting on AI, Bloomberg (Sept. 27, 2017), https://www.bloomberg.com/news/features/2017-09-27/the-massive-hedge-fund-betting-on-ai.

[5] Adam Satariano and Nishant Kumar, The Massive Hedge Fund Betting on AI, Bloomberg (Sept. 27, 2017), https://www.bloomberg.com/news/features/2017-09-27/the-massive-hedge-fund-betting-on-ai.

[6] Robo-advisers are online platforms, and typically registered investment advisers, that provide discretionary asset management services to their clients through online algorithmic-based programs. See Division of Investment Management, Robo Advisers, IM Guidance Update No. 2017-02 (Feb. 2017).

[7] See, e.g., Ryan W. Neal, Wealthfront Turns to Artificial Intelligence to Improve Robo Advice (Mar. 31, 2016), https://www.wealthmanagement.com/technology/wealthfront-turns-artificial-intelligence-improve-robo-advice.

[8] SEC v. Capital Gains Research Bureau, Inc., 375 U.S. 180, 194 (1963). An investment adviser's fiduciary duty is imposed under the Advisers Act in recognition of the relationship of trust between an investment adviser and a client and is made enforceable by the antifraud provisions of Section 206 of the Advisers Act.

[9] See, e.g., Capital Gains, 375 U.S. at 194; SEC v. Tambone, 550 F.3d 106, 146 (1st Cir. 2008); SEC v. Moran, 944 F. Supp. 286, 297 (S.D.N.Y. 1996); Transamerica Mortgage Advisors, Inc. v. Lewis, 444 U.S. 11, 17 (1979); Commission Interpretation Regarding Standard of Conduct for Investment Advisers, Investment Advisers Act Release No. 5248 (July 12, 2019); IM Guidance Update No. 2017-02 (Feb. 2017); Proxy Voting by Investment Advisers, Investment Advisers Act Release No. 2106 (Jan. 31, 2003). See also SEC Commissioner Kara M. Stein, Surfing the Wave: Technology, Innovation, and Competition, Remarks at Harvard Law School's Fidelity Guest Lecture Series (Nov. 9, 2015), available at https://www.sec.gov/news/speech/surfing-wave-technology-innovation-and-competition-remarks-harvard-law-schools-fidelity (former SEC Commissioner Stein discussing the application of fiduciary duties to digital advice provided by robo-advisers).

[10] See, e.g., Investment Advisers Act Release No. 5248 (in interpretive guidance related to investment advisers' fiduciary duties, explaining that while all investment advisers owe their clients a fiduciary duty, that fiduciary duty must be viewed in the context of the agreed-upon scope of the relationship between the adviser and the client; the fiduciary duty itself may not be waived, but the exact responsibilities of the adviser in managing the client's account may be limited).

[11] Although not directly on point, guidance released by the SEC's Division of Investment Management with respect to robo-advisers sheds some light on the types of considerations advisers should address based on their fiduciary duties to clients when adopting non-traditional methods of investment advisory services. IM Guidance Update No. 2017-02 (Feb. 2017). Additionally, former SEC personnel have spoken on the SEC's own use of AI. See, e.g., SEC Commissioner Kara M. Stein, From the Data Rush to the Data Wars: A Data Revolution in Financial Markets (Sept. 27, 2018), available at https://www.sec.gov/news/speech/speech-stein-092718; Scott W. Bauguess, Acting Director and Acting Chief Economist, SEC Division of Economic and Risk Analysis, The Role of Big Data, Machine Learning, and AI in Assessing Risks: a Regulatory Perspective (June 21, 2017), available at https://www.sec.gov/news/speech/bauguess-big-data-ai.

[12] Investment Advisers Act Release No. 5248.

[13] Mark Hulbert, Using AI for Picking Stocks? Not So Fast, Wall Street Journal (Jan 5, 2020), https://www.wsj.com/articles/use-ai-for-picking-stocks-not-so-fast-11578279960.

[14] In a response to a request for a no-action letter, the staff of the SEC explained to an investment adviser purporting to rely on abilities as a psychic medium that he would be required to disclose that the predictive value of his methods had not been scientifically established. John Anthony, SEC Staff No-Action Letter (Mar. 19, 1975). Similarly, because the predictive value of AI has not been conclusively established, investment advisers should consider the extent to which the SEC would expect them to disclose the lack of proof that AI-based algorithms are better than or even equal to the success of more traditional tools.

[15] Investment Advisers Act Release No. 5248.

[16] The Financial Industry Regulatory Authority (FINRA) published guidance in 2016 related to the governance and supervision framework advisers should have in place to adopt client-facing digital investment advice tools. While not directly related to AI, the guidance emphasizes the importance of supervising algorithms used in digital-advice tools and periodically assessing whether "[the] algorithm is consistent with the firm's investment and analytical approaches." See FINRA Report on Digital Investment Advice (Mar. 2016), available at https://www.finra.org/sites/default/files/digital-investment-advice-report.pdf.

The SEC has previously brought enforcement actions against investment advisers for failing to supervise those in charge of programming and monitoring algorithmic or other automated trading strategies. In one enforcement action, the founder of a hedge fund allowed his co-founder complete control of operating and monitoring an algorithm used to make trade decisions. He was made aware that the algorithm was not working as expected but made no effort to follow up with the co-founder or to inform investors or prospective investors of any issues related to the algorithm. In the Matter of Timothy S. Dembski, Investment Advisers Act Release No. 4671 (March 24, 2017).

[17] For example, robo-adviser Wealthfront was the subject of an SEC enforcement action for falsely stating that it monitored all client accounts to avoid transactions that might result in a wash sale, which would potentially reduce investor returns, when it did not actually do so. See In the Matter of Wealthfront Advisers, Investment Advisers Act Release No. 5086 (Dec. 21, 2018).

Lecturer in Artificial Intelligence for Digital Infrastructures job with UNIVERSITY OF BRISTOL | 208485 – Times Higher Education (THE)

Job number: ACAD104575
Division/School: School of Computer Science, Electrical and Electronic Engineering and Engineering Maths
Contract type: Open Ended
Working pattern: Full time
Salary: £38,017 - £49,553
Closing date for applications: 28-Jun-2020

The Smart Internet Lab at the University of Bristol is one of the UK's most renowned Communications and Networks research centres aiming to address grand technological, societal and industrial challenges. Our 200 experts on wireless, optical communications and networks challenge the complexity of tomorrow's world by fusing research expertise and innovation in a range of research areas such as: IoT, 5G/6G, Future Internet, Autonomous Networks, Machine Learning, Artificial Intelligence, Network Convergence, Mobile Edge Computing and Network Softwarization. Our unique offering across optical, wireless, IoT and cloud technologies enables us to bring together end-to-end network design and optimisation and impact regional, national and global ICT innovations.

BDFI is a University Research Institute that pioneers cross-disciplinary approaches to digital innovation. BDFI is developing in-depth understanding of sociotechnical insights to drive the creation of digital technologies for inclusive, prosperous and sustainable societies. The Institute has recently received £110m in funding from UKRI, the private sector and philanthropy to develop a set of unique research facilities to fulfil its mission.

This post will address both physical & virtual elements, connectivity and cloud. Experience in the areas of AI for telecom infrastructure, AI for network computing, AI as a Service, Knowledge as a Service, network informatics and data analytics, resource abstraction and resource management, and software control and autonomous operations is highly desirable. Many of these subjects are critical in the development of Smart Cities, Smart Manufacturing and Smart Utilities. Both the Smart Internet Lab and BDFI have a number of upcoming research projects with which the research remit of this academic post is well aligned, so the successful applicant will be joining a vibrant and active cross-disciplinary research environment. Successful applicants will have a proven track record in high-quality teaching at undergraduate and postgraduate levels.

Please include with your CV a covering letter with a statement on the contributions that you can make to teaching in the Department, especially in respect of innovation in delivery and content. We would like to encourage applications from groups under-represented in electronic engineering.

For informal discussion about the post, you are welcome to contact:

Professor Angela Doufexi (mvse-eee@bristol.ac.uk), Head of Department, or

Professor Ian Nabney (sceem-hos@bristol.ac.uk), Head of the School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, or

Professor Dimitra Simeonidou (Dimitra.Simeonidou@bristol.ac.uk), Director of Smart Internet Lab and co-Director of BDFI

The selection process, including interviews, is expected to take place in June/July 2020.

We welcome applications from all members of our community and particularly encourage those from diverse groups, such as members of the LGBT+ and BAME communities, to join us.

Artificial Intelligence (AI) Market to Reach USD 202.57 Billion by 2026; Rising Demand for Cloud-based Applications to Aid Growth: Fortune Business…

Pune, May 25, 2020 (GLOBE NEWSWIRE) -- The global AI market is set to gain momentum from the rising utilization of cloud-based services and applications worldwide. Also, the increasing adoption of connected devices would impact the market positively in the coming years. This information is published by Fortune Business Insights in a recent report, titled "Artificial Intelligence (AI) Market Size, Share and Industry Analysis By Component (Hardware, Software, Services), By Technology (Computer Vision, Machine Learning, Natural Language Processing, Others), By Industry Vertical (BFSI, Healthcare, Manufacturing, Retail, IT & Telecom, Government, Others) and Regional Forecast, 2019-2026." The report further states that the global AI market size stood at USD 20.67 billion in 2018 and is projected to reach USD 202.57 billion by 2026, thereby exhibiting a CAGR of 33.1% during the forecast period.
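As a quick sanity check on those headline numbers, the implied compound annual growth rate over the eight years from 2018 to 2026 can be computed directly. This is a worked check of the report's arithmetic, not data from the report itself:

```python
# Verify the stated CAGR from the 2018 base and 2026 forecast values (USD billion).
base, forecast, years = 20.67, 202.57, 2026 - 2018
cagr = (forecast / base) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~33.0%, consistent with the stated 33.1%
```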

Highlights of This Report:

Get Sample PDF Brochure: https://www.fortunebusinessinsights.com/enquiry/request-sample-pdf/artificial-intelligence-market-100114

An Overview of the Impact of COVID-19 on this Market:

The emergence of COVID-19 has brought the world to a standstill. We understand that this health crisis has brought an unprecedented impact on businesses across industries. However, this too shall pass. Rising support from governments and several companies can help in the fight against this highly contagious disease. There are some industries that are struggling and some are thriving. Overall, almost every sector is anticipated to be impacted by the pandemic.

We are taking continuous efforts to help your business sustain and grow during COVID-19 pandemics. Based on our experience and expertise, we will offer you an impact analysis of coronavirus outbreak across industries to help you prepare for the future.

Click here to get the short-term and long-term impact of COVID-19 on this Market.

Please visit: https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-market-100114

Drivers & Restraints-

Rising Demand for Industrial Robots to Propel Growth

The rising demand for customized robots is a vital driver of the AI market growth. Numerous reputed organizations in the developed nations are presently engaging in the development and supply of industrial robots equipped with the AI technology. Japan and South Korea, for instance, supplied approximately 38,600 and 41,400 units of industrial robots in 2016, respectively. Also, in the same year, China provided almost 87,000 units across the globe. Apart from that, AI technology is mainly required in the retail sector for enhancing customer service. Coupled with this, the increasing usage of machine learning (M2P and M2M) would contribute to the market growth. However, the rising concerns regarding the unreliability of AI algorithms and data privacy may hamper the market growth.

Segment-

Natural Language Processing Segment to Dominate Owing to Its Usage in Various Applications

In terms of technology, the market is segmented into natural language processing, machine learning, computer vision, and others. Amongst these, the computer vision segment held a 22.5% AI market share in 2018. Computer vision systems help in identifying and detecting patterns, synthesize and analyze realistic interactive interfaces, and use ID tags to showcase pictures of associated items. The natural language processing segment currently accounts for the largest share, as it is adopted for a wide range of applications, such as Information Retrieval (IR), speech processing, semantic disambiguation, text parsing, and machine translation.

Speak to Analyst: https://www.fortunebusinessinsights.com/enquiry/speak-to-analyst/artificial-intelligence-market-100114

Regional Analysis-

Rising Adoption of AI by Biopharma Companies to Favor Growth in Asia Pacific

In 2018, North America generated USD 9.72 billion in revenue and is set to remain in the leading position throughout the forecast period. This growth is attributable to the ongoing technological advancements in the fields of natural language processing, machine learning, and analytical tools. Besides, the rising awareness programs regarding the benefits of AI tools and systems would propel growth in this region. Asia Pacific, on the other hand, is expected to grow considerably, backed by the major contribution of China. The government of that country is planning to partner with Baidu to support the implementation of AI and to develop a deep learning laboratory spanning military, manufacturing, smart agriculture, and intelligent logistics applications. Apart from that, AI is being extensively adopted by a large number of biopharma companies in this region. Developed nations such as Japan are investing hefty amounts of money in creating AI algorithms to analyze large volumes of data.

Competitive Landscape-

Key Players Focus on Launching New Products to Strengthen Position

The market is fragmented, with various companies operating across the world. They are mainly focusing on investing huge sums to develop new products. Numerous start-ups are adopting the strategy of mergers and acquisitions. Some of the others are considering the impact of the COVID-19 pandemic and are developing novel solutions to help people perform various tasks. Below are a couple of the recent industry developments:

Fortune Business Insights lists out the names of all the AI service providers present in the global market. They are as follows:

Quick Buy AI Market Research Report https://www.fortunebusinessinsights.com/checkout-page/100114

Detailed Table of Content

TOC Continued...!!!

Get your Customized Research Report: https://www.fortunebusinessinsights.com/enquiry/customization/artificial-intelligence-market-100114

Have a Look at Related Research Insights:

Home Automation Market Size, Share and Industry Analysis by Product Type (Luxury, Mainstream, Managed, DIY Do It Yourself Home Automation System), Application (Safety and Security, Lighting, Entertainment, Heating, Ventilation and Air conditioning), Networking Technology (Wired & Wireless) and Regional Forecast 2018-2025

Identity And Access Management Market Size, Share and Industry Analysis By Component (Provisioning, Directory Services, Single Sign-On, Others), By Deployment Model (Cloud, On-Premises), By Enterprise Size (Large Enterprises, Small and Medium Enterprises), By Industry Vertical (BFSI, IT and Telecom, Retail and Consumer Packed Goods, Others) And Regional Forecast 2019-2026

Speech and Voice Recognition Market Size, Share & Industry Analysis, By Component (Solution, Services), By Technology (Voice Recognition, Speech Recognition), By Deployment (On-Premises, Cloud), By End-User (Healthcare, IT and Telecommunications, Automotive, BFSI, Government, Legal, Retail, Travel and Hospitality and Others) and Regional Forecast, 2019 - 2026

Internet of Things (IoT) Market Size, Share and Industry Analysis By Platform (Device Management, Application Management, Network Management), By Software & Services (Software Solution, Services), By End-Use Industry (BFSI, Retail, Governments, Healthcare, Others) And Regional Forecast, 2019 - 2026

About Us:

Fortune Business Insights offers expert corporate analysis and accurate data, helping organizations of all sizes make timely decisions. We tailor innovative solutions for our clients, assisting them in addressing challenges distinct to their businesses. Our goal is to empower our clients with holistic market intelligence, giving a granular overview of the market they are operating in.

Our reports contain a unique mix of tangible insights and qualitative analysis to help companies achieve sustainable growth. Our team of experienced analysts and consultants use industry-leading research tools and techniques to compile comprehensive market studies, interspersed with relevant data.

At Fortune Business Insights, we aim at highlighting the most lucrative growth opportunities for our clients. We therefore offer recommendations, making it easier for them to navigate through technological and market-related changes. Our consulting services are designed to help organizations identify hidden opportunities and understand prevailing competitive challenges.

Contact Us:
Fortune Business Insights Pvt. Ltd.
308, Supreme Headquarters,
Survey No. 36, Baner,
Pune-Bangalore Highway,
Pune - 411045, Maharashtra, India.
Phone: US: +1-424-253-0390 | UK: +44-2071-939123 | APAC: +91-744-740-1245
Email: sales@fortunebusinessinsights.com
Fortune Business Insights | LinkedIn | Twitter | Blogs

Read Press Release https://www.fortunebusinessinsights.com/press-release/artificial-intelligence-market-9227

Walmart Employees Are Out to Show Its Anti-Shoplifting AI Doesn’t Work – WIRED

In January, my coworker received a peculiar email. The message, which she forwarded to me, was from a handful of corporate Walmart employees calling themselves the Concerned Home Office Associates. (Walmart's headquarters in Bentonville, Arkansas, is often referred to as the Home Office.) While it's not unusual for journalists to receive anonymous tips, they don't usually come with their own slickly produced videos.

The employees said they were past their breaking point with Everseen, a small artificial intelligence firm based in Cork, Ireland, whose technology Walmart began using in 2017. Walmart uses Everseen in thousands of stores to prevent shoplifting at registers and self-checkout kiosks. But the workers claimed it misidentified innocuous behavior as theft, and often failed to stop actual instances of stealing.

They told WIRED they were dismayed that their employer, one of the largest retailers in the world, was relying on AI they believed was flawed. One worker said that the technology was sometimes even referred to internally as "NeverSeen" because of its frequent mistakes. WIRED granted the employees anonymity because they are not authorized to speak to the press.

The workers said they had been upset about Walmart's use of Everseen for years, and claimed colleagues had raised concerns about the technology to managers, but were rebuked. They decided to speak to the press, they said, after a June 2019 Business Insider article reported Walmart's partnership with Everseen publicly for the first time. The story described how Everseen uses AI to analyze footage from surveillance cameras installed in the ceiling, and can detect issues in real time, such as when a customer places an item in their bag without scanning it. When the system spots something, it automatically alerts store associates.

"Everseen overcomes human limitations. By using state-of-the-art artificial intelligence, computer vision systems, and big data we can detect abnormal activity and other threats," a promotional video referenced in the story explains. "Our digital eye has perfect vision and it never needs a day off."
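At its core, the system the story describes is an event-matching problem: reconcile vision-detected "item entered bagging area" events with the register's scan log, and alert when they diverge. The sketch below is entirely hypothetical (Everseen's actual pipeline is not public; the window and function names are invented) and only illustrates that reconciliation idea:

```python
# Hypothetical reconciliation of vision events with the scan log.
MATCH_WINDOW_S = 5.0  # assume a scan should precede bagging by at most 5 s

def find_missed_scans(bagging_events, scan_events):
    """bagging_events, scan_events: sorted lists of timestamps (seconds).
    Return bagging events with no unmatched scan within MATCH_WINDOW_S before them."""
    alerts = []
    scans = list(scan_events)
    for t_bag in bagging_events:
        matched = next((s for s in scans if 0 <= t_bag - s <= MATCH_WINDOW_S), None)
        if matched is None:
            alerts.append(t_bag)      # no recent scan: flag for an associate
        else:
            scans.remove(matched)     # each scan can explain only one bagged item
    return alerts

# Two items bagged, one scan: the stacked second item is flagged.
print(find_missed_scans([10.0, 11.0], [8.5]))  # -> [11.0]
```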

In an effort to refute the claims made in the Business Insider piece, the Concerned Home Office Associates created a video, which purports to show Everseen's technology failing to flag items not being scanned in three different Walmart stores. Set to cheery elevator music, it begins with a person using self-checkout to buy two jumbo packages of Reese's White Peanut Butter Cups. Because they're stacked on top of each other, only one is scanned, but both are successfully placed in the bagging area without issue.

The same person then grabs two gallons of milk by their handles, and moves them across the scanner with one hand. Only one is rung up, but both are put in the bagging area. They then put their own cell phone on top of the machine, and an alert pops up saying they need to wait for assistance, a false positive. "Everseen finally alerts! But does so mistakenly. Oops again," a caption reads. The filmmaker repeats the same process at two more stores, where they fail to scan a heart-shaped Valentine's Day chocolate box with a puppy on the front and a Philips Sonicare electric toothbrush. At the end, a caption explains that Everseen failed to stop more than $100 of would-be theft.

False Positives

The video isn't definitive proof that Everseen's technology doesn't work as well as advertised, but its existence speaks to the level of frustration felt by the group of anonymous Walmart employees, and the lengths to which they went to prove their objections had merit.

In interviews, the workers, whose jobs include knowledge of Walmart's loss prevention programs, said their top concern with Everseen was false positives at self-checkout. The employees believe that the tech frequently misinterprets innocent behavior as potential shoplifting, which frustrates customers and store associates, and leads to longer lines. "It's like a noisy tech, a fake AI that just pretends to safeguard," said one worker.

Coronavirus Update: Recent FTC Guidance on the Use of Artificial Intelligence and Algorithms in the Age of COVID-19 – Government Contracts Legal Forum

On April 8, 2020, the Federal Trade Commission (FTC) published a blog post titled "Using Artificial Intelligence and Algorithms" that offers important lessons about the use of AI and algorithms in automated decision-making. The post begins by noting that headlines today tout rapid improvements in AI technology, and the use of more advanced AI has enormous potential to improve welfare and productivity. But more sophisticated AI also presents risks, such as the potential for unfair or discriminatory outcomes. This tension between benefits and risks is a particular concern in Health AI, and the tension will continue as AI technologies are deployed to tackle the current COVID-19 crisis.

The FTC post reminds companies that, while the sophistication of AI is new, automated decision-making is not, and the FTC has a long history of dealing with the challenges presented by the use of data and algorithms to make decisions about consumers.

How artificial intelligence is keeping time-critical shipments on track during pandemic – FreightWaves

Consumers are seeing and feeling the impact of COVID-19 supply chain interruptions and delays in their everyday lives, from shortages of paper goods and cleaning supplies in grocery stores, to rising prices for beef and poultry.

For specialized industries such as health care and aerospace, however, the stakes of supply chain interruptions and service failures have perhaps never been higher. So far the traditional hub-and-spoke time-critical logistics industry has largely struggled to adapt, while newer technology-enabled models in the industry are showing significant promise to perform in a crisis.

Artificial intelligence (AI) platforms in particular have shown remarkable resilience during the COVID-19 crisis and the ability to quickly pivot shipments with minimal delays and service failures. California-based Airspace Technologies was one of the first logistics providers in the time-critical space to implement a breakthrough AI-powered platform that they say has enabled them to swiftly adjust operations without interruptions to their 24/7, 365-days-a-year services.

"Airspace was built with moments like these in mind. It was designed to perform in a crisis, when time is of the essence and lives and entire industries are quite literally on the line," said Airspace Technologies CEO and co-founder Nick Bulcao.

With years of experience specializing in urgent medical deliveries, such as organs for transplant, as well as aerospace parts for downed aircraft, Airspace says they have noticed a significant impact on their business as elective surgeries are delayed and fewer aircraft are flying. But the automated, AI-driven software that is the heartbeat of their operations has made adjusting to the new realities of the industry immensely more manageable.

With lives on the line, Airspace moved quickly to set up new shipment networks and routes each day to begin transporting urgently needed COVID-19 test kits, blood and plasma units, and vital organs for transplant to get where they need to go. Their fully transparent, automated software platform also allows minute-by-minute real-time tracking of deliveries, so hospitals and labs know exactly where kits or urgent supplies are and when they will arrive.

Airspace is currently making between 250 and 300 health care-related deliveries each day, and has transported as many as 30 organs in just one week.

The company's aerospace parts delivery business has had its own heroic moments during the COVID-19 crisis. An independent delivery driver for Airspace in the Bay Area recounted a harrowing incident last month in which he was asked to make a critical aerospace part delivery not to an airport, but to Stanford University Medical Center instead. Sensing the urgency of the moment, the driver immediately retrieved the part and made his way to the hospital.

"Arriving two hours earlier than expected, I called my point of contact, who was still over an hour away. After some coordination with the engineer and hospital staff, I handed over the critical part for the medevac helicopter stranded on the hospital roof to a nurse instead, helping get the lifesaving equipment back in the air ahead of schedule," said Bryan Sperry, 61, the driver.

Airspace says the software also allowed them to protect workers by rapidly transitioning their team to fully remote operations across the United States.

"The key was doing so with zero disruption to our round-the-clock operations and with full capabilities still in place," said Ryan Rusnak, Airspace co-founder and chief technology officer. "After some planning, it took the team less than 36 hours to make a complete transition. They're now remotely continuing to provide the seamless, end-to-end experience our customers expect."

The transition and dramatic decline in passenger flights has not been without its challenges, though. Fewer passenger flights means fewer routing options, often accompanied by delays that can be costly for customers. That is where the power of the AI platform can often make the biggest difference, Airspace says.

One of the key features of their AI software is an automated delay declaration, which allows the operations team to quickly pivot to the next optimal routing if an order experiences a flight delay, even in the middle of a trip. For example, on one day in March this year, amid more than 100 flight cancellations at the Las Vegas airport, Airspace's technology allowed the company to hold disruption to critical deliveries to an average delay of less than 38 minutes, while over 60% of orders there experienced no delays at all.
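Conceptually, that automated delay declaration amounts to listening for flight-status changes and re-solving the routing for any affected shipment. The toy sketch below is hypothetical (Airspace's routing engine and data model are not public; all field names are invented) and shows only the trigger logic:

```python
# Hypothetical delay-declaration trigger; picks the earliest-arriving
# remaining route when a booked flight is delayed or cancelled.
def on_flight_status_change(shipment, flight, candidate_routes):
    if flight["status"] not in ("DELAYED", "CANCELLED"):
        return shipment  # nothing to do
    viable = [r for r in candidate_routes if r["departs"] >= shipment["ready_time"]]
    if not viable:
        raise RuntimeError("no viable re-route; escalate to operations team")
    shipment["route"] = min(viable, key=lambda r: r["arrives"])
    return shipment

shipment = {"id": "ORG-123", "ready_time": 10.0, "route": None}
routes = [{"departs": 11.0, "arrives": 15.5}, {"departs": 10.5, "arrives": 14.0}]
print(on_flight_status_change(shipment, {"status": "CANCELLED"}, routes)["route"])
# -> {'departs': 10.5, 'arrives': 14.0}
```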

The rapidly changing dynamics as a result of the COVID-19 pandemic have created enormous challenges across industries and supply chains, but the power of AI to keep industry and lifesaving goods and services moving in a crisis has shown a positive path toward maintaining affordability, speed, reliability and transparency in urgent logistics.

COVID-19 Impact: A Mix of Challenges and Opportunities | Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024 | Growing Adoption of Cloud…

LONDON--(BUSINESS WIRE)--Technavio has been monitoring the artificial intelligence-as-a-service (AIaaS) market and it is poised to grow by USD 15.14 billion during 2020-2024, progressing at a CAGR of over 48% during the forecast period. The report offers an up-to-date analysis regarding the current market scenario, latest trends and drivers, and the overall market environment.

Technavio suggests three forecast scenarios (optimistic, probable, and pessimistic) considering the impact of COVID-19. Please Request Free Sample Report on COVID-19 Impact

The market is concentrated, and the degree of concentration will accelerate during the forecast period. Alphabet Inc., Amazon.com Inc., Apple Inc., Intel Corp., International Business Machines Corp., Microsoft Corp., Oracle Corp., Salesforce.com Inc., SAP SE, and SAS Institute Inc. are some of the major market participants. The growing adoption of cloud-based solutions will offer immense growth opportunities. To make the most of the opportunities, market vendors should focus more on the growth prospects in the fast-growing segments, while maintaining their positions in the slow-growing segments.

Growing adoption of cloud-based solutions has been instrumental in driving the growth of the market.

Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024: Segmentation

Artificial Intelligence-as-a-Service (AIaaS) Market is segmented as below:

To learn more about the global trends impacting the future of market research, download a free sample: https://www.technavio.com/talk-to-us?report=IRTNTR41175

Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024: Scope

Technavio presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources. Our artificial intelligence-as-a-service (AIaaS) market report covers the following areas:

This study identifies the increasing adoption of AI in predictive analysis as one of the prime reasons driving the artificial intelligence-as-a-service (AIaaS) market growth during the next few years.

Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024: Vendor Analysis

We provide a detailed analysis of vendors operating in the artificial intelligence-as-a-service (AIaaS) market, including some of the vendors such as Alphabet Inc., Amazon.com Inc., Apple Inc., Intel Corp., International Business Machines Corp., Microsoft Corp., Oracle Corp., Salesforce.com Inc., SAP SE, and SAS Institute Inc. Backed with competitive intelligence and benchmarking, our research reports on the artificial intelligence-as-a-service (AIaaS) market are designed to provide entry support, customer profile and M&As as well as go-to-market strategy support.

Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024: Key Highlights

Table Of Contents:

Executive Summary

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation by End-user

Customer Landscape

Geographic Landscape

Drivers, Challenges, and Trends

Vendor Landscape

Vendor Analysis

Appendix

Scope of the report

Currency conversion rates for US$

Research methodology

List of abbreviations

About Us

Technavio is a leading global technology research and advisory company. Its research and analysis focus on emerging market trends and provide actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Its client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

See more here:

COVID-19 Impact: A Mix of Challenges and Opportunities | Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024 | Growing Adoption of Cloud...

IIT-Ropar and TSW Launch a PG Programme in Artificial Intelligence – THE WEEK

(Eds: Disclaimer: The following press release comes to you under an arrangement with Business Wire India. PTI takes no editorial responsibility for the same.) MUMBAI, May 29, 2020 /PRNewswire/ -- IIT-Ropar, one of the eight new IITs established by the Ministry of Human Resource Development (MHRD), Government of India, and TSW, the executive education division of Times Professional Learning (a part of The Times of India Group), have launched a Post Graduate Certificate Programme in Artificial Intelligence & Deep Learning.

The programme will be coordinated by The Indo-Taiwan Joint Research Centre (ITJRC) on Artificial Intelligence (AI) and Machine Learning (ML), at IIT-Ropar. Supported by the Ministry of Science and Technology, Taiwan, ITJRC is a bilateral centre for collaborative research in disruptive technologies like AI and ML.

The programme, with its focus on Artificial Intelligence and Deep Learning, requires a minimum of two years of work experience in the IT industry. Though an engineering degree is a desirable prerequisite, applicants do not need a coding or mathematics background to be eligible. Selection into the programme will be on the basis of an application and an interview.

The programme has a duration of six months, and classes will be held over weekends as live online instructor sessions with IIT-Ropar faculty and notable industry experts. The programme has been designed with inputs from industry and strikes the right balance between rigor and effort, making it highly suitable for working professionals. The participants will get a joint certificate from TSW and IIT-Ropar, as well as IIT-Ropar Executive Education Alumni status, upon course completion. The certificates will be awarded at a convocation ceremony on the IIT-Ropar campus.

This programme, being industry-led with its focus on Artificial Intelligence and Deep Learning, comes with an exhaustive curriculum that includes modules on Emerging Technologies & AI, Data Science, Machine Learning, Programming with TensorFlow, Deep Learning & Neural Networks, Image Recognition, Speech Recognition, AI Applications, and a Capstone Project.

In a LinkedIn report - 2020 Emerging Jobs Report India - featuring the top 15 emerging jobs, 'AI Specialist' stood at the number 2 position. According to Gartner, AI would open up approximately 2.3 million job opportunities by 2020. Gartner also added that, "starting in 2020, AI-related job creation will cross into positive territory, reaching two million net-new jobs in 2025." Those aspiring to build a career in AI and Deep Learning (DL) can get a head start with this programme.

IIT-Ropar is a highly ranked institution, having earned high ranks in the 'Times Higher Education (THE) World University Rankings 2020', 'QS India Rankings 2020' and the Union HRD Ministry's 'National Institutional Ranking Framework (NIRF)'. 'THE World University Rankings' is one of the largest and most diverse university rankings, including about 1,400 universities from 92 countries.

Prof. Sarit Kumar Das, Director, IIT-Ropar, said, "IIT-Ropar has established itself as one of the top technological institutes in India. It focuses on promoting cutting-edge research and high quality publications in all the disciplines. It is expanding its outreach to industry and the best academic institutions in the world through active collaborations."

Dr. Rohit Sharma, Coordinator-ITJRC, commented, "ITJRC focuses on academia to academia, and academia to industry collaborations in various domains of AI and ML. The partnership with TSW will help us take our expertise in AI and ML to a larger audience."

Mr. Anish Srikrishna, CEO, Times Professional Learning (a part of The Times of India Group), added, "We are happy to partner with IIT-Ropar in bringing a programme in AI & DL to our learner community. With the job market for AI poised for growth, the programme surely will help fulfill the career aspirations of many students."

IIT-Ropar and TSW formally launched the programme in a virtual ceremony on 21st May, 2020, when Prof. Das and Mr. Srikrishna signed an MoU for a long-term collaboration. The programme, the first to be launched as a part of this collaboration, will start accepting applications from 20th July, 2020.

To know more, please log on to https://timestsw.com/course/post-graduate-cece-deep-learning/ or watch https://youtu.be/dqENWLUhLRU, email tswadmission@timesgroup.com or call +91-7400084666

About TSW

TSW is the executive education division of Times Professional Learning (a part of The Times of India Group), which aims to enhance the leadership and general management skills of experienced professionals. With a vision to make world-class education accessible to aspiring business leaders through strategic collaborations, TSW's passion for excellence and belief that 'Executive Education Empowers' work hand-in-hand with the organisation's aim to impart knowledge to the learner community nationwide.

About IIT-Ropar

IIT-Ropar was founded in 2008 as an engineering, science, and technology higher education institute located in Rupnagar, Punjab, India. It became the 'Highest-ranked Indian newcomer' in the Times Higher Education World University Ranking 2020, placed in the 301-350 band. It ranked No. 1 among the 56 Indian institutes that appeared in the list of the best universities of the world, and was the second-ranked Indian institute in the list of Global Universities. On the 'research citation' parameter, the institute scored 100 and was ranked No. 1.

Read the original here:

IIT-Ropar and TSW Launch a PG Programme in Artificial Intelligence - THE WEEK

What is Artificial Intelligence? | Azure Blog and Updates …

It has been said that Artificial Intelligence will define the next generation of software solutions. If you are even remotely involved with technology, you will almost certainly have heard the term with increasing regularity over the last few years. It is likely that you will also have heard different definitions for Artificial Intelligence offered, such as:

"The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." - Encyclopedia Britannica

"Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans." - Wikipedia

How useful are these definitions? What exactly are "tasks commonly associated with intelligent beings"? For many people, such definitions can seem too broad or nebulous. After all, there are many tasks that we can associate with human beings! What exactly do we mean by "intelligence" in the context of machines, and how is this different from the tasks that many traditional computer systems are able to perform, some of which may already seem to have some level of intelligence in their sophistication? What exactly makes the Artificial Intelligence systems of today different from sophisticated software systems of the past?

It could be argued that any attempt to try to define Artificial Intelligence is somewhat futile, since we would first have to properly define intelligence, a word which conjures a wide variety of connotations. Nonetheless, this article attempts to offer a more accessible definition for what passes as Artificial Intelligence in the current vernacular, as well as some commentary on the nature of today's AI systems, and why they might be more aptly referred to as intelligent than previous incarnations.

Firstly, it is interesting and important to note that the technical difference between what was referred to as Artificial Intelligence more than 20 years ago and traditional computer systems is close to zero. Prior attempts to create intelligent systems, known as expert systems at the time, involved the complex implementation of exhaustive rules that were intended to approximate intelligent behavior. For all intents and purposes, these systems did not differ from traditional computers in any drastic way other than having many thousands more lines of code. The problem with trying to replicate human intelligence in this way was that it required far too many rules and ignored something very fundamental to the way intelligent beings make decisions, which is very different from the way traditional computers process information.

Let me illustrate with a simple example. Suppose I walk into your office and I say the words "Good weekend?" Your immediate response is likely to be something like "yes" or "fine thanks". This may seem like very trivial behavior, but in this simple action you will have immediately demonstrated a behavior that a traditional computer system is completely incapable of. In responding to my question, you have effectively dealt with ambiguity by making a prediction about the correct way to respond. It is not certain that by saying "Good weekend?" I actually intended to ask you whether you had a good weekend. Here are just a few possible intents behind that utterance:

A question (did you have a good weekend?); an assertion (the football game at the weekend was good, wasn't it?); and more.

The most likely intended meaning may seem obvious, but suppose that when you respond with "yes", I had responded with "No, I mean it was a good football game at the weekend, wasn't it?". It would have been a surprise, but without even thinking, you will absorb that information into a mental model, correlate the fact that there was an important game last weekend with the fact that I said "Good weekend?", and adjust the probability of the expected response accordingly so that you can respond correctly the next time you are asked the same question. Granted, those aren't the thoughts that will pass through your head! You happen to have a neural network (aka your brain) that will absorb this information automatically and learn to respond differently next time.

The key point is that even when you do respond next time, you will still be making a prediction about the correct way in which to respond. As before, you won't be certain, but if your prediction fails again, you will gather new data, which leads to my suggested definition of Artificial Intelligence, as it stands today:

Artificial Intelligence is the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future.

This is a somewhat appropriate definition of Artificial Intelligence because it is exactly what AI systems today are doing, and more importantly, it reflects an important characteristic of human beings which separates us from traditional computer systems: human beings are prediction machines. We deal with ambiguity all day long, from very trivial scenarios such as the above, to more convoluted scenarios that involve playing the odds on a larger scale. This is in one sense the essence of reasoning. We very rarely know whether the way we respond to different scenarios is absolutely correct, but we make reasonable predictions based on past experience.

Just for fun, let's illustrate the earlier example with some code in R! If you are not familiar with R but would like to follow along, see the instructions on installation. First, let's start with some data that represents information in your mind about when a particular person has said "good weekend?" to you.
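
The code listings from the original post did not survive in this copy of the article. As a stand-in, here is a minimal sketch of what the data step might look like; the FootballGamePlayed column, the row values and the variable names are illustrative assumptions, not the post's actual code:

    # Hypothetical history of exchanges: each row pairs a contextual feature
    # (was a football game played that weekend?) with the response we gave
    # to "good weekend?". All values here are assumptions for illustration.
    GoodWeekend <- data.frame(
      FootballGamePlayed  = c(FALSE, FALSE, TRUE, FALSE, FALSE, FALSE),
      GoodWeekendResponse = c("yes", "yes", "yes", "no", "yes", "yes")
    )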

In this example, we are saying that GoodWeekendResponse is our score label (i.e. it denotes the appropriate response that we want to predict). For modelling purposes, there have to be at least two possible values; in this case, "yes" and "no". For brevity, the response in most cases is "yes".

We can fit the data to a logistic regression model:
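
This listing is also missing from this copy; below is a minimal sketch under the same assumptions, using base R's glm(), one standard way to fit a logistic regression when the response takes exactly two values:

    # Fit a logistic regression predicting the response from the context.
    # Wrapping the response in factor() gives glm() the two-level outcome
    # that the binomial family expects.
    model <- glm(factor(GoodWeekendResponse) ~ FootballGamePlayed,
                 data = GoodWeekend, family = binomial)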

Now what happens if we try to make a prediction on that model, where the expected response is different from anything we have previously recorded? In this case, I am expecting the response to be "Go England!". Below, some more code to add the prediction. For illustration we just hardcode the new input data; the output is shown in the comments:
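
The listing is missing here as well; the sketch below reconstructs the predict-and-learn loop the article describes. Because the actual response "Go England!" introduces a third class, it swaps plain glm() for multinomial logistic regression via nnet::multinom(); that substitution, like the rest of the sketch, is our assumption rather than the post's original code:

    library(nnet)  # ships with R; multinom() handles 3+ response classes

    # Hardcode the new input: a weekend on which a football game was played.
    new_input <- data.frame(FootballGamePlayed = TRUE)

    fit <- multinom(factor(GoodWeekendResponse) ~ FootballGamePlayed,
                    data = GoodWeekend, trace = FALSE)
    predict(fit, new_input)
    # [1] yes            <- the initial (wrong) prediction

    # Incorporate the actual response back into the data and refit.
    actual <- data.frame(FootballGamePlayed = TRUE,
                         GoodWeekendResponse = "Go England!")
    GoodWeekend <- rbind(GoodWeekend, actual)
    fit <- multinom(factor(GoodWeekendResponse) ~ FootballGamePlayed,
                    data = GoodWeekend, trace = FALSE)
    predict(fit, new_input, type = "probs")
    # "Go England!" now carries roughly a 50% probability for this input;
    # repeat the loop and it becomes the model's chosen response.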

The initial prediction of "yes" was wrong, but note that in addition to predicting against the new data, we also incorporated the actual response back into our existing model. Also note that the new response value "Go England!" has been learnt, with a probability of 50 percent based on current data. If we run the same piece of code again, the probability that "Go England!" is the right response based on prior data increases, so this time our model chooses to respond with "Go England!", because it has finally learnt that this is most likely the correct response!

Do we have Artificial Intelligence here? Well, clearly there are different levels of intelligence, just as there are with human beings. There is, of course, a good deal of nuance that may be missing here, but nonetheless this very simple program will be able to react, with limited accuracy, to incoming data on one very specific topic, as well as learn from its mistakes and make adjustments based on predictions, without the need to develop exhaustive rules to account for different responses that are expected for different combinations of data. This same principle underpins many AI systems today, which, like human beings, are mostly sophisticated prediction machines. The more sophisticated the machine, the more it is able to make accurate predictions based on a complex array of data used to train various models, and the most sophisticated AI systems of all are able to continually learn from faulty assertions in order to improve the accuracy of their predictions, thus exhibiting something approximating human intelligence.

You may be wondering, based on this definition, what the difference is between machine learning and Artificial Intelligence. After all, isn't this exactly what machine learning algorithms do, make predictions based on data using statistical models? This very much depends on the definition of machine learning, but ultimately most machine learning algorithms are trained on static data sets to produce predictive models, so machine learning algorithms only facilitate part of the dynamic in the definition of AI offered above. Additionally, machine learning algorithms, much like the contrived example above, typically focus on specific scenarios, rather than working together to create the ability to deal with ambiguity as part of an intelligent system. In many ways, machine learning is to AI what neurons are to the brain: a building block of intelligence that can perform a discrete task, but that may need to be part of a composite system of predictive models in order to really exhibit the ability to deal with ambiguity across an array of behaviors that might approximate intelligent behavior.

There are a number of practical advantages in building AI systems, but as discussed and illustrated above, many of these advantages pivot around time to market. AI systems enable the embedding of complex decision making without the need to build exhaustive rules, which traditionally can be very time-consuming to procure, engineer and maintain. Developing systems that can learn and build their own rules can significantly accelerate organizational growth.

Microsoft's Azure cloud platform offers an array of discrete and granular services in the AI and Machine Learning domain that allow AI developers and data engineers to avoid reinventing wheels and to consume reusable APIs. These APIs allow AI developers to build systems which display the type of intelligent behavior discussed above.

If you want to dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and the Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI, and Cognitive Toolkit, visit AI School.

Read the original post:

What is Artificial Intelligence? | Azure Blog and Updates ...

MS in Artificial Intelligence | Artificial Intelligence

The Master of Science in Artificial Intelligence (M.S.A.I.) degree program is offered by the interdisciplinary Institute for Artificial Intelligence. Areas of specialization include automated reasoning, cognitive modeling, neural networks, genetic algorithms, expert databases, expert systems, knowledge representation, logic programming, and natural-language processing. Microelectronics and robotics were added in 2000.

Admission is possible in every semester, but Fall admission is preferable. Applicants seeking financial assistance should apply before February 15, but assistantships are sometimes awarded at other times. Applicants must submit a completed application form, three letters of recommendation, official transcripts, Graduate Record Examinations (GRE) scores, and a sample of their scholarly writing on any subject (in English). Only the General Test of the GRE is required for the M.S.A.I. program. International students must also submit results of the TOEFL and a statement of financial support. Applications must be completed at least six weeks before the proposed registration date.

No specific undergraduate major is required for admission, but admission is competitive. We are looking for students with a strong preparation in one or more relevant background areas (psychology, philosophy, linguistics, computer science, logic, engineering, or the like), a demonstrated ability to handle all types of academic work (from humanities to mathematics), and an excellent command of written and spoken English.

For more information regarding applications, please visit the MS Program Admissions and Information for International Students pages.

Requirements for the M.S.A.I. degree include: interdisciplinary foundational courses in computer science, logic, philosophy, psychology, and linguistics; courses and seminars in artificial intelligence programming techniques, computational intelligence, logic and logic programming, natural-language processing, and knowledge-based systems; and a thesis. There is a final examination covering the program of study and a defense of the written thesis.

For further information on course and thesis requirements, please visit the Course & Thesis Requirements page.

The Artificial Intelligence Laboratories serve as focal points for the M.S.A.I. program. AI students have regular access to PCs running current Windows technology, and a wireless network is available for students with laptops and other devices. The Institute also features facilities for robotics experimentation and a microelectronics lab. The University of Georgia libraries began building strong AI and computer science collections long before the inception of these degree programs. Relevant books and journals are located in the Main and Science libraries (the Science library is conveniently located in the same building complex as the Institute for Artificial Intelligence and the Computer Science Department). The University's library holdings total more than 3 million volumes.

Graduate assistantships, which include a monthly stipend and remission of tuition, are available. Assistantships require approximately 13-15 hours of work per week and permit the holder to carry a full academic program of graduate work. In addition, graduate assistants pay a matriculation fee and all student fees per semester.

For an up-to-date description of tuition and fees for both in-state and out-of-state students, please visit the site of the Bursar's Office.

On-campus housing, including a full range of University-owned married student housing, is available to students. Student fees include use of a campus-wide bus system and some city bus routes. More information regarding housing is available here: University of Georgia Housing.

The University of Georgia has an enrollment of over 34,000, including approximately 8,000 graduate students. Students are enrolled from all 50 states and more than 100 countries. Currently, there is a very diverse group of students in the AI program. Women and international students are well represented.

Additional information about the Institute and the MSAI program, including policies for current students, can be found in the AI Student Handbook.

View post:

MS in Artificial Intelligence | Artificial Intelligence

What Are the Advantages of Artificial Intelligence …

The general benefit of artificial intelligence, or AI, is that it replicates decisions and actions of humans without human shortcomings, such as fatigue, emotion and limited time. Machines driven by AI technology are able to perform consistent, repetitious actions without getting tired. It is also easier for companies to get consistent performance across multiple AI machines than it is across multiple human workers.

Companies incorporate AI into production and service-based processes. In a manufacturing business, AI machines can churn out a high, consistent level of production without needing a break or taking time off like people. This efficiency improves the cost-basis and earning potential for many companies. Mobile devices use intuitive, voice-activated AI applications to offer users assistance in completing tasks. For example, users of certain mobile phones can ask for directions or information and receive a vocal response.

The premise of AI is that it models human intelligence. Though imperfections exist, there is often a benefit to AI machines making decisions that humans struggle with. AI machines are often programmed to follow statistical models in making decisions. Humans may struggle with personal implications and emotions when making similar decisions. The famous scientist Stephen Hawking, despite suffering from a motor neuron disease, used AI to communicate through a machine.

Read the rest here:

What Are the Advantages of Artificial Intelligence ...

AI Tutorial | Artificial Intelligence Tutorial – Javatpoint

The Artificial Intelligence tutorial provides an introduction to AI which will help you understand the concepts behind Artificial Intelligence. In this tutorial, we have also discussed various popular topics such as the history of AI, applications of AI, deep learning, machine learning, natural language processing, reinforcement learning, Q-learning, intelligent agents, and various search algorithms.

Our AI tutorial starts from an elementary level, so you can easily follow the complete tutorial from basic concepts through to high-level ones.

In today's world, technology is growing very fast, and we come into contact with new technologies every day.

Here, one of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines. Artificial Intelligence is now all around us, working in a variety of subfields ranging from general to specific: self-driving cars, playing chess, proving theorems, playing music, painting, and more.

AI is one of the most fascinating and universal fields of computer science, with great scope in the future. AI aims to make a machine work as a human does.

The term Artificial Intelligence is composed of two words, "Artificial" and "Intelligence", where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."

So, we can define AI as:

"Artificial Intelligence exists when a machine can have human-like skills such as learning, reasoning, and solving problems."

With Artificial Intelligence, you do not need to preprogram a machine for every task; instead, you can create a machine with programmed algorithms that can work with its own intelligence. That is the power of AI.

AI is not really a new idea: some say that, according to Greek myth, there were mechanical men in early days which could work and behave like humans.

Before learning about Artificial Intelligence, we should know why AI is important and why we should learn it. Following are some main reasons to learn about AI:

Following are the main goals of Artificial Intelligence:

Artificial Intelligence is not just a part of computer science; it is vast and requires contributions from many other fields. To create AI, we should first understand how intelligence is composed: intelligence is an intangible faculty of our brain that combines reasoning, learning, problem-solving, perception, language understanding, and more.

To achieve these capabilities in a machine or software system, Artificial Intelligence draws on the following disciplines:

Following are some main advantages of Artificial Intelligence:

Every technology has some disadvantages, and the same goes for Artificial Intelligence. However advantageous the technology, it has drawbacks which we need to keep in mind while creating an AI system. Following are the disadvantages of AI:

Before learning about Artificial Intelligence, you should have fundamental knowledge of the following so that you can understand the concepts easily:

Our AI tutorial is designed specifically for beginners but also includes some high-level concepts for professionals.

We assure you that you will not find any difficulty while learning our AI tutorial. But if there is any mistake, kindly report the problem via the contact form.

Visit link:

AI Tutorial | Artificial Intelligence Tutorial - Javatpoint

It's Called Artificial Intelligence, but What Is Intelligence? – WIRED

Elizabeth Spelke, a cognitive psychologist at Harvard, has spent her career testing the world's most sophisticated learning system: the mind of a baby.

Gurgling infants might seem like no match for artificial intelligence. They are terrible at labeling images, hopeless at mining text, and awful at videogames. Then again, babies can do things beyond the reach of any AI. By just a few months old, they've begun to grasp the foundations of language, such as grammar. They've started to understand how the physical world works and how to adapt to unfamiliar situations.

Yet even experts like Spelke don't understand precisely how babies (or adults, for that matter) learn. That gap points to a puzzle at the heart of modern artificial intelligence: We're not sure what to aim for.

Consider one of the most impressive examples of AI, AlphaZero, a program that plays board games with superhuman skill. After playing thousands of games against itself at hyperspeed, and learning from winning positions, AlphaZero independently discovered several famous chess strategies and even invented new ones. It certainly seems like a machine eclipsing human cognitive abilities. But AlphaZero needs to play millions more games than a person during practice to learn a game. Most tellingly, it cannot take what it has learned from the game and apply it to another area.

To some members of the AI priesthood, that calls for a new approach. "What makes human intelligence special is its adaptability, its power to generalize to never-seen-before situations," says François Chollet, a well-known AI engineer and the creator of Keras, a widely used framework for deep learning. In a November research paper, he argued that it's misguided to measure machine intelligence solely according to its skills at specific tasks. "Humans don't start out with skills; they start out with a broad ability to acquire new skills," he says. "What a strong human chess player is demonstrating isn't the ability to play chess per se, but the potential to acquire any task of a similar difficulty. That's a very different capability."

Chollet posed a set of problems designed to test an AI program's ability to learn in a more generalized way. Each problem requires arranging colored squares on a grid based on just a few prior examples. It's not hard for a person. But modern machine-learning programs, trained on huge amounts of data, cannot learn from so few examples. As of late April, more than 650 teams had signed up to tackle the challenge; the best AI systems were getting about 12 percent correct.

It isn't yet clear how humans solve these problems, but Spelke's work offers a few clues. For one thing, it suggests that humans are born with an innate ability to quickly learn certain things, like what a smile means or what happens when you drop something. It also suggests we learn a lot from each other. One recent experiment showed that 3-month-olds appear puzzled when someone grabs a ball in an inefficient way, suggesting that they already appreciate that people cause changes in their environment. Even the most sophisticated and powerful AI systems on the market can't grasp such concepts. A self-driving car, for instance, cannot intuit from common sense what will happen if a truck spills its load.

Josh Tenenbaum, a professor in MIT's Center for Brains, Minds & Machines, works closely with Spelke and uses insights from cognitive science as inspiration for his programs. He says much of modern AI misses the bigger picture, likening it to a Victorian-era satire about a two-dimensional world inhabited by simple geometrical people. "We're sort of exploring Flatland, only some dimensions of basic intelligence," he says. Tenenbaum believes that, just as evolution has given the human brain certain capabilities, AI programs will need a basic understanding of physics and psychology in order to acquire and use knowledge as efficiently as a baby. And to apply this knowledge to new situations, he says, they'll need to learn in new ways: for example, by drawing causal inferences rather than simply finding patterns. "At some point, you know, if you're intelligent, you realize maybe there's something else out there," he says.

Original post:

It's Called Artificial Intelligence, but What Is Intelligence? - WIRED

Powering the Artificial Intelligence Revolution – HPCwire

It has been observed by many that we are at the dawn of the next industrial revolution: The Artificial Intelligence (AI) revolution. The benefits delivered by this intelligence revolution will be many: in medicine, improved diagnostics and precision treatment, better weather forecasting, and self-driving vehicles to name a few. However, one of the costs of this revolution is going to be increased electrical consumption by the data centers that will power it. Data center power usage is projected to double over the next 10 years and is on track to consume 11% of worldwide electricity by 2030. Beyond AI adoption, other drivers of this trend are the movement to the cloud and increased power usage of CPUs, GPUs and other server components, which are becoming more powerful and smart.

AI's two basic elements, training and inference, each consume power differently. Training involves computationally intensive matrix operations over very large data sets, often measured in terabytes to petabytes. Examples of these data sets can range from online sales data to captured video feeds to ultra-high-resolution images of tumors. AI inference is computationally much lighter in nature, but can run indefinitely as a service, which draws a lot of power when hit with a large number of requests. Think of a facial recognition application for security in an office building. It runs continuously but would stress the compute and storage resources at 8:00 am and again at 5:00 pm as people come and go to work.

However, getting a good handle on power usage in AI is difficult. Energy consumption is not part of the standard metrics tracked by job schedulers, and while tracking it can be set up, doing so is complicated and vendor-dependent. This means that most users are flying blind when it comes to energy usage.

To map out AI energy requirements, Dr. Miro Hodak led a team of Lenovo engineers and researchers that looked at the energy cost of an often-used AI workload. The study, "Towards Power Efficiency in Deep Learning on Data Center Hardware" (registration required), was recently presented at the 2019 IEEE International Conference on Big Data and published in the conference proceedings. This work looks at the energy cost of training the ResNet50 neural net with the ImageNet dataset of more than 1.3 million images on a Lenovo ThinkSystem SR670 server equipped with 4 Nvidia V100 GPUs. AC data from the server's power supply indicates that 6.3 kWh of energy, enough to power an average home for six hours, is needed to fully train this AI model. In practice, trainings like these are repeated multiple times to tune the resulting models, resulting in energy costs that are actually several times higher.

The study breaks down the total energy into its components as shown in Fig. 1. As expected, the bulk of the energy is consumed by the GPUs. However, given that the GPUs handle all of the computationally intensive parts, their 65% share of energy is lower than expected. This shows that simplistic estimates of AI energy costs using only GPU power are inaccurate and miss significant contributions from the rest of the system. Besides the GPUs, CPU and memory account for almost a quarter of the energy use, and 9% of energy is spent on AC-to-DC power conversion (in line with the 80 PLUS Platinum certification of the SR670's PSUs).

The study also investigated ways to decrease energy cost by system tuning without changing the AI workload. We found that two types of system settings make the most difference: UEFI settings and GPU OS-level settings. ThinkSystem servers provide four UEFI running modes: Favor Performance, Favor Energy, Maximum Performance and Minimum Power. As shown in Table 1, the last option is the best and provides up to 5% energy savings. On the GPU side, 16% of energy can be saved by capping V100 frequency to 1005 MHz, as shown in Figure 2. Taken together, our study showed that system tunings can decrease energy usage by 22% while increasing runtime by 14%. Alternatively, if this runtime cost is unacceptable, a second set of tunings, which saves 18% of energy while increasing runtime by only 4%, was also identified. This demonstrates that there is a lot of room on the system side for improvements in energy efficiency.
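
Since energy is average power multiplied by runtime, the study's headline figures also imply how much average system power the tunings shave off. A quick sketch of that arithmetic in R; the 22%/14% and 18%/4% pairs come from the study, while the derived power reductions are back-of-the-envelope algebra rather than numbers reported in the paper:

    # Energy = average power x runtime, so the relative power draw is the
    # energy ratio divided by the runtime ratio.
    power_cut <- function(energy_saving, runtime_increase) {
      1 - (1 - energy_saving) / (1 + runtime_increase)
    }
    round(power_cut(0.22, 0.14), 2)  # ~0.32: the first tuning set cuts average power by about a third
    round(power_cut(0.18, 0.04), 2)  # ~0.21: the second set cuts it by about a fifth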

Energy usage in HPC has been a visible challenge for over a decade, and Lenovo has long been a leader in energy-efficient computing, whether through our innovative Neptune liquid-cooled system designs or through Energy-Aware Runtime (EAR) software, a technology developed in collaboration with the Barcelona Supercomputing Center (BSC). EAR analyzes user applications to find the optimum CPU frequencies at which to run them. For now, EAR is CPU-only, but investigations into extending it to GPUs are ongoing. The results of our study show that this is a very promising way to bring energy savings to both HPC and AI.

Enterprises are not used to grappling with the large power profiles that AI requires in the way HPC users have become accustomed to. Scaling out these AI solutions will only make that problem more acute. The industry is beginning to respond. MLPerf, currently the leading collaborative project for AI performance evaluation, is preparing new specifications for power efficiency. For now, it is limited to inference workloads and will most likely be voluntary, but it represents a step in the right direction.

So, in order to enjoy those precise weather forecasts and self-driven cars, we'll need to solve the power challenges they create. Today, as the power profile of CPUs and GPUs surges ever upward, enterprise customers face a choice among three factors: system density (the number of servers in a rack), performance and energy efficiency. Indeed, many enterprises are accustomed to filling up rack after rack with low-cost, adequately performing systems that have limited to no impact on the electric bill. Unfortunately, until the power dilemma is solved, those users must be content with choosing only two of those three factors.

Here is the original post:

Powering the Artificial Intelligence Revolution - HPCwire

Artificial intelligence is struggling to cope with how the world has changed – ZDNet

From our attitude towards work to our grasp of what two metres looks like, the coronavirus pandemic has made us rethink how we see the world. But while we've found it hard to adjust to the new reality, it's been even harder for the narrowly designed artificial intelligence models that have been created to help organisations make decisions. Based on data that described the world before the crisis, these won't be making correct predictions anymore, pointing to a fundamental problem in the way AI is being designed.

David Cox, IBM director of the MIT-IBM Watson AI Lab, explains that faulty AI is particularly problematic in the case of so-called black box predictive models: those algorithms which work in ways that are not visible, or understandable, to the user. "It's very dangerous," Cox says, "if you don't understand what's going on internally within a model in which you shovel data on one end to get a result on the other end. The model is supposed to embody the structure of the world, but there is no guarantee that it will keep working if the world changes."

The COVID-19 crisis, according to Cox, has only once more highlighted what AI experts have argued for decades: that algorithms should be more explainable.

For example, if you were building a computer program that was a complete black box, aimed at predicting what the stock market would be like based on past data, there is no guarantee it's going to continue to produce good predictions in the current coronavirus crisis, he argues.

"What you actually need to do is build a broader model of the economy that acknowledges supply and demand, understands supply chains, and incorporates that knowledge, which is closer to something that an economist would do. Then you can reason about the situation more transparently," he says.

"Part of the reason why those models are hard to trust with narrow AIs is because they don't have that structure. If they did it would be much easier for a model to provide an explanation for why they are making decisions. These models are experiencing challenges now. COVID-19 has just made it very clear why that structure is important," he warns.

It's important not only because the technology would perform better and gain in reliability, but also because businesses would be far less reluctant to adopt AI if they trusted the tool more. Cox pulls out his own statistics on the matter: while 95% of companies believe that AI is key to their competitive advantage, only 5% say they've extensively implemented the technology.

While the numbers differ from survey to survey, the conclusion has been the same for some time now: there remains a significant gap between the promise of AI and its reality for businesses. And part of the reason that industry is struggling to deploy the technology boils down to a lack of understanding of AI. If you build a great algorithm but can't explain how it works, you can't expect workers to incorporate the new tool in their business flow. "If people don't understand or trust those tools, it's going to be a lost cause," says Cox.

Explaining AI is one of the main focuses of Cox's work. The MIT-IBM Watson AI Lab, which he co-directs, comprises 100 AI scientists across the US university and IBM Research, and is now in its third year of operation. The Lab's motto, which comes up first thing on its website, is self-explanatory: "AI science for real-world impact".

Back in 2017, IBM announced a $240 million investment over ten years to support research by the firm's own researchers, as well as MIT's, in the newly-founded Watson AI Lab. From the start, the collaboration's goal has had a strong industry focus, with an idea to unlock the potential of AI for "business and society". The lab's focus is not on "narrow AI", which is the technology in its limited format that most organizations know today; instead the researchers should be striving for "broad AI". Broad AI can learn efficiently and flexibly, across multiple tasks and data streams, and ultimately has huge potential for businesses. "Broad AI is next," is the Lab's promise.

The only way to achieve broad AI, explains Cox, is to bridge research and industry. The reason that AI, like many innovations, remains stubbornly stuck in the lab is that the academics behind the technology struggle to identify and respond to the real-world needs of businesses. Incentives are misaligned; the result is that organizations see the potential of the tool but struggle to use it. AI exists and it is effective, but it is still not designed for business.

Before he joined IBM, Cox spent ten years as a professor at Harvard University. "Coming from academia and now working for IBM, my perspective on what's important has completely changed," says the researcher. "It has given me a much clearer picture of what's missing."

The partnership between IBM and MIT is a big shift from the traditional way that academia functions. "I'd rather be there in the trenches, developing those technologies directly with the academics, so that we can immediately take it back home and integrate it into our products," says Cox. "It dramatically accelerates the process of getting innovation into businesses."

IBM has now expanded the collaboration to some of its customers through a member program, which means that researchers in the Lab benefit from the input of players from different industries. From Samsung Electronics to Boston Scientific to banking company Wells Fargo, companies in various fields and locations can explain their needs and the challenges they encounter to the academics working in the Watson AI Lab. In turn, the members can take the intellectual property generated in the Lab and run with it even before it becomes an IBM product.

Cox is adamant, however, that the MIT-IBM Watson AI Lab was also built with blue-sky research compatibility in mind. The researchers in the lab are working on fundamental, cross-industry problems that need to be solved in order to make AI more applicable. "Our job isn't to solve customer problems," says Cox. "That's not the right use for the tool that is MIT. There are brilliant people in MIT that can have a hugely disruptive impact with their ideas, and we want to use that to resolve questions like: why is it that AI is so hard to use or impact in business?"

Explainability of AI is only one area of focus. There is also AutoAI, for example, which consists of using AI to build AI models and would let business leaders engage with the technology without having to hire expensive, highly skilled engineers and software developers. Then there is the issue of data labeling: according to Cox, up to 90% of a data science project consists of meticulously collecting, labeling and curating the data. "Only 10% of the effort is the fancy machine-learning stuff," he says. "That's insane. It's a huge inhibitor to people using AI, let alone to benefiting from it."

Doing more with less data, in fact, was one of the key features of the Lab's latest research project, dubbed Clevrer, in which an algorithm can recognize objects and reason about their behaviors in physical events from videos. This model is a neuro-symbolic one, meaning that the AI can learn unsupervised, by looking at content and pairing it with questions and answers; ultimately, it requires far less training data and manual annotation.

All of these issues have been encountered one way or another not only by IBM, but also by the companies that signed up to the Lab's member program. "Those problems just appear again and again," says Cox, whether a member operates in electronics, med-tech or banking. Hearing similar feedback from all areas of business only emboldened the Lab's researchers to double down on the problems that mattered.

The Lab has about 50 projects running at any given time, carefully selected every year by both MIT and IBM on the basis that they should be both intellectually interesting and effectively tackle the problem of broad AI. Cox maintains that within this portfolio, some ideas are very ambitious and can even border on blue-sky research; they are balanced, on the other hand, by other projects that are more likely to provide near-term value.

Although more prosaic than the idea of preserving purely blue-sky research, putting industry and academia in the same boat might indeed be the most pragmatic solution in accelerating the adoption of innovation and making sure AI delivers on its promise.

See the article here:

Artificial intelligence is struggling to cope with how the world has changed - ZDNet

An AI future set to take over post-Covid world – The Indian Express

Updated: May 18, 2020 10:03:39 pm

Written by Seuj Saikia

Rabindranath Tagore once said, "Faith is the bird that feels the light when the dawn is still dark." The darkness that looms over the world at this moment is the curse of the COVID-19 pandemic, while the bird of human freedom finds itself caged under lockdown, unable to fly. Enthused by the beacon of hope, human beings will soon start picking up the pieces of a shared future for humanity, but perhaps, it will only be to find a new, unfamiliar world order with far-reaching consequences for us that transcend society, politics and economy.

Crucially, a technology that had till now been crawling, or at best walking slowly, will now start sprinting. In fact, a paradigm shift in the economic relationships of mankind is going to be witnessed in the form of accelerated adoption of artificial intelligence (AI) technologies in the modes of production of goods and services. A fourth Industrial Revolution, as the AI era is referred to, had already been experienced before the pandemic, with the backward linkages of cloud computing and big data. However, the imperative of continued social distancing has made an AI-driven economic world order today's reality.

Setting aside the oft-discussed prophecies of the robo-human tussle, even if we simply focus on the present pandemic context, we will see millions of students accessing their education through ed-tech apps, mothers buying groceries on apps and making cashless payments through fintech platforms, and employees attending video conferences on relevant apps as well. None of this is a new phenomenon, but the scale at which it is happening is unparalleled in human history. The alternate universe of AI, machine learning, cloud computing, big data, 5G and automation is getting closer to us every day. And so is a clash between humans (labour) and robots (plant and machinery).

This clash might very well be fuelled by automation. Any Luddite will recall the misadventures of the 19th-century textile mills. However, the automation that we are talking about now is founded on the citadel of artificially intelligent robots. Eventually, this might merge the two factors of production into one, thereby making labour irrelevant. As factories around the world start to reboot post COVID-19, there will be hard realities to contend with: Shortage of migrant labourers in the entire gamut of the supply chain, variations of social distancing induced by the fears of a second virus wave and the overall health concerns of humans at work. All this combined could end up sparking the fire of automation, resulting in subsequent job losses and possible reallocation/reskilling of human resources.

In this context, a potential counter to such employment upheavals is the idea of cash transfers to the population in the form of Universal Basic Income (UBI). As drastic changes in the production processes lead to a more cost-effective and efficient modern industrial landscape, the surplus revenue that is subsequently earned by the state would act as a major source of funds required by the government to run UBI. Variants of basic income transfer schemes have existed for a long time and have been deployed to unprecedented levels during this pandemic. Keynesian macroeconomic measures are increasingly being seen as the antidote to the bedridden economies around the world, suffering from near-recession due to the sudden ban on economic activities. Governments would have to be innovative enough to pump liquidity into the system to boost demand without harming the fiscal discipline. But what separates UBI from all these is its universality, while others remain targeted.

This new economic world order would widen the cracks of existing geopolitical fault lines, particularly between the US and China, two behemoths of the AI realm. Datanomics has taken such a high place in the valuation spectrum that the most valued companies of the world are tech giants like Apple, Google, Facebook, Alibaba, Tencent etc. Interestingly, they are also the ones at the forefront of AI innovations. Data has become the new oil. What transports data are not pipelines but fibre optic cables and associated communication technologies. The ongoing fight over the introduction of 5G technology, central to automation and remote command-control architecture, might see a new phase of hostility, especially after the controversial role played by the secretive Chinese state in the COVID-19 crisis.

The issues affecting common citizens (privacy, national security, rising inequality) will take on newer dimensions. It is pertinent to mention that AI is not all bad: as an imperative change that human civilisation is going to experience, it has its advantages. Take the COVID-19 crisis as an example. Amidst all the chaos, big data has enabled countries to do contact tracing effectively, and 3D printers produced the much-needed PPEs at local levels in the absence of the usual supply chains. That is why the World Economic Forum (WEF) argues that agility, scalability and automation will be the buzzwords for this new era of business, and those who have these capabilities will be the winners.

But there are losers in this, too. In this case, the developing world would be the biggest loser. The problem of inequality, which has already reached epic proportions, could be further worsened in an AI-driven economic order. The need of the hour is to prepare ourselves and develop strategies that would mitigate such risks and avert any impending humanitarian disaster. To do so, in the words of computer scientist and entrepreneur Kai-Fu Lee, the author of AI Superpowers, we have to give centrality to our heart and focus on the care economy which is largely unaccounted for in the national narrative.

(The writer is assistant commissioner of income tax, IRS. Views are personal)

See the rest here:

An AI future set to take over post-Covid world - The Indian Express

A New Way To Think About Artificial Intelligence With This ETF – MarketWatch

Among the myriad thematic exchange traded funds on offer, artificial intelligence products are numerous, and some are catching on with investors.

Count the ROBO Global Artificial Intelligence ETF (THNQ) as the latest member of the artificial intelligence ETF fray. THNQ, which debuted earlier this week, comes from a good gene pool, as its stablemate, the Robo Global Robotics and Automation Index ETF (ROBO), was the original and remains one of the largest robotics ETFs.

That's relevant because artificial intelligence and robotics are themes that frequently intersect with each other. Home to 72 stocks, the new THNQ follows the ROBO Global Artificial Intelligence Index.

Adding to the case for A.I., even with a new product such as THNQ, is that the technology has hundreds, if not thousands, of applications supporting its growth.

Companies developing AV (autonomous vehicle) technology are mainly relying on machine learning or deep learning, or both, according to IHS Markit. A major difference between machine learning and deep learning is that, while deep learning can automatically discover the features to be used for classification in unsupervised exercises, machine learning requires these features to be labeled manually with more rigid rulesets. In contrast to machine learning, deep learning requires significant computing power and training data to deliver more accurate results.

Like its stablemate ROBO, THNQ offers wide reach, with exposure to 11 sub-groups. Those include big data, cloud computing, cognitive computing, e-commerce and other consumer angles, and factory automation, among others. Of course, semiconductors are part of the THNQ fold, too.

"The exploding use of AI is ushering in a new era of semiconductor architectures and computing platforms that can handle the accelerated processing requirements of an AI-driven world," according to ROBO Global. "To tackle the challenge, semiconductor companies are creating new, more advanced AI chip engines using a whole new range of materials, equipment, and design methodologies."

While THNQ is a new ETF, investors may do well not to focus on that, and instead on the fact that the AI boom is in its nascent stages.

"Historically, the stock market tends to under-appreciate the scale of opportunity enjoyed by leading providers of new technologies during this phase of development," notes THNQ's issuer. "This fact creates a remarkable opportunity for investors who understand the scope of the AI revolution, and who take action at a time when AI is disrupting industry as we know it and forcing us to rethink the world around us."

The new ETF charges 0.68% per year, or $68 on a $10,000 investment. That's in line with rival funds.

Visit link:

A New Way To Think About Artificial Intelligence With This ETF - MarketWatch

Artificial intelligence-based imaging reconstruction may lead to incorrect diagnoses, experts caution – Radiology Business

Artificial intelligence-based techniques, used to reconstruct medical images, may actually be leading to incorrect diagnoses.

That's according to the results of a new investigation led by experts at the University of Cambridge. Scientists there devised a series of tests to assess such imaging reconstruction and discovered numerous artefacts and other errors, according to their study, published May 11 in the Proceedings of the National Academy of Sciences.

This issue seemed to persist across different types of AI, they noted, and may not be easily remedied.

"There's been a lot of enthusiasm about AI in medical imaging, and it may well have the potential to revolutionize modern medicine; however, there are potential pitfalls that must not be ignored," co-author Anders Hansen, PhD, from Cambridge's Department of Applied Mathematics and Theoretical Physics, said in a statement. "We've found that AI techniques are highly unstable in medical imaging, so that small changes in the input may result in big changes in the output."

To reach their conclusions, Hansen and co-investigators from Norway, Portugal, Canada and the United Kingdom used several assessments to pinpoint flaws in AI algorithms. They targeted CT, MR and nuclear magnetic resonance imaging, and tested them based on instabilities tied to movement, small structural changes, and those related to the number of samples.

See original here:

Artificial intelligence-based imaging reconstruction may lead to incorrect diagnoses, experts caution - Radiology Business