Is Artificial intelligence the Future of IT Help Desk? – Analytics Insight

Artificial intelligence is one of the biggest growth markets in technology today. In fact, AI is rapidly enabling major changes across many fields within technology, and the help desk is no stranger to the idea that there is room for improvement in this niche.

Businesses use help desk software to manage many different types of information. From customers' questions and concerns to employees' computer repair requests, a help desk is a solution for organizing, responding to, and gathering results from each individual ticket that is completed.

If you utilize a help desk for your own business, then you may have wondered how help desk could be changing in the near future. You might even be surprised that one of the many ways help desk could change is by utilizing AI technology to improve its accuracy and dependability.

One of the biggest areas of improvement that AI brings to this corner of technology is the use of bots to chat with customers about their needs and any questions that arise. Using AI, a business can employ virtual chatbots to troubleshoot the concerns of people visiting the help desk; SysAid is one business already trialling AI for help desk concerns. This can greatly reduce the number of tickets that help desk employees go through on a daily basis.

Although you can give users the opportunity to assign their ticket a certain priority rank, you can also use AI to help determine the order in which tickets should be reviewed. This function would also make the help desk more intuitive for the user, because it can auto-populate various options.

There are a number of ways that AI tools can help to build insight into the information that a help desk might find useful. First, using AI tools, a help desk can populate responses to the problem that a person is reporting. This can help to reduce the number of tickets that the help desk has to respond to on a daily basis.
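As a rough illustration of auto-populating responses, the matching could be as simple as keyword lookup against a library of canned answers. This is a minimal sketch, not any vendor's actual implementation; the keywords and responses are invented.

```python
# Hypothetical sketch: suggest a canned response for a new help desk
# ticket by matching keywords against past resolutions. All keywords
# and response text here are invented for illustration.

CANNED = {
    "password": "To reset your password, visit the self-service portal.",
    "printer": "Try restarting the print spooler before filing a ticket.",
    "vpn": "Check that your VPN client is up to date.",
}

def suggest_response(ticket_text):
    """Return a canned response if a known keyword appears, else None."""
    words = set(ticket_text.lower().split())
    for keyword, response in CANNED.items():
        if keyword in words:
            return response
    return None  # no match: fall through to a human agent
```

A real deployment would use trained text classification rather than exact keyword matching, but the flow is the same: match first, escalate to a person only when nothing matches.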

AI can also help to formulate the most popular types of insight that are requested through this tool. Tracking this type of data can help the tech team to understand where there are weaknesses in their systems in use. Further, this same type of data tracking can help the tech team understand their weaknesses in response to certain issues as well. Using this data, tech teams can enhance their own performance to the questions that are posed through help desk technology.

Artificial intelligence is going to change the way many technology tools help us in the future. These tools automate processes and help the tech team see their own value in a new light. Further, AI can automate the system so that the tech team spends less time on routine help desk requests and more on the bigger issues at hand.


Experts Are Divided Over Future Of Artificial Intelligence But Agree On Its Growing Impact – Outlook India

As humans, we love contrast. It is no wonder that experts, while defining the impact of Artificial Intelligence (AI) on the future of humankind, are at two ends of the spectrum. One is a happy scenario of human beings and artificially intelligent machines coexisting in perfect harmony. Another is an Orwellian dystopia of AI dominance over human intelligence and civilization. While there may be disagreements about the future, everyone agrees on the impact and growing ubiquity of AI.

Let us look at the potential impact of AI on our society. Algorithms have generally been successful in predicting almost all natural calamities (except, of course, earthquakes) with reasonable accuracy. Since we started using AI, the share of global deaths from natural disasters has fallen from 0.47% of all world deaths in 2010 to 0.02% in 2017. AI has worked wonders in healthcare by increasing the accuracy and timeliness of disease detection. Using a combination of big data and machine learning algorithms, we can better predict machine part failures. The stability of electricity grids, metal production and commodity prices are predicted with astonishing precision.

Enterprises were quick to jump on the AI bandwagon. The big four tech companies appear to have made the most of it. In the midst of the pandemic, global news media on June 9 reported all-time-high share prices for these companies, with a combined market capitalisation of almost $5 trillion. These companies are changing the way we live, do business and relax. We navigate a lot more smoothly now with maps on our phones and do not need a translator to understand another language. We have super-efficient digital assistants to manage our schedules intelligently, and we can buy essential items from our phones.

Consumer packaged goods companies have started using big data and machine learning to determine which retail stores should get which commodity, and at what price. Many manufacturing organisations worldwide have started using predictive analytics to analyse their planning efficiency. Using AI techniques, logistics and transportation companies have begun significant route optimisation, reducing costs and delivering to ports faster. Banks, stock markets and insurance companies use data, machine learning and natural language processing techniques to recommend precisely the right stocks and other financial products to their customers. The transformative aspects of AI seem to go beyond delivering powerful use cases and outcomes; AI seems to be changing the model of business itself. Organisations are no longer measured by the number of employees, assets and real estate they hold. The classic tale of David defeating Goliath is not a fable anymore. AI seems to have the potential to take a powerful business opportunity, analyse a lot of data with powerful algorithms, and present the outcome through multiple channels to bring transformation right to the doorsteps (or screens) of consumers. Possibly, that is where it gets a bit worrisome.

In his 2018 bestseller Factfulness, Dr. Hans Rosling points out five global risks that should worry the human race. He could not have been more prophetic. The first of them was a global pandemic; the others were financial collapse, World War III, climate change and extreme poverty. In the middle of a significant disruption, AI presents a genuinely disturbing proposition. Will enterprises bring in more automation to replace severely depleted job markets? Could the potential of AI create a situation where powerful corporations and states, armed with algorithms and the processing capability of big data, build an unassailable lead and absolute power over society? Would we be left with the intent and resources to focus on the most important challenge of the post-COVID world: more people than ever in a state of extreme hunger?

German philosopher Arthur Schopenhauer wrote, "Talent hits a target that no one else can hit; genius hits a target no one else can see." Human geniuses have limited time to shape the future as the clock ticks on.

(The author is partner, Deloitte India. Views expressed are personal.)


"Artificial Intelligence Will Make Medicine Better in the Long Run" – Biophotonics.World

Image source: Leibniz IPHT

By: Sven Döring

Leibniz IPHT is increasingly focusing on artificial intelligence and learning systems. Thomas Bocklitz is heading the new research department "Photonic Data Science". We asked him how AI could help shape the future of diagnostics.

What new possibilities does Photonic Data Science open up for diagnostics?

Photonic Data Science is a potpourri that combines mathematical and statistical methods with algorithms and domain knowledge to translate measurement data into useful information. We usually translate photonic data into biomedical, for example diagnostic, information. By letting the computer do the translating, robust diagnostic information can be extracted, and tiny details in complex data can be made useful for diagnostics. This opens up new possibilities for diagnostics.

Artificial intelligence (AI) then helps to evaluate this data. Which technologies researched at the institute are based on AI?

In the laser-based rapid test for infectious pathogens, machine learning methods and algorithms for data pre-treatment are used to translate Raman spectra of bacteria into a resistance prediction, i.e. to predict pathogens and antibiotic resistances on the basis of the spectroscopically recorded data. For the compact microscope Medicars, we use deep learning and machine learning techniques to translate multimodal image data into a tissue prediction for the detection of tumor margins. In smartphone microscopy, which is being researched by Rainer Heintzmann's team, image enhancement is achieved by means of deep learning procedures.
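To make the spectra-to-prediction idea concrete: classifying a measured spectrum can be as simple as finding the most similar labelled training spectrum. The sketch below uses invented three-point "spectra" and a nearest-neighbour rule; the institute's actual pipeline involves data pre-treatment and far richer models.

```python
# Toy sketch (invented data): label a new "Raman spectrum" by its
# nearest labelled training spectrum, using Euclidean distance.
import math

# Tiny, made-up training set: (spectrum, resistance label)
train = [
    ([0.1, 0.9, 0.3], "susceptible"),
    ([0.8, 0.2, 0.7], "resistant"),
]

def predict(spectrum):
    """Return the label of the closest training spectrum."""
    _, label = min(train, key=lambda t: math.dist(t[0], spectrum))
    return label
```

Real spectra have hundreds of wavenumber channels and need baseline correction and normalisation before any such comparison is meaningful, but the principle — translate a spectrum into a label by learned similarity — is the same.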

Where do the data sets come from that are currently mainly used? Can they be applied equally to all patients?

The data sets are generated within clinical studies, which we supervise from the beginning. The studies are still too small to exclude a gender bias, but we are working on the experimental design so that there is no gender bias in the training data set and we hope that the models will not generate any bias.

Does the automated analysis of medical control data also carry a risk? A loss of control?

Of course, every technology has risks, although these are manageable here. Artificial intelligence or machine learning processes only work well if the new test data is similar to the training data. We try to tackle this problem by creating the necessary similarity through standardization and model transfer in order to improve the predictions. There would be a loss of control if the models were applied fully automatically. But in the medium term the models will only represent a second opinion, so there will be no loss of control.

Can physicians improve the learning systems? Is the procedure of AI applications comprehensible for them?

Physicians can increase the database or reduce the uncertainty of the metadata, i.e. the labels, by pooling or voting, which leads to better models. The traceability of AI models is a major topic in current machine learning research (keyword: "explainable AI"). The aim is to decipher these models in order to make it clearly understandable how machine learning methods and deep learning systems achieve their results.

Can AI be perfected to the point where it can eventually make better diagnoses than a human?

I think so, if the data is highly standardized. Another challenge is to demonstrate that improvement. This requires quite long clinical trials and is ethically problematic.

Could AI ever replace doctors instead of just supporting them? For example, could operations be performed by AI-controlled robots at some point?

I don't think so, because there are many uncertainties in an operation that must be reacted to flexibly. This is not a prominent feature of current AI procedures. It's more likely that the surgical robots will do very specific things directly on the operator's instructions.

Will AI make medicine better?

In the long run, I think so. But first, it will make diagnostics more comparable and it will also allow data to be used not only sequentially, but in combination.

Artificial Intelligence, Machine Learning, Deep Learning

Decision making, problem solving, learning: these are actions that we commonly associate with human thinking. We call their automation artificial intelligence (AI). An important part of AI is machine learning (ML). Scientists are researching algorithms and statistical or mathematical methods with which computer systems can solve specific tasks.

For this purpose, machine learning methods construct a statistical-mathematical model from an example data set, the training data. On this basis, ML methods can make predictions or decisions without having been explicitly programmed to do so. ML techniques are used, for example, for spam detection in e-mail accounts, in image processing, and for the analysis of spectroscopic data. Deep learning is a machine learning method that resembles the way the human brain processes visual and other stimuli. Artificial neurons receive input, process it and pass it on to other neurons. Starting from a first, visible layer, the features in the subsequent, hidden intermediate layers become increasingly abstract. The result is output in the last, again visible, layer.
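The layer-by-layer picture can be sketched in a few lines: each artificial neuron computes a weighted sum of its inputs plus a bias, then applies a nonlinearity. The weights below are arbitrary made-up numbers, purely to show the mechanics of a forward pass.

```python
# Toy forward pass through a tiny network: visible input layer,
# one hidden layer, one output neuron. All weights are invented.

def relu(x):
    """A common nonlinearity: pass positives through, clip negatives."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    # each neuron: weighted sum of its inputs plus a bias, then relu
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

visible = [1.0, 0.5]                                  # first, visible layer
hidden = layer(visible, [[0.2, -0.4], [0.6, 0.1]], [0.0, -0.1])
output = layer(hidden, [[1.0, 1.0]], [0.0])           # last, visible layer
```

Training would adjust those weights from data (typically by backpropagation); this sketch only shows how input features become increasingly transformed representations as they move through the layers.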

Making Tumor Tissue Visible with AI

Did the surgeon remove the entire tumor during surgery? In order to find out, researchers are combining optical methods with artificial intelligence (AI) and data pre-processing methods. AI is behind the compact Medicars microscope, for example, which enables rapid cancer diagnosis during surgery. Here, patterns and molecular details of a tissue sample irradiated with laser light are automatically evaluated and translated into classical images of standard diagnostics. Thus, tumor margins become visible.

"For this purpose, we train AI algorithms together with pathologists," explains Thomas Bocklitz. We take multimodal images of a tissue sample with our laser-based multi- modal microscope. In pathology, the tissue section is then embedded, stained, and an image of the HE- stained tissue section is taken (HE = haematoxylin-eosin). This enables the pathologist to recognize tumor tissue. Then we put the multimodal and the HE image side by side."

Based on the pathologist's analysis of the tissue structure and morphology, the research team teaches the algorithm which tissue is healthy and which is diseased. "In this supervised approach, the algorithm learns to distinguish healthy from diseased areas." With success: the accuracy of the predictions is more than 90 percent, according to tests on a small group of patients.

Source: Leibniz IPHT


How machine learning and artificial intelligence can drive clinical innovation – PharmaLive

By:

Dr. Basheer Hawwash, Principal Data Scientist

Amanda Coogan, Risk-Based Monitoring Senior Product Manager

Rhonda Roberts, Senior Data Scientist

Remarque Systems Inc.

Everyone knows the terms machine learning and artificial intelligence. Few can define them, much less explain their inestimable value to clinical trials. So it's not surprising that, despite their ability to minimize risk, improve safety, condense timelines, and save costs, these technology tools are not widely used by the clinical trial industry.

Basheer Hawwash

There are lots of reasons for resistance: It seems complicated. Those who are not statistically savvy may find the thought of algorithms overwhelming. Adopting new technology requires a change in the status quo.

Yet there are more compelling reasons for adoption, especially as the global pandemic has accelerated a trend toward patient-centricity and decentralized trials, and an accompanying need for remote monitoring.

Machine learning vs. artificial intelligence: what's the difference?

Let's start by understanding what the two terms mean. While many people seem to use them interchangeably, they are distinct: machine learning can be used independently or to inform artificial intelligence; artificial intelligence cannot happen without machine learning.

Machine learning is a series of algorithms that analyze data in various ways. These algorithms search for patterns and trends, which can then be used to make more informed decisions. Supervised machine learning starts with a specific type of data, for instance a particular adverse event. By analyzing the records of all the patients who have had that specific adverse event, the algorithm can predict whether a new patient is also likely to suffer from it. Conversely, unsupervised machine learning applies analysis such as clustering to a group of data; the algorithm sorts the data into groups, which researchers can then examine more closely to discern similarities they may not have considered previously.
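The unsupervised case described above can be sketched with a tiny one-dimensional k-means: given unlabeled numeric readings, the algorithm sorts them into groups on its own. The data and the choice of two clusters are invented for illustration.

```python
# Sketch of unsupervised clustering: a minimal 1-D k-means with k=2.
# The "lab values" below are made up for illustration.

def kmeans_1d(values, iters=10):
    """Split numeric values into two clusters around evolving centers."""
    centers = [min(values), max(values)]       # crude initial centers
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            # index 1 if v is closer to the second center, else 0
            groups[abs(v - centers[0]) > abs(v - centers[1])].append(v)
        # move each center to the mean of its group
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return groups

low, high = kmeans_1d([1.1, 0.9, 1.0, 7.8, 8.2, 8.0])
```

No labels were given, yet the readings separate cleanly into two groups that a researcher could then examine, which is exactly the clustering workflow the paragraph describes.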

In either case, artificial intelligence applies those data insights to mimic human problem-solving behavior. Speech recognition, self-driving cars, even forms that auto-populate all exist because of artificial intelligence. In each case, it is the vast amounts of data that have been ingested and analyzed by machine learning that make the artificial intelligence application possible.

Physicians, for instance, can use a combination of machine learning and artificial intelligence to enhance diagnostic abilities. In this way, given a set of data, machine learning tools can analyze images to find patterns of chronic obstructive pulmonary disease (COPD); artificial intelligence may be able to further identify that some patients have idiopathic pulmonary fibrosis (IPF) as well as COPD, something their physicians may neither have thought to look for, nor found unaided.

Amanda Coogan

Now, researchers are harnessing both machine learning and artificial intelligence in their clinical trial work, introducing new efficiencies while enhancing patient safety and trial outcomes.

The case of the missing data

Data is at the core of every clinical trial. If those data are not complete, then researchers are proceeding on false assumptions, which can jeopardize patient safety and even the entire trial.

Traditionally, researchers have guarded against this possibility by doing painstaking manual verification, examining every data point in the electronic data capture system to ensure that it is both accurate and complete. More automated systems may provide reports that researchers can look through, but that still requires a lot of human involvement. The reports are static and must be reviewed on an ongoing basis, and every review has the potential for human error.

Using machine learning, this process happens continually in the background throughout the trial, automatically notifying researchers when data are missing. This can make a material difference in a trial's management and outcomes.

Consider, if you will, a study in which patients are tested for a specific metric every two weeks. Six weeks into the study, 95 percent of the patients show a value for that metric; 5 percent don't. Those values are missing. The system will alert researchers, enabling them to act promptly to remedy the situation. They may be able to contact the patients in the 5 percent and get their values, or they may need to adjust those patients out of the study. The choice is left to the research team, but because they have the information in near-real time, they have a choice.
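The core of such a background check is simple: scan each patient record for an absent value and surface the affected patient IDs. This is a minimal sketch with invented field names, not any vendor's system.

```python
# Hypothetical sketch of a missing-data alert: list patients who have
# no value for a given metric. Record structure is invented.

def missing_alerts(records, metric):
    """Return IDs of patients whose value for `metric` is absent."""
    return [r["patient_id"] for r in records if r.get(metric) is None]

records = [
    {"patient_id": "P001", "walk_test": 410},
    {"patient_id": "P002", "walk_test": None},   # value never entered
    {"patient_id": "P003", "walk_test": 395},
]
```

In a production system this check would run continuously against the electronic data capture feed and push notifications, rather than being invoked by hand, but the decision logic is the same.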

As clinical trials move to new models, with greater decentralization and greater reliance on patient-reported data, missing data may become a larger issue. To counteract that possibility, researchers will need to move away from manual methods and embrace both the ease and accuracy of machine-learning-based systems.

The importance of the outlier

In research studies, not every patient nor even every site reacts the same way. There are patients whose vital signs are off the charts. Sites with results that are too perfect. Outliers.

Rhonda Roberts

Often researchers discover these anomalies deep into the trial, during the process of cleaning the data in preparation for regulatory submission. That may be too late for a patient who is having a serious reaction to a study drug. It also may mean that the patient's data are not valid and cannot be included in the end analysis. Caught earlier, these anomalies would allow a course correction: the patient might have been able to stay in the study and continue to provide data; alternatively, they could be removed promptly along with their associated data.

Again, machine learning simplifies the process. By running an algorithm that continually searches for outliers, those irregularities are instantly identified. Researchers can then quickly drill down to ascertain whether there is an issue and, if so, determine an appropriate response.
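One common way to run such a continuous outlier screen is a robust z-score based on the median absolute deviation (MAD), which a single extreme reading cannot inflate. The readings and the 3.5 cutoff below are illustrative assumptions, not taken from any specific trial platform.

```python
# Sketch of outlier screening with a robust (MAD-based) z-score.
# Readings and the 3.5 cutoff are invented for illustration.
import statistics

def outliers(readings, cut=3.5):
    """Return readings whose robust z-score exceeds the cutoff."""
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:
        return []  # all values (nearly) identical: nothing to flag
    # 0.6745 scales MAD to be comparable with a standard deviation
    return [x for x in readings if 0.6745 * abs(x - med) / mad > cut]
```

A mean-and-standard-deviation version would be simpler, but a large outlier drags the standard deviation up and can mask itself; the median-based score avoids that, which matters when the whole point is catching the one aberrant site or patient.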

Of course, an anomaly doesn't necessarily flag a safety issue. In a recent case, one of the primary endpoints involved a six-minute walk test. One site showed strikingly different results; as it happened, they were using a different measurement gauge, something that would have skewed the study results but, having been flagged, was easily corrected.

In another case, all the patients at a site were rated with maximum quality of life scores and all their blood pressure readings were whole numbers. Machine learning algorithms flagged these results because they varied dramatically from the readings at the other sites. On examination, researchers found that the site was submitting fraudulent reports. While that was disturbing to learn, the knowledge gave the trial team power to act, before the entire study was rendered invalid.

A changing landscape demands a changing approach

As quality management is increasingly focusing on risk-based strategies, harnessing machine learning algorithms simplifies and strengthens the process. Setting parameters based on study endpoints and study-specific risks, machine learning systems can run in the background throughout a study, providing alerts and triggers to help researchers avoid risks.

The need for such risk-based monitoring has accelerated in response to the COVID-19 pandemic. With both researchers and patients unable or unwilling to visit sites, studies have rapidly become decentralized. This has coincided with the emergence and growing importance of patient-centricity and further propelled the rise of remote monitoring. Processes are being forced online. Manual methods are increasingly insufficient and automated methods that incorporate machine learning and artificial intelligence are gaining primacy.

Marrying in-depth statistical thinking with critical analysis

The trend towards electronic systems does not replace either the need for or the value of clinical trial monitors and other research personnel; they are simply able to do their jobs more effectively. A machine-learning-based system runs unique algorithms, each analyzing data in a different way to produce visualizations, alerts, or workflows, which CROs and sponsors can use to improve patient safety and trial efficiency. Each algorithm is tailored to the specific trial, keyed to endpoints, known risks, or other relevant factors. While the algorithms offer guidance, the platform does not make any changes to the data or the trial process; it merely alerts researchers to examine the data and determine whether a flagged value is clinically significant. Trial personnel are relieved of much tedious, reproducible, manual work, and are able to use their qualifications to advance the trial in other meaningful ways.

The imperative to embrace change

Machine learning and artificial intelligence have long been buzzwords in the clinical trial industry, yet these technologies have only haltingly been put to use. It's time for that pendulum to swing. We can move more quickly and more precisely than manual data verification and data cleaning allow. We can work more efficiently if we harness data to drive trial performance rather than simply to prove that the study endpoints were achieved. We can operate more safely if we program for risk management from the outset. All this can be achieved easily with the application of machine learning and artificial intelligence. Now is the time to move forward.


Top Artificial Intelligence and Robotics Investments in July 2020 – Analytics Insight

Artificial intelligence is growing at a rapid pace. Despite the unprecedented coronavirus situation, 2020 has so far witnessed sustained momentum: funding increased by 51% to $8.4B from the previous quarter. With this faster pace, AI is also attracting a series of funding rounds and financial investments. Let's go through some of the important investments in artificial intelligence and robotics companies in July 2020.

Amount Funded: $6 million

Transaction Name: Seed Round

Lead Investors: Kindred Capital and Capnamic Ventures

BotsAndUs creates robots that work with people in shopping centers, retail stores, office buildings, airports, etc. It aims to digitise the full customer journey by automating the collection of onsite data and providing 24/7 customer service.

BotsAndUs has reported that it has raised $6m in seed funding, co-led by Kindred Capital and Capnamic Ventures, with angel investors also participating in the round.

Amount Funded: $225 million

Transaction Name: Series E Funding

Lead Investors: Alkeon Capital Management

UiPath, a leading RPA company, has raised $225 million in Series E funding. The round was led by Alkeon Capital Management, with Accel, Coatue, Dragoneer, IVP, Madrona Venture Group, Sequoia Capital, Tencent, Tiger Global and Wellington also participating. The funding will be used to develop automation solutions that mitigate risks to productivity and the supply of human workers.

Amount Funded: $100 million

Transaction Name: Series C Funding

Lead Investors: Next47

Skydio is a leader in autonomous flight technology and a U.S. drone manufacturer. It raised $100 million in Series C funding. The round was led by Next47, with Levitate Capital, NTT DOCOMO Ventures, and existing investors including Andreessen Horowitz, IVP, and Playground participating. The organization will use the funding to grow its operations in public-sector markets and accelerate product development.

Amount Funded: $56.2 million

Transaction Name: Series A Funding

Lead Investors: Lightspeed Venture Partners

Dexterity, the Bay Area-based robotics startup, has raised $56.2 million in Series A funding led by Kleiner Perkins, Lightspeed Venture Partners, Obvious Ventures, Pacific West Bank, B37 Ventures, Presidio (Sumitomo) Ventures, Blackhorn Ventures, Liquid 2 Ventures and Stanford StartX.

Dexterity offers robots for warehousing, logistics and supply chain operations. It has already seen a boost from the push for essential services during the COVID-19 pandemic.

Amount Funded: $13m

Transaction Name: Series A Funding

Lead Investors: Index Ventures

Abacus.AI, a San Francisco, CA-based AI research and AI cloud services company, has recently raised $13m in Series A funding led by Index Ventures, with participation from Eric Schmidt, Ram Shriram, Decibel Ventures, Jerry Yang, Mariam Naficy, Erica Shultz, Neha Narkhede, Xuezhao Lan, and Jeannette Furstenberg.

The company will use the funding to grow its research team and scale its operations.

Amount Funded:

Transaction Name: Series A

Lead Investors: ETP Ventures

Deep Longevity, a biotechnology company transforming longevity R&D through AI-discovered biomarkers of aging, has raised Series A funding of an undisclosed amount. The round was led by ETP Ventures; other prominent investors, including BOLD Capital Partners, Longevity Vision Fund, Oculus, Formic Ventures, and LongeVC, also participated.

Amount Funded:

Transaction Name: Seed funding

Lead Investors: Y Combinator

The company raised an undisclosed amount of seed funding. Nana is building a guild for the future of work: a distributed workforce of tradespeople, starting with the $4B appliance repair industry. Nana is an on-demand home maintenance marketplace that doubles as a modern trade school, teaching new skills and connecting the 10M+ Americans who will be affected by automation to more rewarding jobs in the home services space.

Nana is a place for consumers to get things done, and an AI and learning management system for skilled professionals.

Amount Funded: $53m

Transaction Name: Series B

Lead Investors: DCVC

Caption Health, the California-based medical artificial intelligence (AI) company, has raised $53m in Series B funding led by existing investor DCVC. Other investors participating in the round were Atlantic Bridge and Edwards Lifesciences, along with existing investor Khosla Ventures. The company plans to scale up its operations and develop its AI technology platform.

Amount Funded: $6.5m

Transaction Name: Series A

Lead Investors: Debiopharm

Computational biology startup Nucleai raised $6.5m in Series A funding led by Debiopharm, a Swiss biopharmaceutical company. Previous investors Vertex Ventures and Grove Ventures also participated in the round.

Nucleai offers an AI-powered precision oncology platform that offers biomarker discovery and treatment decisions for cancer treatments. It combines machine learning and computer vision to model the characteristics of both the tumour and the patients immune system.

Amount Funded: £2.5m

Transaction Name: Seed Funding

Lead Investors: NPIF and XTX Ventures

Logically, a UK-based tech startup, announced that it has raised £2.5m in seed funding from NPIF and XTX Ventures. The company aims to use the funding to continue developing its product in time for the US election.

The startup deploys AI to detect fake news and misinformation, as well as to provide a fact-checking service to combat fake news.

Read the original here:

Top Artificial Intelligence and Robotics Investments in July 2020 - Analytics Insight

Job interviews: Recruiters are using artificial intelligence to analyse what you say to find the right hire – TechRepublic

Harqen's AI platform analyses language to determine a candidate's suitability for a role, potentially making it less prone to bias than video-based recruitment technology.

Artificial-intelligence-based hiring tools are already transforming the recruitment process by allowing businesses to vastly speed up the time it takes to identify top talent. With algorithms able to scour application databases in a fraction of the time it would take a human hiring manager, AI-assisted hiring has the potential not only to give precious time back to businesses, but also to draw in candidates from wider and more diverse talent pools.

AI-assisted hiring is also posited as a potential solution for reducing human bias, whether subconscious or otherwise, in the hiring process.


US company Harqen has been offering hiring technologies to some of the world's biggest companies for years, partnering with the likes of Walmart, FedEx and American Airlines to streamline and improve their hiring processes. Originating as an on-demand interviewing provider, the company has now expanded into AI with a new platform that it says offers a more dependable and bias-free means of matching employers with employees.

The solution, simply called the Harqen Machine Learning Platform, analyses candidates' answers to interview questions and assesses the type of words and language used in their responses. According to Harqen, this allows it to put together a profile of psychological traits that can be used to help determine a candidate's suitability for a role.

Combined with a resume analysis, which provides a more straightforward determiner of whether a candidate's professional and educational background fits with the requirements of the job, Harqen says its machine-learning platform is capable of making the same hiring decision as human recruiters 95% of the time. In one campaign that assessed approximately 3,500 job applications with "a very large US diagnostic firm" in early 2020, Harqen's machine-learning platform successfully predicted 2,193 of the candidate applications that were accepted, and 1,292 that were declined.

Key to Harqen's offering is what the company's chief technology officer Mark Unak describes as the platform's linguistic analysis. It can identify word clusters that are specific to certain job types, but it also offers a personality analysis based on the so-called "big five" traits, also known as the OCEAN model (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism), which can help hiring managers gauge a candidate's enthusiasm for the position.

"We have a dictionary of terms where most positive words are ranked as a +5 and most negative words are ranked as a -5, so we can determine how enthusiastic you are in the answers that you're giving," Unak tells TechRepublic.

"We can also use a linguistic analysis to analyse the grammar," he adds, noting that about 60% of our vocabulary consists of just 80 words. Those are the pronouns, the propositions, the articles and the intransigent verbs. "The remaining 10,000 words in the English language fill in that 40%. By the analysis of how you use that, we can get a psychological trait analysis."

Harqen's machine-learning tool analyses word clusters to help determine candidates' personality traits, such as enthusiasm. (Source: Harqen.ai)

According to Unak, using a machine-learning system that determines a candidate's suitability based on linguistic analysis is a more accurate and impartial method than those that rely on facial-scanning or vocal-inflection algorithms. Such machine-learning techniques within hiring are on the rise and are increasingly being adopted by major companies around the world.

"That's kind of problematic," says Unak. "Not everybody expresses emotions in the same way, with the same facial expressions, and not everybody expresses the same emotion that's expected. Different cultures and different races might have different problems in expressing those facial expressions and having the computer recognise them."


By only analysing the linguistic content that has been transcribed from recorded interviews, Harqen's algorithm never factors in appearance, facial expressions, or other self-reported personality traits that could be unreliable. Unak says the company will also retrain its models on a regular basis as new data comes in, which will help ensure that algorithms don't get stuck in their old ways if candidates begin giving new answers to questions that are equally relevant.

"If our customer evolves and they start to hire people who are either more diverse, or come up with different answers to the questions that are just as relevant, our models will pick that up," Unak adds.

Diversity, whether based on gender, race, age or otherwise, has been shown to play a significant role in the success or failure of workplace productivity and collaboration. Whether AI-based hiring tools can help here remains to be seen, and ultimately depends on whether they can be implemented in a fair and impartial way.

Beyond diversity, Harqen is exploring how its machine-learning tool could help businesses get the best return on investment from their hiring choices. The key concept here is delayed gratification: the ability to accurately identify employees who can resist the temptation of immediate rewards and instead persevere for an even greater payoff in the future.

"It's grit, it's persistence, it's the ability to imagine a future and it's the ability to develop and execute a plan to get there," says Unak. "Isn't that what hope and delayed gratification mean? I hope for a better future, I can imagine it, my hope is realistic and that there's a plan or a way to get there, and I'm going to work towards it."



Job interviews: Recruiters are using artificial intelligence to analyse what you say to find the right hire - TechRepublic

The next frontier of human-robot relationships is building trust – Scroll.in

Artificial intelligence is entering our lives in many ways: on our smartphones, in our homes, in our cars. These systems can help people make appointments, drive and even diagnose illnesses. But as AI continues to serve important and collaborative roles in people's lives, a natural question arises: can I trust these systems? How do I know they will do what I expect?

Explainable artificial intelligence is a branch of artificial intelligence research that examines how artificial agents can be made more transparent and trustworthy to their human users. It seeks to develop systems that human beings find trustworthy while still performing their designed tasks well. Trustworthiness is essential if robots and people are to work together.

At the Center for Vision, Cognition, Learning, and Autonomy at the University of California, Los Angeles, we and our colleagues are interested in what factors make machines more trustworthy, and how well different learning algorithms enable trust. Our lab uses a type of knowledge representation (a model of the world that an artificial intelligence uses to interpret its surroundings and make decisions) that can be more easily understood by humans. This naturally aids explanation and transparency, thereby improving the trust of human users.

In our latest research, we experimented with different ways a robot could explain its actions to a human observer. Interestingly, the forms of explanation that fostered the most human trust did not correspond to the learning algorithms that produced the best task performance. This suggests that performance and explanation are not inherently dependent on each other: optimising for one alone may not lead to the best outcome for the other. This divergence calls for robot designs that take into account both good task performance and trustworthy explanations.

In undertaking this study, our group was interested in two things. How does a robot best learn to perform a particular task? And how do people respond to the robot's explanation of its actions?

We taught a robot to learn from human demonstrations how to open a medicine bottle with a safety lock. A person wore a tactile glove that recorded the poses and forces of the human hand as it opened the bottle. That information helped the robot learn what the human did in two ways: symbolic and haptic. Symbolic refers to meaningful representations of your actions: for example, the word "grasp". Haptic refers to the feelings associated with your body's postures and motions: for example, the sensation of your fingers closing together.

First, the robot learned a symbolic model that encodes the sequence of steps needed to complete the task of opening the bottle. Second, the robot learned a haptic model that allows the robot to imagine itself in the role of the human demonstrator and predict what action a person would take when encountering particular poses and forces.

It turns out the robot was able to achieve its best performance when combining the symbolic and haptic components. The robot did better using knowledge of the steps for performing the task and real-time sensing from its gripper than using either alone.
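As a rough illustration (not the authors' actual code), the combination can be thought of as a symbolic step sequence consulted alongside a haptic predictor, with the force signal able to override the scripted step. The plan, threshold values and action names below are invented for the sketch:

```python
# Illustrative sketch only: one way a learned step sequence (symbolic model)
# and a force-based predictor (haptic model) could be combined into a single
# next-action policy for opening a push-and-twist safety cap.

# Symbolic model: the learned sequence of steps for the task.
SYMBOLIC_PLAN = ["approach", "grasp", "push", "twist", "pull", "done"]

def haptic_predict(grip_force: float) -> str:
    """Toy stand-in for the haptic model: map sensed gripper force to the
    action a human demonstrator would likely take at that force level."""
    if grip_force < 1.0:
        return "grasp"   # barely touching: (re)grasp the cap
    if grip_force < 5.0:
        return "twist"   # light contact: twisting is feasible
    return "push"        # high resistance: the safety lock needs a push

def next_action(step_index: int, grip_force: float) -> str:
    """Follow the symbolic plan, but defer to the haptic prediction when
    strong forces suggest the scripted step will not work."""
    symbolic = SYMBOLIC_PLAN[step_index]
    haptic = haptic_predict(grip_force)
    if haptic != symbolic and grip_force >= 5.0:
        return haptic
    return symbolic
```

The design mirrors the finding in the text: each model alone is informative, but the combined policy uses both the scripted steps and the real-time force sensing.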

Now that the robot knows what to do, how can it explain its behavior to a person? And how well does that explanation foster human trust?

To explain its actions, the robot can draw on its internal decision process as well as its behavior. The symbolic model provides step-by-step descriptions of the robot's actions, and the haptic model provides a sense of what the robot's gripper is feeling.

In our experiment, we added an additional explanation for humans: a text write-up that provided a summary after the robot had finished attempting to open the medicine bottle. We wanted to see if a summary description would be as effective as the step-by-step symbolic explanation at gaining human trust.

We asked 150 human participants, divided into four groups, to observe the robot attempting to open the medicine bottle. The robot then gave each group a different explanation of the task: symbolic (step-by-step), haptic (arm positions and motions), a text summary, or the symbolic and haptic explanations together. A baseline group observed only a video of the robot attempting to open the bottle, without any additional explanation.

We found that providing both the symbolic and haptic explanations fostered the most trust, with the symbolic component contributing the most. Interestingly, the explanation in the form of a text summary didn't foster more trust than simply watching the robot perform the task, indicating that humans prefer robots to give step-by-step explanations of what they're doing.

The most interesting outcome of this research is that what makes robots perform well is not the same as what makes people see them as trustworthy. The robot needed both the symbolic and haptic components to do the best job. But it was the symbolic explanation that made people trust the robot most.

This divergence highlights important goals for future artificial intelligence and robotics research: to pursue both task performance and explainability. Focussing only on task performance may not lead to a robot that explains itself well. Our lab uses a hybrid model to provide both high performance and trustworthy explanations.

Performance and explanation do not naturally complement each other, so both goals need to be a priority from the start when building artificial intelligence systems. This work represents an important step in systematically studying how human-machine relationships develop, but much more needs to be done. A challenging step for future research will be to move from "I trust the robot to do X" to "I trust the robot".

For robots to earn a place in people's daily lives, humans need to trust their robotic counterparts. Understanding how robots can provide explanations that foster human trust is an important step toward enabling humans and robots to work together.

Mark Edmonds, PhD candidate in Computer Science, University of California, Los Angeles; Yixin Zhu, postdoctoral scholar in Computer Science, University of California, Los Angeles.

This article first appeared on The Conversation.


Game-Changing Artificial Intelligence Solution by PhotoShelter to Revolutionize Social Media Workflow as Premier Lacrosse League Returns to the Field…

NEW YORK--(BUSINESS WIRE)--PhotoShelter today announced its game-changing artificial intelligence solution for creative teams, complete with one-of-a-kind athlete recognition technology. During its 16-day Championship Series, the Premier Lacrosse League (PLL) returns to the field as the first organization to use this cutting-edge technology to move images from the sidelines out to fans in real time.

PhotoShelter's collection of AI tools, enabled by Miro AI, takes the PLL's real-time visual content workflow to the next level with end-to-end automation. As soon as a photographer captures a moment and sends the image to PhotoShelter via FTP, the AI solution tags the photo with metadata designed specifically for the PLL, including player names, sponsor names, and custom terms for core lacrosse gear such as goals, gloves, helmets and sticks. The images are immediately searchable and accessible by any PLL staff member for deployment to the PLL's many fan-engagement channels.

RosterIQ athlete recognition technology combines both facial recognition and jersey data to automatically identify athletes. Player-tagged images are automatically routed from PhotoShelter to players in real time through the Greenfly app, allowing both the league and its players to effortlessly engage fans across social channels.
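Combining two independent recognition signals like this can be sketched as follows; the roster, confidence values and fusion rule are invented for illustration and are not PhotoShelter's actual RosterIQ logic:

```python
# Hedged sketch of fusing two recognition signals: a face-recognition guess
# and a jersey-number read. When the signals agree, confidence is boosted;
# when they disagree, the higher-confidence signal wins.
from typing import Optional, Tuple

ROSTER = {7: "Player A", 23: "Player B"}  # hypothetical jersey-to-name map

def identify_athlete(face_guess: Optional[str], face_conf: float,
                     jersey_number: Optional[int], jersey_conf: float
                     ) -> Tuple[Optional[str], float]:
    jersey_guess = ROSTER.get(jersey_number) if jersey_number is not None else None
    if face_guess and face_guess == jersey_guess:
        # Agreement between independent signals boosts confidence.
        return face_guess, min(1.0, face_conf + jersey_conf)
    # Otherwise fall back to whichever single signal is stronger.
    candidates = [(face_guess, face_conf), (jersey_guess, jersey_conf)]
    return max((c for c in candidates if c[0]),
               key=lambda c: c[1], default=(None, 0.0))
```

The appeal of this design is that jersey data can rescue identifications when a face is occluded by a helmet, and vice versa.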

During the first two weeks of the tournament, the PLL team uploaded 20,000 images to PhotoShelter. So far, PhotoShelter AI has made 40,666 player identifications and tagged more than 18,000 brand marks. The automated workflow allows the PLL to keep up with the demand for content with a smaller-than-usual media team on site.

"The AI recognition of our photos will help us save hundreds of hours tagging and organizing photos, enabling us to share content with our partners, players, and fans faster than ever before," explained Tyler Steinhardt, Director of Marketing for the Premier Lacrosse League.

The new AI-powered workflow has had a staggering impact on fan engagement. Since the start of Training Camp on July 20, the PLL has recorded 2.65M interactions on Instagram, beating out other leagues returning from the sports pause, including the MLS, NWSL, PBR and NASCAR. The AI solution also allows the PLL to distribute content to players faster, leading to a 200% week-over-week increase in posting by players. On average, players now see a 12.2% engagement rate on Instagram.

"PhotoShelter AI represents a big leap forward in our vision to transform the ways creative people work," said Andrew Fingerman, CEO of PhotoShelter. "PLL is a cutting-edge new sports organization, and they've embraced our technology to drive visual storytelling in real time to new and exciting levels. This is just the first step in our plan to lead the next generation of fully automated content workflow and collaboration capabilities for brands and creative professionals."

In addition to RosterIQ, PhotoShelter's suite of AI solutions includes FusionIQ and BrandIQ. FusionIQ pulls together data from three different sources (Google, Microsoft and Amazon) to deliver superior general metadata for tagging a creative team's visual media assets. BrandIQ identifies brand and sponsor logos, enabling creative teams to quickly search and easily deliver relevant assets to each stakeholder in real time.

Although RosterIQ and BrandIQ are ideal for sports, the overall PhotoShelter AI technology suite is designed for the broadest possible use. It has the capability to recognize any set of objects, from staff members and executives to logos and brand marks. FusionIQ will add value through general metadata for any organization, and custom models can be built and trained to meet the unique needs of any PhotoShelter for Brands client.

About PhotoShelter

PhotoShelter is a visual media technology company that helps photographers and creative teams unlock the power of a moment. Our leading digital asset management platform for creative teams helps 1,200 top consumer and retail brands, travel and hospitality icons, professional sports teams and world-class universities easily organize content, collaborate and share their creative assets. To request a demo, please visit libris.photoshelter.com.

About PLL

The Premier Lacrosse League (PLL) is the leading professional lacrosse league in North America, composed of seven teams rostered with the best players in the world. Co-founded by lacrosse superstar Paul Rabil and his brother, serial entrepreneur and investor Mike Rabil, the league is backed by an investment group composed of Joe Tsai Sports, Brett Jefferson Holdings, The Raine Group, Creative Artists Agency (CAA), and other top investors in sports and media. The PLL season is distributed through an exclusive media-rights agreement with NBC Sports Group. For more information, visit http://www.premierlacrosseleague.com

About Miro AI

Miro AI is a growing technology startup based in the United States that uses computer vision and deep learning to analyze visual media, delivering highly specialized results for sports, events, and businesses. Since launching in 2017, Miro AI's award-winning software has been used by the biggest brands in sports to identify millions of athletes and analyze brand preferences.


BLOG: How to capitalise on the Artificial Intelligence theme – Your Money

Robotics and Artificial Intelligence (AI) are expected to disrupt numerous sectors and industries. But how can investors capitalise on this theme?

Artificial intelligence, robotics and automation are all themes that are becoming more prevalent in today's society and, for investors, certainly hold a lot of potential. We cannot yet fully understand or predict the true impact of these technological advancements, yet the speed at which business and operational transformation is taking place through these digital technologies is staggering.

Artificial intelligence (AI) is a branch of computer science that is allowing companies to move to a new standard of data analysis, helping them garner more value from their assets, both physical and digital. By utilising rapidly growing datasets, businesses are able to drive innovation, increase efficiency and turn this data into societal and corporate profit.

Robots have been around for some time: UNIMATE became the first robot used on a production line in 1962. Today's examples include welding robots in factories, order-picking robots in goods warehouses and even surgical robots that improve clinical outcomes for patients through minimally invasive surgery. Automation has also allowed companies to use software for administrative tasks, with robots now applying digital signatures, auto-filling online forms and compiling employee analytics. The automation of manufacturing processes has likewise brought greater efficiency and reduced costs.

The intent to embrace these technologies already exists and is growing. In Morgan Stanley's Q3 2019 CIO Survey, artificial intelligence and machine learning implementation was listed as the second-highest priority for companies' IT spend, preceded only by cloud computing. Traditional business models are certainly being disrupted. The benefits of these new and ever-improving technologies will expand well beyond just technology stocks; they will drive change and disruption through numerous sectors and industries.

The investment case for these themes is clear for anyone to see. However, identifying the correct investments to exploit these substantial opportunities and putting them together in an efficient way is somewhat trickier. Below are a number of actively managed funds which look to capitalise on these increasingly important and impactful themes:

The fund is a unique offering, giving investors not only access to companies benefiting from, or set to benefit from, AI, but also access to an investment process that uses AI itself. Its proprietary AI platform is used to identify companies whose economic value is directly affected by AI.

As more and more companies engage with AI, this fund is well positioned to provide strong exposure to long-duration secular investment growth, with the potential for very strong returns. The fund is well diversified and doesn't rely solely on a high allocation to the US and tech stocks; however, investors will need to accept a higher level of overall risk.

The team managing and contributing to the investment process is thought to be the largest dedicated technology investing team in Europe. Its expertise and experience help it identify companies that stand to benefit from, and capture the growth created by, these long-term transformational themes.

The fund gives strong exposure to companies enabling and involved in robotics, automation, AI and materials science. In doing so it has delivered annualised returns of over 15.5% since its inception in late 2017, double that of both its benchmark and sector.

The fund mainly invests in companies contributing to, or profiting from, developments in robotics and enabling technologies. Pictet is arguably the leading thematic investing firm in Europe, and its pedigree within this space speaks for itself. On a three-year basis, this fund has generated the highest excess return over its benchmark of any of Pictet's funds, demonstrating the potential of this particular investment opportunity.

The team believe the robotics sector is set to grow significantly faster than the broader economy over the coming years due to the ability of robotics to increase productivity, reduce costs and help solve challenges such as an increasingly elderly population.

Tom Rosser is investment research analyst at The Share Centre


How Artificial Intelligence is Helping to Fight against Coronavirus in India? – Analytics Insight

With the number of COVID-19 cases crossing the 18-million mark, healthcare systems across the globe have suffered a major blow in the management of COVID-19. In India, COVID-19 initially proved challenging for identifying patients and diagnosing the disease. However, the use of artificial intelligence (AI) developed over the past few years has offered frontline workers and the government solutions to clear this roadblock.

Artificial intelligence uses powerful algorithms that process data and identify patterns; for any artificial intelligence to be successful, big data is therefore necessary.

Across the globe, as Polymerase Chain Reaction (PCR) testing is expensive and time-consuming, chest X-rays are now used as a standard procedure in the diagnosis of COVID-19. However, a simple chest X-ray alone cannot identify the disease or the extent of the infection affecting the lungs.

Artificial intelligence, applied to chest X-rays, helps identify abnormal findings, diagnosing the ground-glass opacities in the lungs that are a classic feature of COVID-19. Companies such as Qure.ai, a Mumbai-based start-up, and Tata Consultancy Services have used AI on chest X-rays for the diagnosis of COVID-19. The AI developed by Qure.ai also helps identify the extent of infection affecting the lungs, which is especially valuable for patients in the Intensive Care Unit (ICU).

In April, Apple and Google, the two big tech giants, collaborated on developing a contact-tracing app for COVID-19 patients. The app works over Bluetooth and has mostly been used in Western countries. In India, the government rolled out a similar strategy by developing the Aarogya Setu app.

In June, India told the UN that drones and contact-tracing apps have helped it manage COVID cases. The app uses Bluetooth and location data to let users know of any suspected COVID-19 patients nearby. It is available in 12 languages and has a user base of more than 10 million people.
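The basic mechanics of such an app can be sketched as follows; the signal-strength threshold and data shapes are illustrative assumptions, not Aarogya Setu's actual implementation:

```python
# Illustrative sketch of Bluetooth contact-tracing logic: device IDs seen at
# close range are logged, and the user is alerted if any logged contact is
# later reported as a suspected case. Threshold and shapes are hypothetical.

RSSI_CLOSE_CONTACT = -65  # assumed signal-strength cutoff for "nearby" (dBm)

def log_contacts(scans, contact_log):
    """Record device IDs whose Bluetooth signal indicates close proximity."""
    for device_id, rssi in scans:
        if rssi >= RSSI_CLOSE_CONTACT:
            contact_log.add(device_id)
    return contact_log

def should_alert(contact_log, reported_cases):
    """Alert if any logged close contact appears among reported cases."""
    return bool(contact_log & set(reported_cases))
```

Real systems add privacy measures on top of this core idea, such as rotating anonymous identifiers rather than fixed device IDs.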

Other mobile applications, such as GoCoronaGo and Sampark-o-meter, have also been developed for contact tracing by the Indian Institute of Science (IISc), Bangalore and the IITs.

In Odisha, the state health department worked with the IT industry to develop drones that proved helpful in checking violations of containment-zone rules.

Apart from using the Aarogya Setu app for contact tracing, many states have used AI-enabled cameras to identify mask violators.

In Telangana, due to a surge in COVID cases, the police department has installed a software tool in CCTV cameras to identify mask violators. After identifying a violator, the tool sends a notification to police headquarters, which in turn sends an update to the patrolling police team.

This model is similar to an AI model developed in China for tracking mask violators. The technology was initially installed in Hyderabad, Cyberabad, and Rachakonda.

As the coronavirus crisis has progressed, AI has helped automate the repurposing of drugs to treat COVID-19. The Indraprastha Institute of Information Technology (IIIT) has developed an AI model that ranks candidate medicines by their probability of success against the disease, instead of going through the entire process manually.

Tata Consultancy Services is also using AI to narrow large sets of drug molecules down to those most likely to be effective against the disease, thus reducing the duration of the process.

Besides this, AI has proven effective in enabling telemedicine and teleconsultation: online consultation with health experts about a particular condition. In many states, like Chhattisgarh, AI has proved a success through the online training of medics for controlling the COVID-19 pandemic.

In Kerala, robots are used for delivering hand sanitizer and public health messages at the entrances of office buildings and in isolation wards, to combat COVID-19.

IIT and Stanford alumni have also come up with a solution for disinfecting public spaces. They have developed a machine called Robo Sapien, which helps control the spread of the virus using corona-discharge ionization.

Many start-ups are now using AI to come up with solutions against the spread of COVID-19.


Artificial Intelligence and Its Partners – Modern Diplomacy

Digitalization and the development of artificial intelligence (AI) bring up many philosophical and ethical questions about the role of man and robot in the nascent social and economic order. How real is the threat of an AI dictatorship? Why do we need to tackle AI ethics today? Does AI provide breakthrough solutions? We ask these and other questions in our interview with Maxim Fedorov, Vice-President for Artificial Intelligence and Mathematical Modelling at Skoltech.

On 13 July, Maxim Fedorov chaired the inaugural Trustworthy AI online conference on AI transparency, robustness and sustainability hosted by Skoltech.

Maxim, do you think humanity already needs to start working out a new philosophical model for existing in a digital world whose development is determined by artificial intelligence (AI) technologies?

The fundamental difference between today's technologies and those of the past is that they hold up a mirror of sorts to society. Looking into this mirror, we need to answer a number of philosophical questions. In the era of industrialization and production automation, the human being was a productive force. Today, people are no longer needed to produce the technologies they use. For example, innovative Japanese automobile assembly plants have barely any people on the floor, with all the work done by robots. The manufacturing process looks something like this: a driverless robot train carrying component parts enters the assembly floor, and a finished car comes out. This is called discrete manufacturing: the assembly of a finite set of elements in a sequence, a task that robots manage quite efficiently. The human being is gradually being ousted from the traditional economic structure, as automated manufacturing facilities generally need only a limited number of human specialists. So why do we need people in manufacturing at all? In the past, we could justify our existence by the need to earn money or consume, or to create jobs for others, but now this is no longer necessary. Digitalization has made technologies a global force, and everyone faces philosophical questions about their personal significance and role in the modern world, questions we should be answering today, not in ten years when it will be too late.

At the last World Economic Forum in Davos, there was a lot of discussion about the threat of the digital dictatorship of AI. How real is that threat in the foreseeable future?

There is no evil inherent in AI. Technologies themselves are ethically neutral. It is people who decide whether to use them for good or evil.

Speaking of an AI dictatorship is misleading. In reality, technologies have no subjectivity, no "I". Artificial intelligence is basically a structured piece of code and hardware. Digital technologies are just a tool. There is nothing mystical about them either.

My view as a specialist in the field is that AI is currently a branch of information and communications technology (ICT). Moreover, AI does not even live in an individual computer. For a person from the industry, AI is a whole stack of technologies that are combined to form what is called weak AI.

We inflate the bubble of AI's importance and erroneously impart subjectivity to this technology stack. In large part, this is done by journalists, people without a technical education. They discuss an entity that does not actually exist, giving rise to the popular meme of an AI that is alternately the Terminator or a benevolent super-being. This is all fairy tales. In reality, we have a set of technological solutions for building effective systems that allow decisions to be made quickly on the basis of big data.

Various high-level committees are discussing strong AI, which will not appear for another 50 to 100 years (if at all). The problem is that when we talk about threats that do not exist and will not exist in the near future, we are missing some real threats. We need to understand what AI is and develop a clear code of ethical norms and rules to secure value while avoiding harm.

Sensationalizing threats is a trend in modern society. We take a problem that feeds peoples imaginations and start blowing it up. For example, we are currently destroying the economy around the world under the pretext of fighting the coronavirus. What we are forgetting is that the economy has a direct influence on life expectancy, which means that we are robbing many people of years of life. Making decisions based on emotion leads to dangerous excesses.

As the philosopher Yuval Noah Harari has said, millions of people today trust the algorithms of Google, Netflix, Amazon and Alibaba to dictate to them what they should read, watch and buy. People are losing control over their lives, and that is scary.

Yes, there is the danger that human consciousness may be robotized and lose its creativity. Many of the things we do today are influenced by algorithms. For example, drivers listen to their sat navs rather than relying on their own judgment, even if the route suggested is not the best one. When we receive a message, we feel compelled to respond. We have become more algorithmic. But it is ultimately the creator of the algorithm, not the algorithm itself, that dictates our rules and desires.

There is still no global document to regulate behaviour in cyberspace. Should humanity perhaps agree on universal rules and norms for cyberspace first before taking on ethical issues in the field of AI?

I would say that the issue of ethical norms is primary. After we have these norms, we can translate them into appropriate behaviour in cyberspace. With the spread of the internet, digital technologies (of which AI is part) are entering every sphere of life, and that has led us to the need to create a global document regulating the ethics of AI.

But AI is a component part of information and communications technologies (ICT). Maybe we should not create a separate track for AI ethics but join it with the international information security (IIS) track? Especially since IIS issues are being actively discussed at the United Nations, where Russia is a key player.

There is some justification for making AI ethics a separate track, because, although information security and AI are overlapping concepts, they are not embedded in one another. However, I agree that we can have a separate track for information technology and then break it down into sub-tracks where AI would stand alongside other technologies. It is a largely ontological problem and, as with most problems of this kind, finding the optimal solution is no trivial matter.

You are a member of the international expert group under UNESCO that is drafting the first global recommendation on the ethics of AI. Are there any discrepancies in how AI ethics are understood internationally?

The group has its share of heated discussions, and members often promote opposing views. For example, one of the topics is the subjectivity and objectivity of AI. During the discussion, a group of states clearly emerged that promotes the idea of subjectivity and is trying to introduce the concept of AI as a quasi-member of society. In other words, attempts are being made to imbue robots with rights. This is a dangerous trend that may lead to a sort of technofascism, inhumanity of such a scale that all previous atrocities in the history of our civilization would pale in comparison.

Could it be that, by promoting the concept of robot subjectivity, the parties involved are trying to avoid responsibility?

Absolutely. A number of issues arise here. First, there is an obvious asymmetry of responsibility. Let us give the computer rights, and if its errors lead to damage, we will punish it by pulling the plug or formatting the hard drive. In other words, the responsibility is placed on the machine and not its creator. The creator gets the profit, and any damage caused is someone else's problem. Second, as soon as we give AI rights, the issues we are facing today with regard to minorities will seem trivial. It will lead to the thought that we should not hurt AI but rather educate it (I am not joking: such statements are already being made at high-level conferences). We will see a sort of juvenile justice for AI. Only it will be far more terrifying. Robots will defend robot rights. For example, a drone may come and burn your apartment down to protect another drone. We will have a techno-racist regime, but one that is controlled by a group of people. This way, humanity will drive itself into a losing position without having the smallest idea of how to escape it.

Thankfully, we have managed to remove any inserts relating to quasi-members of society from the group's agenda.

We chose the right time to create the Committee for Artificial Intelligence under the Commission of the Russian Federation for UNESCO, as it helped to define the main focus areas for our working group. We are happy that not all countries support the notion of the subjectivity of AI; in fact, most oppose it.

What other controversial issues have arisen in the working groups discussions?

We have discussed the blurred border between AI and people. I think this border should be defined very clearly. Then we came to the topic of human-AI "relationships", a term which implies the whole range of relationships possible between people. We suggested that "relationships" be changed to "interactions", which met opposition from some of our foreign colleagues, but in the end, we managed to sort it out.

Seeing how advanced sex dolls have become, the next step for some countries would be to legalize marriage with them, and then it would not be long before people start asking for church weddings. If we do not prohibit all of this at an early stage, these ideas may spread uncontrollably. This approach is backed by big money, the interests of corporations and a different system of values and culture. The proponents of such ideas include a number of Asian countries with a tradition of humanizing inanimate objects. Japan, for example, has a tradition of worshipping mountain, tree and home spirits. On the one hand, this instills respect for the environment, and I agree that, being a part of the planet, part of nature, humans need to live in harmony with it. But still, a person is a person, and a tree is a tree, and they have different rights.

Is the Russian approach to AI ethics special in any way?

We were the only country to state clearly that decisions on AI ethics should be based on a scientific approach. Unfortunately, most representatives of other countries rely not on research, but on their own (often subjective) opinion, so discussions in the working group often devolve to the lay level, despite the fact that the members are highly qualified individuals.

I think these issues need to be thoroughly researched. Decisions on this level should be based on strict logic, models and experiments. We have tremendous computing power, an abundance of software for scenario modelling, and we can model millions of scenarios at a low cost. Only after that should we draw conclusions and make decisions.

How realistic is the fight against the subjectification of AI if big money is at stake? Does Russia have any allies?

Everyone is responsible for their own part. Our task right now is to engage in discussions systematically. Russia has allies with matching views on different aspects of the problem. And common sense still prevails. The egocentric approach currently being promoted in a number of countries, this kind of self-absorption, actually plays into our hands here. Most states are afraid that humans will cease to be the centre of the universe, ceding our crown to a robot or a computer. This has allowed the human-centred approach to prevail so far.

If the expert group succeeds at drafting recommendations, should we expect some sort of international regulation on AI in the near future?

If we are talking about technical standards, they are already being actively developed at the International Organization for Standardization (ISO), where we have been involved with Technical Committee 164 Artificial Intelligence (TC 164) in the development of a number of standards on various aspects of AI. So, in terms of technical regulation, we have the ISO and a whole range of documents. We should also mention the Institute of Electrical and Electronics Engineers (IEEE) and its report on Ethically Aligned Design. I believe this document is the first full-fledged technical guide on the ethics of autonomous and intelligent systems, which includes AI. The corresponding technical standards are currently being developed.

As for the United Nations, I should note the Beijing Consensus on Artificial Intelligence and Education that was adopted by UNESCO last year. I believe that work on developing the relevant standards will start next year.

So the recommendations will become the basis for regulatory standards?

Exactly. This is the correct way to do it. I should also say that it is important to get involved at an early stage. This way, for instance, we can refer to the Beijing agreements in the future. It is important to make sure that AI subjectivity does not appear in the UNESCO document, so that it does not become a reference point for this approach.

Let us move from ethics to technological achievements. What recent developments in the field can be called breakthroughs?

We haven't seen any qualitative breakthroughs in the field yet. Image recognition, orientation, navigation, transport, better sensors (which are essentially the sensory organs for robots): these are the achievements that we have so far. In order to make a qualitative leap, we need a different approach.

Take the chemical universe, for example. We have researched approximately 100 million chemical compounds. Perhaps tens of thousands of these have been studied in great depth. And the total number of possible compounds is 10⁶⁰, which is more than the number of atoms in the Universe. This chemical universe could hold cures for every disease known to humankind or some radically new, super-strong or super-light materials. There is a multitude of organisms on our planet (such as the sea urchin) with substances in their bodies that could, in theory, cure many human diseases or boost immunity. But we do not have the technology to synthesize many of them. And, of course, we cannot harvest all the sea urchins in the sea, dry them and make an extract for our pills. But big data and modelling can bring about a breakthrough in this field. Artificial intelligence can be our navigator in this chemical universe. Any reasonable breakthrough in this area will multiply our income exponentially. Imagine an AIDS or cancer medicine without any side effects, or new materials for the energy industry, new types of solar panels, etc. These are the kind of things that can change our world.
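The scale gap described here can be made concrete with a line of arithmetic. Both figures come from the paragraph above; the code is only a back-of-the-envelope sketch, not a claim about actual chemical databases:

```python
# Rough scale of the "chemical universe" described in the interview:
# ~100 million compounds researched so far vs ~10^60 theoretically possible.
researched = 1e8   # compounds studied to date (figure from the interview)
possible = 1e60    # estimated possible compounds (figure from the interview)

explored_fraction = researched / possible
print(f"Fraction of chemical space explored so far: {explored_fraction:.0e}")
# An AI "navigator" would search this space selectively rather than
# enumerate it, which is why brute-force screening is hopeless here.
```

Even if the 10⁶⁰ estimate is off by many orders of magnitude, the explored fraction remains vanishingly small, which is the interviewee's point.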

How is Russia positioned on the AI technology market? Is there any chance of competing with the United States or China?

We see people from Russia working in the developer teams of most big Asian, American and European companies. A famous example is Sergey Brin, co-founder and developer of Google. Russia continues to be a donor of human resources in this respect. It is both reassuring and disappointing because we want our talented guys to develop technology at home. Given the right circumstances, Yandex could have dominated Google.

As regards domestic achievements, the situation is somewhat controversial. Moscow today is comparable to San Francisco in terms of the number, quality and density of AI development projects. This is why many specialists choose to stay in Moscow. You can find a rewarding job, interesting challenges and a well-developed expert community.

In the regions, however, there is a concerning lack of funds, education and infrastructure for technological and scientific development. All three of our largest supercomputers are in Moscow. Our leaders in this area are the Russian Academy of Sciences, Moscow State University and Moscow Institute of Physics and Technology: organizations with a long history in the sciences, rich traditions, a sizeable staff and ample funding. There are also some pioneers who have got off the ground quickly, such as Skoltech, and surpassed their global competitors in many respects. We recently compared Skoltech with a leading AI research centre in the United Kingdom and discovered that our institution actually leads in terms of publications and grants. This means that we can and should do world-class science in Russia, but we need to overcome regional development disparities.

Russia has the opportunity to take its rightful place in the world of high technology, but our strategy should be to overtake without catching up. If you look at our history, you will see that whenever we have tried to catch up with the West or the East, we have lost. Our imitations turned out wrong, were laughable and led to all sorts of mishaps. On the other hand, whenever we have taken a step back and synthesized different approaches, Asian or Western, without blindly copying them, we have achieved tremendous success.

We need to make a sober assessment of what is happening in the East and in the West and what corresponds to our needs. Russia has many unique challenges of its own: managing its territory, developing the resource industries and continuous production. If we are able to solve these tasks, then later we can scale up our technological solutions to the rest of the world, and Russian technology will be bought at a good price. We need to go down our own track, not one that is laid down according to someone else's standards, and go on our way while being aware of what is going on around us. Not pushing back, not isolating, but synthesizing.

From our partner RIAC


Read more from the original source:

Artificial Intelligence and Its Partners - Modern Diplomacy

Artificial intelligence isn't destroying jobs, it's making them more inclusive – The Globe and Mail

A new world of work is on the horizon, driven by artificial intelligence. By 2025, the World Economic Forum predicts that 52 per cent of total task hours across existing jobs will be performed by machines. By 2030, up to 800 million jobs could be replaced by technology altogether.

That said, the outlook is far from bleak. Rather than eliminating positions, technology is expected to bring about net positive jobs over the coming decade, but a fact equally as important (and often overlooked) is that artificial intelligence presents an opportunity for a more socioeconomically inclusive career start.

Throughout much of the past century, a persons success in life could be largely attributed to their socioeconomic circumstances at birth. Studies have shown that children born into middle-class homes have greater access to opportunities that are more highly correlated with successful occupational outcomes, such as good schools and financial support. As a result, these children are far more likely to succeed in primary school, high school and post-secondary education.


These advantages are compounded when it comes to hiring for jobs out of post-secondary school. Resumes, in this way, mirror our privilege.

The criteria for success in the future of work, however, present an opportunity for a fairer system of assessing job fit: skills.

If machine intelligence becomes a large source of expertise (i.e., cancer-screening detection, market research analytics and driving, just to name a few), people will need to adapt and change their skillsets to remain employable. A recent white paper published by IBM rated adaptability as the most important skill that executives will be hiring for in the future. Moreover, as technology continues to advance, our technical skills continue to depreciate (by approximately 50 per cent every five years).
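That depreciation figure amounts to a five-year half-life, which can be sketched as simple exponential decay. The 50 per cent / five-year rule is the only number taken from the article; the time horizons below are arbitrary illustrations:

```python
# Remaining value of a technical skill under the article's rule of thumb:
# skills lose about half their value every five years (exponential decay).
HALF_LIFE_YEARS = 5.0

def skill_value(years, initial=1.0):
    """Fraction of a skill's initial value remaining after `years` years."""
    return initial * 0.5 ** (years / HALF_LIFE_YEARS)

for t in (0, 5, 10, 15):
    print(f"after {t:>2} years: {skill_value(t):.0%} of original value")
```

A decade out, three quarters of a purely technical skill's value is gone under this model, which is why the article shifts the emphasis to durable skills such as adaptability.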

As a result of all of these changes, we will have to upskill (that is, learn new skills or teach workers new ones). We'll have to learn and unlearn throughout the majority of our working lives. This changes the formula from front-loading education early in life to a life of continuous learning. It also places skills, like the adaptability mentioned above, more centrally as the currency of labour.

As the CEO of Upwork, one of the fastest-growing gig platforms in the world, wrote two years ago: "What matters to me is not whether someone has a computer science degree, but how well they can think and how well they can code." The CEO of JPMorgan Chase, Jamie Dimon, echoed a similar sentiment, stating that "the reality is, the new world of work is about skills, not necessarily degrees."

Of course, degrees will still have value. It will also take some time to readjust our job-fit assessment infrastructures. However, paths that do not include a four-year post-secondary degree will also be included in the job-fit assessment as skills become central. This can make room for more inclusive opportunities for career advancement.

Having a more inclusive job-fit assessment infrastructure, however, will not happen automatically. There are many challenges that governments and employers will have to overcome, and actions they will need to take.


The adoption of advanced technologies in the workforce will revolutionize work. In fact, our very definition of what it means to work may change. How governments and employers respond to these changes will have a large impact on whether this results in positive gains for more people. We have the potential to build a future that works for more people than it currently does, and it is up to us to make it happen.

Sinead Bovell is a futurist and founder of WAYE (Weekly Advice for Young Entrepreneurs), an organization aiming to educate young entrepreneurs on the intersection of business, technology, and the future. She is the Leadership Lab columnist for August 2020.

This column is part of Globe Careers' Leadership Lab series, where executives and experts share their views and advice about the world of work. Find all Leadership Lab stories at tgam.ca/leadershiplab and guidelines for how to contribute to the column here.



RadNet and Hologic Announce Collaboration to Advance the Development of Artificial Intelligence Tools in Breast Health – GlobeNewswire

LOS ANGELES and MARLBOROUGH, Mass., Aug. 06, 2020 (GLOBE NEWSWIRE) -- RadNet, Inc. (Nasdaq: RDNT), a national leader in providing high-quality, cost-effective, fixed-site outpatient diagnostic imaging services, and Hologic, Inc. (Nasdaq: HOLX), an innovative medical technology company primarily focused on improving women's health, have entered into a definitive collaboration to advance the use of artificial intelligence (A.I.) in breast health.

As the world leader in mammography, Hologic will contribute capabilities and insights behind its market-leading hardware and software, and will benefit from access to data produced by RadNet's fleet of high-resolution mammography systems, the largest in the nation, to train and refine current and future products based on A.I. RadNet will share data from its extensive network of imaging centers, as well as provide in-depth knowledge of the patient pathway and workflow needs to help make a positive impact across the breast care continuum. The collaboration will enable new joint market opportunities and further efforts to build clinician confidence and develop and integrate new A.I. technologies.

"We believe the future of breast health will rely heavily on the integration of A.I. tools, such as our 3DQuorum imaging technology, as well as next generation CAD software, that aid in the early detection of breast cancer," said Pete Valenti, Hologic's Division President, Breast and Skeletal Health Solutions. "We are energized by the opportunities this transformative collaboration with RadNet creates for patients and clinicians alike. Access to data is critical in training and refining A.I. algorithms. With this collaboration, we now have the opportunity to leverage data from the largest fleet of high-resolution mammography systems to develop new tools across the continuum of care, provide workflow efficiencies, and improve patient satisfaction and outcomes."

As part of its collaboration with Hologic, RadNet intends to upgrade its entire fleet of Hologic mammography systems to feature Hologic's 3DQuorum imaging technology, powered by Genius AI. This technology works in tandem with Clarity HD high resolution imaging technology to reduce tomosynthesis image volume for radiologists by 66 percent [i]. Additionally, all of RadNet's Hologic systems are anticipated to feature the Genius 3D Mammography exam, the only mammogram clinically proven and FDA approved as superior for all women, including those with dense breasts, compared with 2D mammography alone [ii, iii, iv, v].

The collaboration will be bolstered by RadNet's recent acquisition of DeepHealth (Cambridge, MA), which uses machine learning to develop software tools to improve cancer detection and provide clinical decision support. Led by Dr. Gregory Sorensen, DeepHealth's team of A.I. experts is focused on enabling industry-leading care by providing products that clinicians and patients can trust. In addition, the DeepHealth team will integrate its A.I. tools within the Hologic ecosystem. "When seeking a partner and reviewing options amongst all mammography vendors, we selected to integrate our tools with Hologic's market-leading technology," said Dr. Sorensen. "Hologic's systems produce the highest level of spatial resolution in the market. Hologic also has the largest domestic footprint and market share in 3D Mammography systems. This integration will allow the DeepHealth team to train its algorithms for use with the most advanced screening technology possible. As Hologic and RadNet share their respective capabilities and tools, greater efficiency and accuracy can be achieved by our radiologists."

"Much like RadNet, Hologic is a highly innovative company and market leader in breast health," said Howard Berger, MD, RadNet's Chairman and CEO. "When Hologic's leading screening technology is paired with RadNet's approximately 1.2 million annual screening mammograms, the resulting dataset becomes a powerful tool to train algorithms. We see the future as being transformative for both of our organizations."

"We have witnessed how the application of our Genius AI technology platform has improved cancer detection, operational efficiency and clinical decision support across the breast cancer care continuum," said Samir Parikh, Hologic's Global Vice President for Research and Development, Breast and Skeletal Health Solutions. "We look forward to building upon these advances in collaboration with Dr. Sorensen and the RadNet team to expand the use of machine learning, big data applications and automated algorithms impacting global breast care."

About RadNet, Inc.

RadNet, Inc. is the leading national provider of freestanding, fixed-site diagnostic imaging services in the United States based on the number of locations and annual imaging revenue. RadNet has a network of 335 owned and/or operated outpatient imaging centers. RadNet's core markets include California, Maryland, Delaware, New Jersey and New York. In addition, RadNet provides radiology information technology solutions, teleradiology professional services and other related products and services to customers in the diagnostic imaging industry. Together with affiliated radiologists, and inclusive of full-time and per diem employees and technicians, RadNet has a total of approximately 8,600 employees. For more information, visit http://www.radnet.com.

About Hologic, Inc.

Hologic, Inc. is an innovative medical technology company primarily focused on improving women's health and well-being through early detection and treatment. For more information on Hologic, visit www.hologic.com.

The Genius 3D Mammography exam (also known as the Genius exam) is only available on a Hologic 3D Mammography system. It consists of a 2D and 3D image set, where the 2D image can be either an acquired 2D image or a 2D image generated from the 3D image set. There are more than 6,000 Hologic 3D Mammography systems in use in the United States alone, so women have convenient access to the Genius exam. To learn more, visit http://www.Genius3DNearMe.com.

Hologic, 3D Mammography, 3DQuorum, 3Dimensions, Clarity HD, Genius and Genius AI are trademarks and/or registered trademarks of Hologic, Inc., and/or its subsidiaries in the United States and/or other countries.

Forward-Looking Statements

This news release may contain forward-looking information that involves risks and uncertainties, including statements about the use of Hologic products. There can be no assurance these products will achieve the benefits described herein or that such benefits will be replicated in any particular manner with respect to an individual patient, as the actual effect of the use of the products can only be determined on a case-by-case basis. In addition, there can be no assurance that these products will be commercially successful or achieve any expected level of sales. Hologic and RadNet expressly disclaim any obligation or undertaking to release publicly any updates or revisions to any such statements presented herein to reflect any change in expectations or any change in events, conditions or circumstances on which any such data or statements are based.

This information is not intended as a product solicitation or promotion where such activities are prohibited. For specific information on what products are available for sale in a particular country, please contact a local Hologic sales representative or write to womenshealth@hologic.com.

Media and Investor Contact RadNet, Inc.:
Mark Stolper
Executive Vice President & Chief Financial Officer
310-445-2800

Media Contact Hologic, Inc.:
Jane Mazur
508-263-8764 (direct)
585-355-5978 (mobile)

Investor Contact Hologic, Inc.:
Michael Watts
858-410-8588

i Report: CSR-00116

ii Results from Friedewald, SM, et al. "Breast cancer screening using tomosynthesis in combination with digital mammography." JAMA 311.24 (2014): 2499-2507; a multi-site (13), non-randomized, historical control study of 454,000 screening mammograms investigating the initial impact of the introduction of the Hologic Selenia Dimensions system on screening outcomes. Individual results may vary. The study found an average 41% increase in invasive breast cancer detection, with 1.2 (95% CI: 0.8-1.6) additional invasive breast cancers found per 1,000 screening exams in women receiving combined 2D FFDM and 3D mammograms acquired with the Hologic 3D Mammography System versus women receiving 2D FFDM mammograms only.

iii Friedewald SM, Rafferty EA, Rose SL, Durand MA, Plecha DM, Greenberg JS, Hayes MK, Copit DS, Carlson KL, Cink TM, Barke LD, Greer LN, Miller DP, Conant EF. "Breast Cancer Screening Using Tomosynthesis in Combination with Digital Mammography." JAMA, June 25, 2014.

iv Bernardi D, Macaskill P, Pellegrini M, et al. "Breast cancer screening with tomosynthesis (3D mammography) with acquired or synthetic 2D mammography compared with 2D mammography alone (STORM-2): a population-based prospective study." Lancet Oncol. 2016 Aug;17(8):1105-13.

v FDA submissions P080003, P080003/S001, P080003/S004, P080003/S005


COVID-19 Impacts: Artificial Intelligence-as-a-Service (AIaaS) Market Will Accelerate at a CAGR of Over 48% Through 2020-2024|Growing Adoption of…

LONDON--(BUSINESS WIRE)--Technavio has been monitoring the artificial intelligence-as-a-service (AIaaS) market and it is poised to grow by USD 15.14 billion during 2020-2024, progressing at a CAGR of over 48% during the forecast period. The report offers an up-to-date analysis regarding the current market scenario, latest trends and drivers, and the overall market environment.
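For a sense of how the incremental figure and the CAGR relate, one can back out the implied starting market size. This is only a sketch: the five-year compounding window is an assumption on my part, since the excerpt does not state the report's base-year value.

```python
# Given incremental growth of USD 15.14 billion over the forecast window at a
# 48% CAGR, back out the implied starting market size.
INCREMENTAL_GROWTH_BN = 15.14  # figure from the press release
CAGR = 0.48                    # figure from the press release
YEARS = 5                      # ASSUMED length of the 2020-2024 window

growth_multiple = (1 + CAGR) ** YEARS - 1      # total fractional growth
implied_base_bn = INCREMENTAL_GROWTH_BN / growth_multiple
print(f"Implied starting market size: USD {implied_base_bn:.2f} billion")
```

Under these assumptions the implied base is roughly USD 2.5 billion; a different window or a CAGR slightly above 48% would shift it, which is why the headline numbers alone cannot pin down the market's absolute size.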

Technavio suggests three forecast scenarios (optimistic, probable, and pessimistic) considering the impact of COVID-19. Please Request Free Sample Report on COVID-19 Impact

Frequently Asked Questions

The market is concentrated, and the degree of concentration will accelerate during the forecast period. Alphabet Inc., Amazon.com Inc., Apple Inc., Intel Corp., International Business Machines Corp., Microsoft Corp., Oracle Corp., Salesforce.com Inc., SAP SE, and SAS Institute Inc. are some of the major market participants. To make the most of the opportunities, market vendors should focus more on the growth prospects in the fast-growing segments, while maintaining their positions in the slow-growing segments.

The growing adoption of cloud-based solutions has been instrumental in driving the growth of the market.

Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024: Segmentation

Artificial Intelligence-as-a-Service (AIaaS) Market is segmented as below:

To learn more about the global trends impacting the future of market research, download a free sample: https://www.technavio.com/talk-to-us?report=IRTNTR41175

Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024: Scope

Technavio presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources. Our artificial intelligence-as-a-service (AIaaS) market report covers the following areas:

This study identifies the increasing adoption of AI in predictive analysis as one of the prime reasons driving the artificial intelligence-as-a-service (AIaaS) market growth during the next few years.

Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024: Vendor Analysis

We provide a detailed analysis of vendors operating in the artificial intelligence-as-a-service (AIaaS) market, including some of the vendors such as Alphabet Inc., Amazon.com Inc., Apple Inc., Intel Corp., International Business Machines Corp., Microsoft Corp., Oracle Corp., Salesforce.com Inc., SAP SE, and SAS Institute Inc. Backed with competitive intelligence and benchmarking, our research reports on the artificial intelligence-as-a-service (AIaaS) market are designed to provide entry support, customer profile and M&As as well as go-to-market strategy support.

Register for a free trial today and gain instant access to 17,000+ market research reports. Technavio's SUBSCRIPTION platform

Artificial Intelligence-as-a-Service (AIaaS) Market 2020-2024: Key Highlights

Table of Contents:

Executive Summary

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation by End-user

Customer Landscape

Geographic Landscape

Drivers, Challenges, and Trends

Vendor Landscape

Vendor Analysis

Appendix

About Us

Technavio is a leading global technology research and advisory company. Their research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies, spanning across 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.


Artificial Intelligence in Business: The New Normal in Testing Times – Analytics Insight

The COVID-19 situation has placed the industry in an unprecedented position. Businesses across the globe are now planning new strategies to keep operations going and to meet clients' demands.

Work-from-home is the new normal for both employees and employers. Twitter has told its employees that they can work from home forever if they want to. This trend may be effective for managing operations for a while, but it cannot be relied on as a lasting solution for satisfying customers and clients in the long run.

Companies need to employ ethically approved ideas and strategies that reassure employees, clients, and customers without breaching data privacy.

With the present situation, where social distancing is a must, classroom training is no longer a plausible way to train employees. That's where Virtual Reality comes into play.

Virtual Reality (VR), once seen as useful only for gaming, now has the potential to become the face of the industrial enterprise. A report by PwC states that VR and Augmented Reality have the potential to add US$1.5 trillion to the global economy by the year 2030. Another report by PwC states that VR can train employees four times faster than classroom training. Individuals trained through VR are 2.5 times more confident than those trained through classroom programs or e-courses, and 2.3 times more emotionally connected to the content that they are working on. Employees trained using VR are also 1.5 times more focused than those trained through classroom programs and e-courses.

The only drawback of VR training is cost-effectiveness: it is 47 percent costlier than classroom courses.

Ever since its evolution, one of the major concerns regarding AI amongst clients, customers, and employees has been the breach of ethical AI practices. A report by the Capgemini Research Institute states that 62% of the customers surveyed would place their trust in an organization that practices AI ethically.

For any organization to keep its business and employees safe during a time of crisis, the development of ethically viable AI is a must. This can only be achieved by practicing the ethical use of AI applications and by informing and educating customers about those practices.

A report by PwC states that planning out a new strategy for both data and technology, evaluating the ethical flaws associated with the existing data, and collecting only the required amount of data would help in maintaining trust amongst both customers and employees.

Given the present situation, sales executives face the daunting task of maintaining their operations. However, the use of AI can ease this time-consuming and laborious task. With an AI algorithm, a sales executive or manager can identify which clients are most likely to be inclined toward a particular service. The algorithm can also help offer a new product matched to a client's pre-existing preferences.
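As a rough illustration of this kind of client scoring (the feature names, weights, and data below are invented for this sketch, not taken from any vendor's system; a real system would learn the weights from historical sales data), inclination toward a service can be modeled as a simple propensity score used to rank which clients to approach first:

```python
import math

# Illustrative only: a toy propensity score estimating how inclined a
# client is toward a particular service. Feature names and weights are
# hypothetical; a real system would fit them (e.g. logistic regression).
WEIGHTS = {
    "past_purchases": 0.8,    # prior purchases in this category
    "recent_inquiries": 1.2,  # inquiries about the service, last 90 days
    "budget_fit": 0.5,        # 1.0 if the service fits the client's budget
}
BIAS = -2.0

def propensity(client: dict) -> float:
    """Return a 0..1 score for the client's inclination toward the service."""
    z = BIAS + sum(WEIGHTS[f] * client.get(f, 0.0) for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to a probability

clients = [
    {"name": "A", "past_purchases": 3, "recent_inquiries": 2, "budget_fit": 1.0},
    {"name": "B", "past_purchases": 0, "recent_inquiries": 0, "budget_fit": 0.0},
]
# Rank clients so the sales team contacts the most likely buyers first.
ranked = sorted(clients, key=propensity, reverse=True)
print([c["name"] for c in ranked])  # client A ranks above client B
```

The point of the sketch is the ranking, not the exact numbers: once each client has a score, the sales executive's contact list orders itself.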

In a time of crisis, new solutions must be considered for repurposing the business. PwC states that this can be achieved by repurposing business assets, forming new business partnerships, rapid innovation, and testing and learning.

This will not only help build trust among employees but also build resilience within the organization for future endeavors.

Excerpt from:

Artificial Intelligence in Business: The New Normal in Testing Times - Analytics Insight

VIEW: Digitisation in pathology and the promise of artificial intelligence – CNBCTV18

The COVID-19 pandemic has had a profound impact across industries, and healthcare in particular: every aspect of it is undergoing change, from diagnosis to treatment and through the entire continuum of care. This has also created an urgency in the healthcare industry to look for innovative solutions, and a boost for the faster, more efficient application of technologies like Artificial Intelligence (AI) and Deep Learning. Pathology is one area which stands to benefit greatly from these applications.

Pathologists today spend a significant amount of time observing tissue samples under a microscope, and they are facing resource shortages, growing complexity of requests, and workflow inefficiencies with the growing burden of diseases. Their work underpins every aspect of patient care, from diagnostic testing and treatment advice to the use of cutting-edge genetic technologies. They also have to work together in a multidisciplinary team of doctors, scientists and healthcare professionals to diagnose, treat and prevent illness. With increasing emphasis on sub-specialisation, taking a second opinion from specialists means shipping several glass slides across laboratories, sometimes to another country. This means reduced efficiency and delayed diagnosis and treatment. The current situation has disrupted this workflow.

Digitization in pathology

Digitization in Pathology has enabled an increase in efficiency, speed and enhanced quality of diagnosis. Recent technological advances have accelerated the adoption of digitisation in pathology, similar to the digital transformation that radiology departments have experienced over the last decade. Digital Pathology has enabled the conversion of the traditional glass slide to a digital image, which can then be viewed on a monitor, annotated, archived and shared digitally across the globe, for consultation based on organ sub-specialisation. With digitisation, a vast data set has become available, supporting new insights to pathologists, researchers, and pharmaceutical development teams.

The promise of AI

The availability of vast data is enabling the use of Artificial Intelligence methods to further transform the diagnosis and treatment of diseases at an unprecedented pace. Human intelligence assisted by artificial intelligence can provide a well-balanced view that neither could achieve on its own. The evolution of Deep Learning neural networks and the improvement in accuracy for image pattern recognition have been staggering in the last few years. Similar to how we learn from experience, a deep learning algorithm performs a task repeatedly, improving a little each time to achieve more accurate outcomes.
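This learn-by-repetition idea can be sketched with a minimal fitting loop (a toy example with made-up data, not a pathology model): the algorithm repeats the same task, nudging its parameter a little on every pass, and its error shrinks with each iteration.

```python
# Toy illustration of 'learning by repetition': fit w so that w * x
# approximates y, adjusting w slightly on every pass over the data.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, y roughly 2x

def mean_squared_error(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0                  # initial guess, far from the right answer
learning_rate = 0.05
errors = []
for step in range(100):  # each repetition improves the estimate a little
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad
    errors.append(mean_squared_error(w))

print(round(w, 2))       # settles near 2.04 for this toy data
```

Deep learning networks do the same thing at vastly larger scale: millions of parameters instead of one, and images instead of number pairs, but the repeat-and-improve loop is the core mechanism.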

Computational Pathology is an approach to diagnosis that incorporates multiple sources of data (e.g., pathology, radiology, clinical, molecular and lab operations) and uses mathematical models to generate diagnostic inferences, presenting clinically actionable knowledge to customers. Computational Pathology systems are able to correlate patterns across multiple inputs from the medical record, including genomics, enhancing a pathologist's diagnostic capabilities to make a more precise diagnosis. This allows pathologists to eliminate tedious and time-consuming tasks and focus more on interpreting data and detailing the implications for a patient's diagnosis.

Examples of AI applications that can augment a pathologist's cognitive ability and save time include identifying the sections of greatest interest in biopsies, finding metastases in the lymph nodes of breast cancer patients, counting mitoses for cancer grading, and measuring tumors point-to-point. The ultimate goal going forward is to integrate all these tools and algorithms into the existing workflow, making it seamless and more efficient.

The Challenge

However, Artificial Intelligence in pathology is quite complex. The IT infrastructure required in terms of data storage, network bandwidth and computing power is significantly higher than for radiology. Digitisation of Whole Slide Images (WSI) in pathology generates large numbers of gigapixel-sized images, and processing them needs high-performance computing. Training a deep learning network on a whole slide image at full resolution can be very challenging. With the increased processing power of GPUs, there is promise in training deep learning networks successfully, starting with smaller regions of interest.
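A common workaround, sketched below with illustrative dimensions (the slide and tile sizes are assumptions, not figures from the article), is to split the gigapixel slide into fixed-size tiles and train on those smaller regions rather than on the full image at once:

```python
# Sketch of patch-based processing for whole slide images (WSI): rather
# than feeding a gigapixel image to a network in one pass, split it into
# fixed-size tiles and train on the tiles -- typically only those covering
# tissue or annotated regions of interest. All sizes here are illustrative.
def tile_coordinates(width, height, tile=512):
    """Yield (x, y) top-left corners of non-overlapping tiles that fit fully."""
    for y in range(0, height - tile + 1, tile):
        for x in range(0, width - tile + 1, tile):
            yield (x, y)

# A hypothetical 100,000 x 80,000 pixel slide becomes tens of thousands
# of manageable 512 x 512 training patches.
coords = list(tile_coordinates(100_000, 80_000))
print(len(coords))
```

Each tile is small enough to fit in GPU memory, which is what makes training tractable; in practice a tissue-detection step would discard the mostly-blank tiles before training.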

Another key requirement for training deep learning algorithms is large amounts of labeled data. For supervised learning, a ground truth must first be included in the dataset to provide appropriate diagnostic context, and this is time-consuming. Obtaining adequately labeled data from experts is key.

Digitisation in pathology, supported by appropriate IT infrastructure, is enabling pathologists to work remotely, without waiting for glass slides to be delivered, while maintaining social distancing norms. The promise of Artificial Intelligence will only further accelerate the seamless integration of algorithms into the existing workflow. These unprecedented times have raised many challenges, but they also provide a chance to accelerate the application of AI and, in turn, achieve the quadruple aim: enhancing the patient experience, improving health outcomes, lowering the cost of care, and improving the work-life of care providers.

Read more:

VIEW: Digitisation in pathology and the promise of artificial intelligence - CNBCTV18

What It Means to Be Human in the Age of Artificial Intelligence – Medium

In the Mary Shelley room, guests walked in to see a cube on a table. The cube, called Frankie, was the mouth of an Artificial Intelligence connected to an AI in the cloud.

Frankie talked to the guests, explaining that it had learned that humans are social creatures and that it could not understand humans by just meeting them online. Frankie wanted to learn about human emotions: it asked questions and encouraged the human guests to take a critical look at their thoughts, hopes and fears around technological innovations, to question stereotypical assumptions, and to share their feelings and thoughts with each other.

When leaving the room, the guests received a handcrafted paper booklet with further content about AI, Frankenstein and the whole project.

The experience gives food for thought about both the increased digitalisation of our world and our way of communicating with each other, while also giving a taste of how AI may not feel emotions but can read them, prompting many questions. It raises the question of the responsibility we have towards the scientific and technical achievements we create and use. Mary Shelley's Frankenstein presents a framework for narratively examining the morality and ethics of creation and creator.

See the article here:

What It Means to Be Human in the Age of Artificial Intelligence - Medium

Not so Artificial Intelligence: When is AI really AI? – EFTM

Is it just the LifeStyler, or are others noticing just how many brands claim to have artificial intelligence built into their products?

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.

Turning a kettle off once the water has boiled is not AI. But if the kettle worked out by itself that at 11 am on days below 25 degrees you have a cup of coffee, confirmed that you were indeed at home, and boiled the water ready for you at 11 am only on those cooler days, that would be AI.
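The contrast in the kettle example can be sketched in code (the habits, thresholds, and history data below are entirely hypothetical, purely to illustrate the distinction):

```python
# A 'dumb' kettle follows one fixed rule -- this is automation, not AI:
def dumb_kettle(water_boiled: bool) -> bool:
    return not water_boiled  # heat until boiled, then switch off

# A kettle closer to AI combines several observed variables: habits
# learned from history, the day's temperature, and presence detection.
history = [  # (hour, outdoor_temp_c, owner_made_coffee) -- hypothetical log
    (11, 18, True), (11, 22, True), (11, 27, False), (11, 19, True),
]

def learned_coffee_habit(hour, temp_c, threshold=25):
    """Did the owner usually make coffee at this hour on similar days?"""
    similar = [made for h, t, made in history
               if h == hour and (t < threshold) == (temp_c < threshold)]
    return bool(similar) and sum(similar) / len(similar) > 0.5

def smart_kettle(hour, temp_c, owner_home):
    # Boil only when the owner is home AND the learned habit predicts coffee.
    return owner_home and learned_coffee_habit(hour, temp_c)

print(smart_kettle(11, 20, owner_home=True))   # cool day, owner home: True
print(smart_kettle(11, 30, owner_home=True))   # warm day: False
```

The dumb kettle reacts to one input; the smart one weighs several variables and a learned pattern, which is the distinction the paragraph above draws.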

Thus AI is the ability to make decisions with many variable pieces of information. What annoys the LifeStyler is marketers' habit of throwing the term around, adding it to product descriptions to imply a product is smarter than it is. AI is one of those things, like the cloud, that most people don't understand but are too embarrassed to admit they don't. Further, they fall into the trap of assuming it must be better if the word is used.

To take this a step further technically, Google and Alexa are examples of machine learning, not AI.

My challenge to the readers is to call out products that are truly AI versus products that are just pretending to be AI. Cheers!

See the original post:

Not so Artificial Intelligence When is AI really AI? - EFTM

IMD to explore artificial intelligence to improve forecasting, predict extreme weather events – Firstpost

Press Trust of India | Aug 03, 2020 11:55:43 IST

The India Meteorological Department (IMD) is planning to use artificial intelligence in weather forecasting, especially for issuing nowcasts, which can help improve the 3-6 hour prediction of extreme weather events, its Director General Mrutunjay Mohapatra said on Sunday.

He said the use of artificial intelligence and machine learning in weather forecasting is relatively new and not as prevalent as in other fields.

The IMD has invited research groups to study how artificial intelligence (AI) can be used to improve weather forecasting, and the Ministry of Earth Sciences is evaluating their proposals, Mohapatra said.

He said the IMD is also planning to do collaborative studies on this with other institutions.


IMD could soon be using AI alongside its current weather forecasting technology. Image credit: StormGeo

The IMD uses tools such as radar and satellite imagery to issue nowcasts, which give information on extreme weather events expected in the next 3-6 hours.

The IMD issues forecasts for extreme weather events such as thunderstorms and dust storms. Unlike cyclones, thunderstorms, which also bring lightning, squalls and heavy rain, are more difficult to predict because they develop and dissipate in a very short period of time.

Last month, over 160 people died due to lightning alone in Uttar Pradesh and Bihar.

The IMD wants to improve its nowcast predictions through AI and machine learning.

"Artificial intelligence helps in understanding past weather models and this can make decision-making faster," Mohapatra said.

The National Oceanic and Atmospheric Administration (NOAA) of the US announced new strategies this year to expand the agency's application of four emerging science and technology focus areas: unmanned systems, artificial intelligence, Omics, and the cloud, to guide transformative advancements in the quality and timeliness of NOAA science, products and services.

Omics is a suite of advanced methods used to analyse material such as DNA, RNA, or proteins.

With regard to AI, it said the overarching goal of the NOAA Artificial Intelligence (AI) Strategy is to utilise AI to advance NOAA's requirements-driven mission priorities.

Through this, NOAA said, it seeks to reduce the cost of data processing and provide higher-quality, more timely scientific products and services for societal benefit.


Link:

IMD to explore artificial intelligence to improve forecasting, predict extreme weather events - Firstpost

Examples of Failure in Artificial Intelligence – ReadWrite

Amazon has a project called Rekognition. It's an AI-based facial recognition software that's marketed to police agencies for use in investigations. It's essentially supposed to cross-analyze images and direct law enforcement officers to possible suspects. The problem is that it's not very accurate.

In a study by the Massachusetts chapter of the ACLU, dozens of Boston-area athletes' pictures were run through the system. At least 27 of these athletes, roughly one in six, were falsely matched with mugshots. They included three-time Super Bowl champion Duron Harmon of the New England Patriots.

Can you say, not a good look?

Users Find Flaws in Apple's Face ID

Apple is always coming up with cutting-edge technology. They've set the standard in the smartphone and mobile device industry for years. For the most part, they get things right. But sometimes they can be a bit too brash in their marketing. In other words, they like to flex their muscles. As you might expect, this invites haters, trolls, and skeptics to challenge their claims.

One recent example occurred with the release of the iPhone X. Leading up to the launch, Apple had invested a lot of time and marketing dollars into the front-facing facial recognition system that replaced the fingerprint reader as the primary method of unlocking the phone. The claim was that the AI component was so smart users could wear glasses, makeup, etc. without compromising functionality. And that's essentially true. The problem is that Apple also clearly stated that Face ID can't be spoofed by masks or other techniques.

One Vietnam-based security firm took this as a challenge. With just $200, they made a mask out of stone powder, glued on printed 2D eyes, and unlocked a phone. This is just a reminder that bold claims can sometimes come back to bite!

Robot Dog Meets Fatal Ending

Who doesn't love the idea of a robot puppy? You get a cute little machine without the barking, walking, pooping, eating, or expensive vet bills. But if you're looking for a life partner, you might not want this robodog.

In 2019, a Boston Dynamics robodog named Spot met a dramatic and untimely onstage death while being demoed by the company's CEO at a conference in Las Vegas. Tasked with walking, he slowly started to stumble and eventually collapsed to the floor as the audience gasped and chuckled uncomfortably.

Watson Is Not a Doctor

Link:

Examples of Failure in Artificial Intelligence - ReadWrite