
Category Archives: Ai

Building better data foundations to make the most of AI – ComputerWeekly.com

Posted: October 26, 2021 at 5:16 pm

In September 2021, the government published its inaugural National AI Strategy, an ambitious 10-year vision for the UK's digital future. As well as the requirement to improve governance and regulation of artificial intelligence (AI), the strategy recognises that increasing resilience, productivity and economic growth will require businesses across the UK to use data much more effectively than they do now.

To achieve this, companies in every region and sector need to ensure their organisation's data is fit for purpose, recorded in standardised formats on modern, future-proof systems and stored in a way that makes it findable, accessible, interoperable and reusable. If these basic data foundations are not constructed properly, then efforts to build new AI services and products will inevitably be undermined. Ultimately, poor data foundations will compromise trust in data and may mean that the UK fails to deliver on its AI vision.

What are the barriers holding back businesses from building better data foundations, and how can these be overcome? To answer these questions, EY was appointed by the Department for Digital, Culture, Media and Sport (DCMS) to gather evidence to assess the extent of data foundations and the adoption of AI in the UK.

The research was based on surveys, interviews and a rapid review of other high-quality reports and evidence. Businesses were asked about the extent to which data foundations and AI were being adopted and the barriers to their use. To ensure that the research was as representative as possible, it included points of view from across the UK economy from large businesses, to small and medium-sized enterprises (SMEs), and organisations in the third sector.

Although our survey showed that 80% of organisations believed that improving data foundations would increase productivity, there is still a sizeable gap between the potential and actual impact of data to drive long-term economic value.

We concluded that, despite accessibility, interoperability and usability of data generally being regarded by organisations as good, it was clear that many data-related initiatives have yet to reach full maturity. Indeed, businesses of all sizes and from all sectors believed they could be more successful if improvements were made in the following areas:

Quality of data: 41% of organisations in the survey selected quality as the most challenging characteristic of their data, and 90% said they had a dedicated data strategy or data-improvement initiatives in place.

Skills: 14% of organisations said that recruiting and retaining appropriately skilled personnel was their biggest current challenge, and 79% said the government's main priority should be investment in developing relevant data skills.

Infrastructure: 14% of organisations cited challenges with access to suitable technology and data infrastructure, and with fixing issues in legacy IT systems.

EY's research also highlighted a divide between large businesses on one side and SMEs and the third sector on the other. SMEs are more likely to be profoundly affected by issues linked to data foundations. For example, they tend to struggle more than large companies to access suitable technology and data infrastructure, and they can't always compete in what is already a difficult jobs market for data skills.

This means that SMEs tend to lag behind larger companies in their adoption of AI. This is borne out in the results of our survey: although 90% of large organisations have already adopted AI or plan to do so soon, this drops to less than half (48%) for SMEs.

By drawing on a mix of technological and cultural approaches, businesses across all regions and sectors, and of all sizes, can improve their ability to use their data more effectively. For example, businesses can:

Build trust in data: By adopting robust data governance processes coupled with a data fabric (a set of independent software services from the field of AI that create a consistent and expanded data experience across the enterprise), businesses can track data throughout its life. This, in turn, ensures secure ingestion, processing and use; reduces excessive movement, copying and inconsistent use; eliminates duplication; and implements data quality and control protocols so that all data, and the AI it supports, can be trusted.

Tap into new pools of talent: By activating the apprenticeship levy, businesses can access young talent and shape the digital skills needed for them to remain competitive. Also, upskilling and reskilling existing employees helps introduce new skills into an organisation and enables people to shift from routine activities to value-adding work. Businesses can also build and maintain more flexible access to skills through contractors and freelancers.

Switch to platform thinking: By adopting a cloud-based approach, particularly when combined with a data fabric, businesses can simplify application integration so that data can move around an organisation to be enriched, processed and visualised at any point of need rather than remaining locked up in legacy silos or being replicated across multiple applications.

It is difficult to predict just how much value can be unlocked in the economy by having better data foundations to increase the adoption and effectiveness of AI. But EY's research suggests that businesses do understand the correlation between better use of data and the increased economic and social value that can be realised using AI. The reality, though, is that they have little choice: no other resource, natural or artificial, offers the same degree of potential as data.

Harvey Lewis is associate partner and chief data scientist in EY's tax practice.


Faces as the Future of AI – insideBIGDATA

Posted: at 5:16 pm

Humans are hardwired to look at each other's faces. Three-month-old infants prefer looking at faces when given a chance. We have a separate brain region devoted to facial recognition, and a person can fail to recognize faces while all the rest of their visual processing functions perfectly well (a condition known as prosopagnosia). We are much better at recognizing faces and emotions than virtually anything else; in 1973, Herman Chernoff even suggested using drawings of faces for multivariate data visualization.

For us humans, it makes sense to specialize in faces. We are social animals whose brains probably evolved for social reasons and who urgently need not only to distinguish individuals but also to recognize variations in emotion: the difference between fear and anger in a fellow primate might mean life or death. But it turns out that in artificial intelligence, problems related to human faces are also coming to the forefront of computer vision. Below, we consider some of them, discuss the current state of the art, and introduce a common solution that might advance it in the near future.

Common Issues in Computer Vision

First, face recognition itself has obvious security-related applications, from unlocking your phone to catching criminals with CCTV cameras. Usually face recognition is an added layer of security, but as the technology progresses, it might rival fingerprints and other biometrics. Formally, it is a classification problem: choose the correct answer out of several alternatives. But there are a lot of faces, and we need to add new people on the fly. Therefore, face recognition systems usually operate by learning to extract features, i.e., mapping the picture of a face to a much smaller space of features and then performing information retrieval in this feature space. Feature learning is almost invariably done with deep neural networks. While modern face recognition systems achieve excellent results and are widely used in practice, this problem continues to give rise to new fundamental ideas in deep learning.
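The pipeline described above (embed faces into a feature space, then retrieve the nearest enrolled identity) can be sketched in a few lines. The embedding network itself is omitted; the vectors, class names, and similarity threshold below are purely illustrative assumptions, not any production system's values:

```python
import numpy as np

class FaceIndex:
    """Toy gallery: identify faces by nearest neighbour in feature space."""

    def __init__(self):
        self.names, self.vecs = [], []

    def enroll(self, name, embedding):
        # A single reference embedding suffices to add a new person on the fly,
        # without retraining the feature extractor.
        v = np.asarray(embedding, dtype=float)
        self.vecs.append(v / np.linalg.norm(v))
        self.names.append(name)

    def identify(self, embedding, threshold=0.6):
        # Cosine similarity against every enrolled face; below the
        # threshold we report "unknown" rather than force a match.
        v = np.asarray(embedding, dtype=float)
        v = v / np.linalg.norm(v)
        sims = np.array([v @ g for g in self.vecs])
        best = int(np.argmax(sims))
        return self.names[best] if sims[best] >= threshold else None
```

Note that enrolling a new identity is just appending one vector to the index, which is exactly why the retrieval formulation scales to new people without retraining.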

Emotion recognition (classifying facial expressions) is another human forte, but automating it is important. AI assistants can be more helpful if they recognize emotions, and a car might recognize whether the driver is about to fall asleep at the wheel (this technology is close to production). There are also numerous medical applications: emotions (or the lack of them) are important in diagnosing Parkinson's disease, strokes and cortical lesions, and much more. Again, emotion recognition is a classification problem, and the best results are achieved by fairly standard deep learning architectures, although medical applications usually augment images with other modalities such as respiration or electrocardiograms.

Gaze estimation, i.e., predicting where a person is looking, is important for smartphones, AR/VR, and various eye tracking applications such as, again, car safety. This problem does not require large networks because the input images are rather small, but results keep improving, most recently with few-shot adaptation to a specific person. The current state of gaze estimation is already sufficient to create AR/VR software controlled entirely by gaze, and we expect this market to grow very rapidly.

Segmentation, a classical computer vision problem, is important for human faces as well, mostly for video editing and similar applications. If you want to cut a person out really well, say, to add a custom background in your video conferencing app, segmentation turns into background matting, a much harder problem in which the mask is not binary but can take fractional, semi-transparent values. This matters at object boundaries, hair, glasses, and the like. Background matting has only very recently started getting satisfactory solutions, and there is a lot to be done yet.
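The difference between a binary segmentation mask and a matte can be made concrete with the standard compositing equation I = alpha*F + (1 - alpha)*B, where alpha is per-pixel opacity. A minimal NumPy sketch, with toy image arrays standing in for real frames:

```python
import numpy as np

def composite(foreground, background, alpha):
    """Alpha compositing: I = alpha*F + (1 - alpha)*B, per pixel.

    alpha is 1.0 on solid foreground, 0.0 on pure background, and
    fractional on soft regions such as hair or glass. Restricting
    alpha to {0, 1} recovers ordinary binary segmentation."""
    alpha = np.asarray(alpha, dtype=float)[..., None]  # broadcast over RGB channels
    fg = np.asarray(foreground, dtype=float)
    bg = np.asarray(background, dtype=float)
    return alpha * fg + (1.0 - alpha) * bg
```

The fractional alpha values are precisely what a matting model must predict and what makes the labeling so much harder than drawing a binary mask.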

Many specialized face-related problems rely on facial keypoint detection, the problem of finding characteristic points on a human face. A common keypoint scheme includes several dozen points (68 in the popular IBUG scheme) that all need to be labeled on a face. Facial keypoints can serve as the first step for tracking faces in images and video, recognizing faces and facial expressions, and numerous biometric and medical applications. State-of-the-art solutions exist based both on deep neural networks and on ensembles of classical models.

The Limitations of Manually Labeled Data

Face-related problems represent an important AI frontier. Interestingly, most of them struggle with the same obstacle: lack of labeled training data. There exist datasets with millions of faces, but a face recognition system has to add a new person from just one or two photos. In many other problems, manually labeled data is challenging and costly to obtain. Imagine how much work it is to manually draw a segmentation mask for a human face, and then imagine that you have to make this mask soft for background matting. Facial keypoints are also notoriously difficult to label: in engineering practice, researchers even have to explicitly account for human labeling biases that vary across datasets. Lack of representative training data has also led to bias in deployed models, resulting in poor performance for certain ethnicities.

Moreover, significant changes in conditions often render existing datasets virtually useless: you might need to recognize faces from an infrared camera of a smartphone that users hold below their chins, but the datasets only provide frontal RGB photos. This lack of data can impose a hard limit on what AI researchers can do.

Synthetic Data Presents a Solution

Fortunately, a solution is already presenting itself: many AI models can be trained on synthetic data. If you have a CGI-based 3D human head crafted with sufficient fidelity, this head can be put in a wide variety of conditions, including lighting, camera angles, camera modalities, backgrounds, occlusions, and much more. Even more importantly, since you control everything going on in your virtual 3D scene, you know where every pixel is coming from and can get perfect labeling for all of these problems for free, even hard ones like background matting. Every 3D model of a human head can give you an endless stream of perfectly labeled, highly varied data for any face-related problem. What's not to like?
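The "labels for free" argument can be illustrated with a toy generator: because the script chooses every scene parameter, the ground-truth labels are read straight from those parameters rather than annotated by hand. Everything here (parameter names, ranges, and the stand-in renderer string) is invented for illustration:

```python
import random

def render_synthetic_sample(rng):
    """Toy illustration of perfect labels for free: every label comes
    directly from scene parameters we chose ourselves, so no manual
    annotation is ever needed. All names and ranges are made up."""
    scene = {
        "yaw": rng.uniform(-90.0, 90.0),       # head pose we set ourselves
        "lighting": rng.choice(["indoor", "outdoor", "infrared"]),
        "box": (rng.randint(0, 50), rng.randint(0, 50), 64, 64),
    }
    # Stand-in for an actual CGI renderer producing an image.
    image = f"render({scene['lighting']}, yaw={scene['yaw']:.1f})"
    # Labels are exact by construction, even for hard targets.
    labels = {"yaw": scene["yaw"], "box": scene["box"]}
    return image, labels
```

Varying the lighting key is also how one would address the infrared-camera scenario mentioned below: the same head can simply be re-rendered under the new modality.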

Synthetic data appears to be a key solution, but it raises questions. First, synthetic images cannot be perfectly photorealistic, leading to the domain shift problem: models are trained on the synthetic domain but used on real images. Second, creating a new 3D head from scratch is a lot of manual labor, and variety in synthetic data is essential, so (at least semi-)automatic generation of synthetic data will probably see much more research in the near future. However, in practice, synthetic data is already proving itself for human faces even in its most straightforward form: creating hybrid synthetic-plus-real datasets and training standard models on this data.

Let us summarize. Several important computer vision problems related to human faces are increasingly finding real-world applications in security, biometrics, AR/VR, video editing, car safety, and more. Most of them are far from solved, and the amount of labeled data for such problems is limited because real data is expensive. Fortunately, it appears that synthetic data is picking up the torch. Human faces may well be the next frontier for modern AI, and it looks like we are well-positioned to get there.

About the Author

Sergey I. Nikolenko is Head of AI at Synthesis AI, a San Francisco-based company specializing in the generation and use of synthetic data for modern machine learning models. Sergey is a computer scientist specializing in machine learning and analysis of algorithms. He also serves as Head of the Artificial Intelligence Lab at the Steklov Mathematical Institute in St. Petersburg, Russia. Sergey's interests include synthetic data in machine learning; deep learning models for natural language processing, image manipulation, and computer vision; and algorithms for networking. Sergey has authored a seminal text in the field, Synthetic Data for Deep Learning, published by Springer.


This AI predicts how old children are. Can it keep them safe? – Wired.co.uk

Posted: at 5:16 pm

Predicting how old someone is based only on how they look is incredibly hard to get right, especially in those awkward early teen years. And yet bouncers, liquor store owners, and other gatekeepers of age-restricted goods make that quick estimation all the time.

Their predictions are often wrong. Now London-based digital identity company Yoti believes its AI-powered age estimation can predict how old someone is anywhere from age six to 60. For the first time, it claims, it can accurately determine whether children are under or over 13, the minimum age many social media firms require their users to be.

Yoti's image technology may be increasingly appealing as Big Tech and internet services face growing scrutiny over how children use their products. However, privacy advocates say automatically analysing people's faces normalizes surveillance, is largely unregulated, and has the potential to show bias.

Yoti says its age estimation technology, which it has developed over the last three years, has a margin of error of 2.79 years across its total 45-year age range. For under-25s, the margin of error drops below 1.5 years. In the next few weeks, it will get brick-and-mortar tests at five major supermarket chains in the UK. The company hasn't named the supermarket brands but says a number of unnamed pornography and gaming websites are also trialing the tech to stop underage visitors. It adds that its age estimation technology is already being used by the children's streaming social network Yubo and the healthy living app Smash.

Point a camera running Yoti's software at your face (it can work through the web on your phone, laptop, or tablet, or at a self-checkout terminal) and the system estimates your age range. On multiple tests using a browser-based staging environment on my phone, the system correctly put me at between 27-31 and 28-32 years old. The company says neither it nor its clients store the images it captures, and you don't need to register to use it. "It's not identifying, it's not authenticating any individual," says Julie Dawson, director of regulatory and policy at Yoti. The company claims it is not facial recognition because it cannot identify individuals. When it sees a new face, it just spits out the estimated age of that individual, Dawson says.

Yoti clients can also use thresholds for age estimations: for instance, setting an estimation limit of 25 when someone in the UK buying alcohol has to be over 18 by law. Anyone flagged as under that threshold could then be asked to provide an ID. The system also lets its customers know how confident it is in any given estimate.
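A threshold scheme like the one described reduces to a simple decision rule: approve only when the estimate clears a buffer above the legal age and the model reports enough confidence, otherwise fall back to a manual ID check. The function below is a hypothetical sketch of that rule, not Yoti's actual logic; the confidence cutoff is an invented parameter:

```python
def age_gate(estimated_age, confidence, threshold=25, min_confidence=0.8):
    """Hypothetical age-gate decision rule.

    The threshold (25 here, versus a legal age of 18 for UK alcohol sales)
    acts as a buffer that absorbs the model's margin of error. Low-confidence
    or under-threshold estimates trigger a human ID check, never an
    automatic refusal."""
    if confidence < min_confidence or estimated_age < threshold:
        return "request_id"
    return "approve"
```

For example, a confident estimate of 30 passes straight through, while a confident 22 or an unconfident 40 both route to the checkout assistant for an ID check.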

The company has trained its neural networks with hundreds of thousands of pictures of people's faces, says Yoti cofounder and CEO Robin Tombs. It has mostly collected those faces itself, through its standalone Yoti app, which lets people verify their ID with governments and other bodies by uploading official documents like passports and driver's licenses. When people upload their details to the Yoti app, they have the option to opt out of the data being used to train Yoti's AI. The company itself is unsure what facial features its AI uses to determine people's age. "We have to be honest," Tombs says. "We don't really know whether it's to do with wrinkles, or saggy eyes, or quite what. It has just done so many that it is now very good at it."


AI Experts Discuss the Relationship Between AI and its Users at Radcliffe Symposium | News – Harvard Crimson

Posted: at 5:16 pm

Experts on artificial intelligence discussed the future of AI, its ethical implications, and its practical applications at a virtual symposium hosted by Harvard's Radcliffe Institute for Advanced Study on Friday.

The symposium, titled "Decoding AI: The Science, Politics, Applications, and Ethics of Artificial Intelligence," was divided into four speaker sessions.

In the first session, Fernanda Viégas, a professor of computer science at Harvard, unpacked what machine learning means, explaining that machine learning eliminates the need for rule-based programming. Programmers instead specify a goal, known as an objective function.

"The system creates its own sets of rules to understand that space," Viégas said. "This is really great because what it does is that it unlocks the possibility of us trying to solve problems for which we don't know the rules."

Viégas said machine learning can be useful in fields such as medicine, where AI can diagnose diseases based on large amounts of data.
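The contrast Viégas draws, stating an objective rather than hand-coding rules, shows up in even the simplest learned model. The toy below fits y ≈ w·x by minimizing a squared-error objective with gradient descent; all names and constants are illustrative, not from the talk:

```python
def fit(xs, ys, steps=200, lr=0.01):
    """Instead of writing rules, we state an objective,
    sum((w*x - y)^2), and let optimization find the parameter w."""
    w = 0.0
    for _ in range(steps):
        # Gradient of the objective with respect to w.
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # move w downhill on the objective
    return w
```

No rule relating x to y is ever written down; the relationship is recovered entirely by driving the objective toward its minimum.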

However, Viégas said the use of AI can pose challenges with regard to how data is used. She cited the example of an AI used to diagnose patients with retinal disease that unexpectedly used the same health information to predict patients' cardiovascular risk factors.

This dynamic puzzled physicians, who did not understand how the system had determined that information. The example is indicative of the need for greater transparency and explainability in machine learning, Viégas said.

"One of the reasons why everybody tends to talk about AI interpretability and explainability is because you want to make sure that these machines are not doing anything incorrectly or not causing harm," Viégas said.

Viégas also recommended that developers work with a diverse range of users and stakeholders when designing their products.

"If this is something that is going to impact communities, you have to take their perspective into account," she said. "It is incredibly important to include their perspective not only afterwards, but in the development of your technology."

In the third session, Rana el Kaliouby, an executive fellow at Harvard Business School, discussed the connection between AI and people. She said a primary goal of future AI development is bringing "emotional intelligence to our machines and technology."

"It's important that these technologies and these AI systems have IQ, but it's also important that we take a very human-centric approach to build empathy and emotional intelligence into these machines," el Kaliouby explained.

Developers can attempt to humanize their technology by incorporating the non-verbal forms of communication people use to convey their emotions, such as facial expressions, posture, and vocal intonation, according to el Kaliouby.

"If you combine all of these nonverbal signals, that gives you a really good picture of the emotional and cognitive state of a person, and you're then able to leverage this information to make all sorts of decisions," el Kaliouby added. "Humans do that all the time to build empathy, and to build trust, which is needed in our human-machine interfaces."

The automotive industry seeks to leverage this technology in smart cars that evaluate whether an individual is driving while distracted, El Kaliouby said.

El Kaliouby said the advanced technologies of current AI enable developers to attain a more complete understanding of a drivers mental state.

"We can combine face detection with body key-point detection," she said. "We tie all of that with an understanding of the emotional state of the individuals in the vehicle."

Ultimately, Viégas said, future developers need to invest in ensuring that the technology being designed is controlled by the user, rather than the other way around.

"We can learn from these systems and we should be controlling them," Viégas said. "They shouldn't control us; we should be controlling them."

Staff writer Christie K. Choi can be reached at christie.choi@thecrimson.com.

Staff writer Jorge O. Guerra can be reached at jorge.guerra@thecrimson.com. Follow him on Twitter @jorgeoguerra_.


Legislative Approaches to AI: European Union v. United Kingdom – JD Supra

Posted: at 5:16 pm

Following an initial announcement in early 2021, the UK government has recently launched its first National Artificial Intelligence (AI) Strategy. This new strategy indicates that the United Kingdom may be planning on diverging from the legislative approach taken by the EU Commission in its AI package.

The EU Commission published a proposed EU-wide AI legislative framework (the EU Regulation), which forms part of the Commission's overall AI package. The legal framework for AI addresses the risks generated by specific uses of AI and focuses on imposing prescribed obligations with respect to such high-risk use cases, including obligations to undertake relevant risk assessments, have in place mitigation systems such as human oversight, and provide transparent information to users.

The intention of the EU Regulation is to have a single set of complementary rules, with extra-territorial application. This means that AI providers who make their systems available in the European Union, or whose systems affect people in the European Union or have an output in the European Union, irrespective of their country of establishment, will be required to comply with the EU Regulation. Non-compliance could lead to General Data Protection Regulation-style fines for companies and providers, with proposed fines of up to the greater of 30 million euros ($34.8 million) or 6% of worldwide turnover.

The National AI Strategy does not provide a UK legislative framework for AI, but it does provide some signs that the United Kingdom's approach will differ from that taken by the EU Commission. Currently, the United Kingdom regulates AI through cross-sector legislation. In 2018, the UK government agreed with the House of Lords' view that "blanket AI-specific regulation (like the EU's), at this stage, would be inappropriate" and that "existing sector-specific regulators are best placed to consider the impact on their sector."

The National AI Strategy outlines four key reasons why a sector-led approach, rather than a European-style overarching approach, is logical:

In its strategy, the UK government acknowledges that there are challenges to be addressed as part of these sector-specific regulations.

These challenges raise the question of whether the United Kingdom's current approach is adequate. An upcoming White Paper by the Office for Artificial Intelligence will address this, along with consideration of alternative approaches.

In the European Union, the European Parliament and EU member states need to adopt the EU Commission's proposals on AI for the EU Regulation to become effective.

In the United Kingdom, the upcoming White Paper from the Office for Artificial Intelligence should detail the proposed UK position on governing and regulating AI, as well as the challenges of the sector-specific approach. This is expected to be published in early 2022.

In response to the National AI Strategy and the EU Regulation, the Department for Digital, Culture, Media and Sport (DCMS) in the UK is running a consultation on potential AI-related reforms to the data protection framework. This is due to close on November 19, 2021.

[View source.]


Aidoc and ScreenPoint Medical Announce Partnership to Provide Complete AI Solution for Breast Imaging – WCTV

Posted: at 5:16 pm

Aidoc's comprehensive AI continues to expand to cover the majority of radiology adult subspecialties

Published: Oct. 26, 2021 at 9:00 AM EDT

NEW YORK, Oct. 26, 2021 /PRNewswire/ -- Aidoc, the leading provider of AI for medical imaging, and ScreenPoint Medical, leader in deep learning AI for 2D and 3D mammography, today announced a collaboration that will incorporate ScreenPoint's capabilities into Aidoc's platform. Breast imaging specialists will now be able to benefit from a single point of access to AI within the existing physician workflow, supporting early detection and diagnosis of breast cancer.

Delivering value to nearly 600 medical centers across the globe, Aidoc's suite of AI solutions enables radiologists to expedite patient treatment and improve quality of care by flagging acute anomalies such as pulmonary embolism, intracranial hemorrhage, and stroke in real time. Aidoc's inclusion of a full breast imaging AI solution with mammography and tomosynthesis is a natural next step in the company's radiology practice-wide offering.

"Taking into consideration IT security, speed, and ease of deployment, we found ScreenPoint Medical to be the ideal partner for Aidoc in the women's health space," said Tom Valent, Aidoc's VP of Business Development. "We look forward to including ScreenPoint products within our robust AI platform, providing value for key clinical use cases as part of a unified and seamless cross-specialty AI experience."

ScreenPoint's AI algorithm, Transpara, assists breast radiologists in improving their accuracy for both 2D and 3D mammography, which can help detect cancers earlier and reduce overall reading time. Transpara's analysis of 2D and 3D mammography images produces a unique exam score, ensuring that radiologists are alerted to scans that show an elevated risk of malignancy.

"At ScreenPoint Medical, we focus on early detection in order to improve breast cancer survival rates," said Nicki Bryan, ScreenPoint's VP of Sales. "Aidoc has the technical expertise that we need to seamlessly deliver our results at scale to the radiologists that we both serve. We are proud to be included in Aidoc's robust AI offering, expanding our reach and capabilities."

About ScreenPoint

ScreenPoint Medical is a world leader in the development of innovative machine learning solutions to improve breast cancer screening and diagnosis. Its Transpara product, the market's leading AI solution for reading mammograms and breast tomosynthesis, is available in over 200 clinics in 25 countries. Visit https://www.screenpoint-medical.com for more information.

About Aidoc

Aidoc is the leading provider of artificial intelligence workflow solutions that support and enhance the impact of physician diagnostic power, helping them expedite patient treatment and improve quality of care. Visit www.aidoc.com for more information.

Ariella Shoham, VP Marketing, ariella@aidoc.com


SOURCE Aidoc

The above press release was provided courtesy of PRNewswire. The views, opinions and statements in the press release are not endorsed by Gray Media Group nor do they necessarily state or reflect those of Gray Media Group, Inc.


The future landscape of AI and robotics in healthcare – Health Europa

Posted: at 5:16 pm

Artificial intelligence (AI) and robotics have shown themselves to be critical in aiding healthcare professionals in the ongoing fight against the COVID-19 pandemic. AI has enabled speedier and more efficient analysis of patient data, therefore allowing medical professionals to decipher medical conditions and the treatments as required.

Telehealth, or virtual appointments, have been utilised more widely throughout the pandemic. This service has been used for decades by those who live in remote areas, but it was generally conducted by telephone rather than videoconferencing. With the pandemic and the need for social distancing, telehealth has become an essential part of healthcare services, and has therefore been improved greatly throughout the pandemic out of necessity.

Telesurgery is the next endeavour being researched and could be used in the provision of urgent care.

AI and robotics have the potential to provide a range of services for patients, and will likely become more widely adopted in the future. One person championing the use of AI and robotics in healthcare is Paul Kostek, IEEE Senior Member and Principal Systems Engineer with Air Direct Solutions LLC. Here, he speaks to Health Europa about the impact that AI and robotics have had upon healthcare and how this technology could be developed in the future.

Throughout the pandemic, AI has proven to be an important resource for assessing data from patient scans and identifying treatment options. It has also been used to improve administrative operations of hospitals and medical centres. We may see more uses on the business side of the medical providers before the wider use in medical procedures.

The current use of robotics in surgery allows doctors to perform minimally invasive surgery and limits the impact of a procedure while improving outcomes. Expansion of surgery automation will continue, incorporating AR and VR to improve performance. Telesurgery is the next step being studied and in theory provides access to a surgeon with a specialty not available in the patient's local area. This would eliminate the need for a patient to travel and could also be used when a patient needs immediate care. Challenges would include latency and the need for a surgical team to support the procedure in the event a problem arises.

AI can help identify a patient's condition and recommend possible care options and treatments. This can save doctors research time, which can instead be spent assessing the options presented by the AI and discussing them with the patient. Robotic surgery can expand a patient's options and improve outcomes by providing access to surgery locally rather than requiring travel.

With the pandemic, AI has been used to assess data from patients' lung scans to improve treatment options, identify variants and make changes to the treatments in use.

The ongoing challenge with AI is ensuring that developers consider a wide range of factors, such as race and sex, so that the resulting diagnoses and treatment options fit each person's needs. This requires AI developers to come from a wide range of groups and backgrounds, so their perspectives are not limited.

AI and robotics are a key resource for health professionals, helping them focus on patients while supporting research into symptoms and treatments. Robotics has already succeeded in limiting the impact of surgery, resulting in improved outcomes. We can expect the next step to be telesurgery, which will allow surgeons to provide treatment without the need for patient or doctor travel. In fact, in a few years there may well be surgery performed without the need for a surgeon. COVID-19 has demonstrated that the adoption of technology can happen much faster than expected, and we will likely continue to see this occur.

Paul Kostek, IEEE Senior Member and Principal Systems Engineer, Air Direct Solutions LLC
www.ieee.org/ | https://www.linkedin.com/in/paulkostek

This article is from issue 19 of Health Europa Quarterly.


Read more here:

The future landscape of AI and robotics in healthcare - Health Europa


Navigating ethics in AI today to avoid regrets tomorrow – Help Net Security

Posted: at 5:16 pm

As artificial intelligence (AI) programs become more powerful and more common, organizations that use them are feeling pressure to implement ethical practices in the development of AI software. The question is whether ethical AI will become a real priority, or whether organizations will come to view these important practices as another barrier standing in the way of fast development and deployment.

A cautionary tale could be the EU General Data Protection Regulation (GDPR). Enacted with good intentions and hailed as a major step toward better, more consistent privacy protections, GDPR soon became something of an albatross for organizations trying to adhere to it. The GDPR and the privacy regulations that followed were often seen as just adding more work that kept teams from focusing on projects that really mattered. Organizations that attempt to solve for each new regulation in a silo end up adding significant overhead and making themselves vulnerable to competitors in terms of agility and cost effectiveness.

Could an emphasis on ethics in AI go the same route? Or should organizations recognize the risks, as well as their responsibilities, in putting powerful AI applications into use without addressing ethical concerns? Or is there another way to deal with yet another area of quality without the excessive burden?

AI programs are undoubtedly smart, but they're still programs; they're only as smart as the thought, and the programming, put into them. Their ability to process information and draw conclusions on their own adds layers to the programming that aren't necessary with more traditional computing programs, in which accounting for obvious factors is relatively simple.

When, for example, an insurance company is determining the cost of a yearly policy for a driver, it typically takes data like gender and ethnicity out of the equation to come up with a quote. That's easy. But with AI, it gets complicated. You don't micro-control AI: you give it all the information, and the AI decides what to do with it. AI starts out with no understanding of the impact of factors such as race, so if programmers haven't limited how data can be used, you can wind up with racial data being used, thus creating AI bias.
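One common guardrail implied here is to strip sensitive fields from the data before the model ever sees them, rather than trusting the AI to ignore them. The following is a minimal sketch; the field names and policy list are invented for illustration, not taken from any real insurer:

```python
# Remove sensitive fields from a record before it reaches the model.
# SENSITIVE_FIELDS is an illustrative policy choice, not a real insurer's list.
SENSITIVE_FIELDS = {"gender", "ethnicity", "race"}

def scrub(record):
    """Return a copy of the record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

applicant = {"age": 34, "gender": "F", "ethnicity": "X", "annual_mileage": 12000}
features = scrub(applicant)
# features -> {"age": 34, "annual_mileage": 12000}
```

As the examples that follow show, scrubbing alone is not sufficient: a remaining field that correlates with a sensitive attribute can reintroduce the same bias.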

There are many examples of how bias creeps into AI programs, often because of incomplete data. One of the most infamous examples involved the Correctional Offender Management Profiling for Alternative Sanctions, known as COMPAS, an algorithm used in some U.S. state court systems to generate sentencing recommendations. COMPAS used a regression model to predict whether someone convicted of a crime would become a repeat offender. Based on the data sets put into the system, the model predicted twice as many false positives for recidivism for Black offenders.
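The disparity described above can be checked with a simple audit of false positive rates per group. The sketch below is illustrative only; the toy data and labels are invented, not drawn from COMPAS:

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_positive, actual_positive).

    A false positive is someone predicted to reoffend who did not;
    the rate is computed over each group's actual negatives.
    """
    fp = defaultdict(int)    # predicted positive but actually negative
    negs = defaultdict(int)  # all actual negatives
    for group, predicted, actual in records:
        if not actual:
            negs[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / n for g, n in negs.items() if n}

# Invented toy data: (group, predicted_reoffend, actually_reoffended)
data = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]
rates = false_positive_rates(data)
# Group B's false positive rate (2/3) is double group A's (1/3).
```

A model with identical overall accuracy across groups can still show a gap like this, which is why fairness audits typically examine error rates per group rather than aggregate accuracy.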

In another example, a health care risk-prediction algorithm used on more than 200 million U.S. patients to determine which ones needed advanced care was found to favor white patients. Race wasn't a factor in the algorithm, but health care cost history was, and it tended to be lower for Black patients with the same conditions.
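The proxy effect can be sketched numerically. In this invented example (toy numbers, not the study's data), the selection rule never sees group membership, yet because historical cost runs lower for group B at the same severity, ranking by cost still skews who is referred to advanced care:

```python
# Invented toy data: (group, condition_severity, historical_cost).
# Severity profiles are identical across groups, but costs run lower for B.
patients = [
    ("A", 8, 9000), ("A", 6, 7000), ("A", 4, 5000),
    ("B", 8, 6000), ("B", 6, 4500), ("B", 4, 3000),
]

# "Race-blind" rule: refer the top half of patients ranked by cost history.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
referred = by_cost[:len(patients) // 2]

# The severity-6 patient from group A (cost 7000) makes the cut; the
# equally severe patient from group B (cost 4500) does not.
```

The rule is formally blind to group membership, but the outcome is not, which is exactly the pattern the study found with cost as a proxy.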

Compounding the problem is that AI programs aren't good at explaining how they reached a conclusion. Whether an AI program is determining the presence of cancer or simply recommending a restaurant, its thought processes are inscrutable. And that adds to the burden of programming in ethics up front.

Continued improvements in AI have potentially far-reaching consequences. The Department of Defense, for one, has launched a slew of AI-based initiatives and centers of excellence focused on national security. Seventy-six percent of business enterprises are prioritizing AI and machine learning in their budgeting plans, according to a recent survey.

Alongside the ethical concerns of AI's role in decision-making is the inescapable issue of privacy. Should an AI scanning social media be able to contact authorities if it detects a pattern of suicide? Apple, as an example, is considering a plan to scan users' iPhone data for signs of child abuse. Considering the ethical and potential legal implications, it makes sense that privacy and ethics get folded into the same security process as organizations plan how to address ethics. The two should not be treated separately.

As these and other programs move forward, new guidelines on ethics in AI are inevitable. This will create even more work for teams trying to get new products or capabilities into production, but it also raises issues that can't be ignored.

Successful AI ethics policies will likely depend on how well they are integrated with existing programs. Organizations' experience with GDPR offers a good example: where it was once seen primarily as a burden, some organizations that have integrated it into their security processes have gained a lot more maturity by treating privacy and security as one bucket.

Ultimately, it comes down to programmers baking in certain guidelines and rules on how to treat various types of data differently, and how to make sure that data segregation is not happening. Integrating these guidelines into overall operations and software development will depend on an organization's leaders making ethics a priority.

Enterprises should address ethics and security together, leveraging for ethics the systems and tools they already use for security. This will ensure effective management of the software development lifecycle. I would go so far as to say that ethics should be considered an essential part of the threat modeling process.

The question organizations should ask themselves is: Five years down the road, looking back at how you handled the question of ethics in AI, what could be your regrets?

Considering how the impact of other game-changing technologies (e.g., Facebook) was overlooked until legal issues arose, the potential for regret may well lie in not taking AI ethics seriously and not acting proactively before it becomes a pressing priority.

People tend to address the loudest problem of the moment, the squeaky wheel getting the most attention. But that's not the most effective way of handling things. The ethical implications of AI need to be confronted now, in tandem with security.


Over 100 organizations will sign Code on AI ethics by end of 2021 – Deputy PM – TASS

Posted: at 5:16 pm

MOSCOW, October 26. /TASS/. More than 100 governmental, commercial and scientific organizations will sign the code of ethics for artificial intelligence (AI) by the end of 2021, Russian Deputy Prime Minister Dmitry Chernyshenko said on Tuesday. He was speaking at the first international forum "Ethics of Artificial Intelligence: The Beginning of Trust", which was held at TASS.

"More than 100 organizations from the state, commercial and scientific sector will sign the Code by the end of 2021," Chernyshenko said.

According to the Deputy Prime Minister, it is necessary to maintain a focus on values such as human rights and well-being, the trust and reliability of AI systems, and their safety and service to humans. The main priority is to protect the interests of the people, he added.

The Deputy Prime Minister clarified that a large series of expert discussions was held during the work on the code. More than 500 experts were involved, and more than 300 proposals regarding the text of the document were received and reviewed.

Chernyshenko also expressed hope that the forum on artificial intelligence would be held on a regular basis.

"We expect regular activity. This is the first international forum on AI, we would like to participate in the second and third one, to rejoice at the successes, to sum up some results, to openly discuss the problems we face together," Chernyshenko said.

The authors of the Russian Code of Ethics for Artificial Intelligence (AI) are the Alliance for Artificial Intelligence, together with the Analytical Center under the Government of the Russian Federation and the Ministry of Economic Development. The Code will become part of the Artificial Intelligence federal project and the Strategy for the Development of the Information Society for 2017-2030.

It establishes general ethical principles and standards of conduct to guide those involved in activities using artificial intelligence.


IBM Announces Advances and New Collaborations in AI-Powered Automation, 5G Connectivity and Security at Mobile World Congress Los Angeles – Yahoo…

Posted: at 5:16 pm

IBM collaborates with Boston Dynamics, Cisco, Palo Alto Networks and Turnium Technology Group to help equip businesses in next phase of digital transformation

IBM AI-powered automation software, including IBM Cloud Pak for Network Automation, and services from IBM Consulting, to help drive industry innovation and 5G adoption

ARMONK, N.Y., Oct. 26, 2021 /PRNewswire/ -- Today IBM (NYSE: IBM) announced new collaborations and expanded partner relationships to further the company's capabilities in hybrid cloud, AI, network automation and security at Mobile World Congress Los Angeles (MWC LA). These innovations highlight IBM's role in helping the telecommunications industry evolve as 5G and Edge Computing redefine how business and consumers connect.


IBM continues to make major strides in helping CSPs adopt AI and automation on open, standards-based hybrid cloud platforms so they remain in control of where and how they deploy their network services, edge computing and enterprise offerings. By leveraging IBM's AI-powered automation software, such as IBM Cloud Pak for Network Automation, and services through IBM Consulting, IBM will help drive innovation for CSPs through its systems integration capabilities; the application of technology to create ever more intelligent workflows; and support for modernizing applications so enterprises can deliver at scale in a world of hybrid cloud environments.

At MWC LA, IBM is announcing the following innovations, which are designed to equip businesses for the next stage of their digital journeys:

A new collaboration between Boston Dynamics and IBM is focused on delivering data analysis at the edge to help companies address worker safety, optimize field operations, and boost maintenance productivity in industrial environments such as manufacturing facilities, power plants and warehouses. Enabled by AI and hybrid cloud innovations from IBM Research, IBM Consulting will develop edge payloads that integrate with Spot, the agile, mobile robot from Boston Dynamics. Boston Dynamics and IBM will announce these new innovations during the joint keynote at Mobile World Congress LA. For more information, read the blog: https://newsroom.ibm.com/Boston-Dynamics-and-IBM-Join-Forces-to-Bring-Mobile-Edge-Analytics-to-Industrial-Operations

Cisco and IBM are expanding their relationship, and will integrate key offerings, including IBM Cloud Pak for Network Automation and Cisco Crosswork Network Automation software, to enable orchestration and management of virtual 5G networks. For more information, read the blog: https://newsroom.ibm.com/IBM-and-Cisco-Collaborate-to-Help-Enable-Orchestration-and-Management-of-5G-Networks

Palo Alto Networks and IBM are extending their relationship to help address the unique security requirements for telecom operators deploying 5G Networks and Edge services. The companies are working to deliver joint security solutions and services designed for 5G networks and ecosystems. The collaboration provides automation and orchestration to help create secure 5G network slices that are designed to enable new revenue streams for Network Operators. Leveraging Palo Alto Networks containerized NGFW (CN-Series), container security solution Prisma Cloud Compute Edition, IBM Cloud Pak for Network Automation and IBM Security Services, the joint solution is being designed to enable agility and optimal threat detection based on deep visibility of 5G traffic. IBM and Palo Alto Networks will demonstrate a 5G Network Slice with Validation, Security Orchestration & Response at Mobile World Congress LA.

Turnium Technology Group is announcing a commitment to bring Technology Assurance Group's (TAG) network of managed technology service providers to IBM Cloud for Telecommunications. The collaboration between Turnium and IBM helps Technology Assurance Group (TAG) extend the reach of their managed technology solutions and partner network to new customers across the United States. For more information, read the press release: https://www.einnews.com/pr_news/554547851/turnium-announces-commitment-to-bring-technology-assurance-group-s-tag-network-to-ibm-cloud-for-telecommunications

"A recent study from the IBM Institute for Business Value, 'The end of communications services as we know them,' revealed that 59% of high-performing CSPs surveyed agree they must become secure clouds infused with AI and automation. The study also found that Communications Service Providers (CSPs) are thinking more strategically about 5G-enabled edge computing for its ability to build more revenues as 5G and edge computing usher in a new reality for businesses," said Andrew Coward, General Manager, Software Defined Networking, IBM. "We are continuing to help CSPs embrace secured technologies like automation, AI and hybrid cloud, and we believe IBM is uniquely positioned to provide the software and consulting needed to evolve their digital architecture."


IBM will have a strong presence as the show's Network Automation Partner, co-leading a keynote session with Boston Dynamics, speaking in breakout sessions on the New Age of Automation and IoT in Healthcare, and hosting live in-booth solution demonstrations.

Statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

About IBM

For more information about IBM at Mobile World Congress Los Angeles, please visit: https://www.ibm.com/industries/telecom-media-entertainment/events/mwc

To learn more about IBM Consulting, please visit: https://www.ibm.com/consulting

For more information about IBM Cloud Pak for Network Automation, please visit: https://www.ibm.com/cloud/cloud-pak-for-network-automation

Media Contacts:
Jamee Nelson, jamee.nelson@ibm.com
Charlotte Bergmann, charlotte.bergmann@ibm.com
Marisa Conway, conwaym@us.ibm.com
Hanna Smigala, smigala@ibm.com


View original content to download multimedia: https://www.prnewswire.com/news-releases/ibm-announces-advances-and-new-collaborations-in-ai-powered-automation-5g-connectivity-and-security-at-mobile-world-congress-los-angeles-301409036.html

SOURCE IBM

