

Discover the Power of VUNO’s AI Solutions at ECR 2020 – WFMZ Allentown

SEOUL, South Korea, July 14, 2020 /PRNewswire/ -- VUNO Inc., a South Korean artificial intelligence (AI) developer, announced that it will attend the European Congress of Radiology 2020 (ECR 2020), to be held from July 15 to July 19, 2020, to showcase its flagship AI radiology solutions, which have recently received the CE mark. As part of its ambitious plan to go global, VUNO is set to seize this opportunity to further expand its network of sales prospects and business partners from around the world.

ECR, one of the leading events in the field of radiology, brings together industry experts, medical and healthcare professionals, modality manufacturers, and solution developers. In light of the COVID-19 pandemic, the congress organizers have decided to opt for an online-only event. With about 2,100 leading industry representatives on board, the ECR 2020 exhibition can be accessed from 8:00 a.m. CEST on July 15 until 11:55 p.m. CEST on July 21. Free registration and participation are available on the ECR 2020 Virtual Exhibition website (https://ecr2020.expo-ip.com/).

VUNO's exhibition at the event will include VUNO Med-LungCT AI, which detects, locates, and quantifies lung nodules on CT images; VUNO Med-Chest X-ray, which assists in readings of common thoracic abnormalities on chest radiographs; and VUNO Med-DeepBrain, a diagnostic support tool for degenerative brain diseases that performs brain parcellation and quantification on brain MR images.

On top of the three solutions to be showcased at this event, VUNO has two other solutions that have recently gained CE certification: VUNO Med-BoneAge and VUNO Med-Fundus AI. All five products can now be marketed in countries where the CE mark is accepted.

VUNO Med solutions are designed to be device- and environment-agnostic, offering seamless integration with any PACS and/or EMR system. They are offered via cloud servers, allowing users to analyze images anytime, anywhere with Internet access, and are also available through on-premise installations.

VUNO has the largest client base in the sector, with more than 120 medical institutions in Korea alone. With successes rooted in effectiveness and safety proven through clinical trials and practice, the company is now embarking on a new endeavor to demonstrate its technical prowess in overseas markets by signing partnerships with global healthcare players such as M3, a Sony subsidiary and Japan's largest medical data platform company.

For more detailed information on VUNO, visit https://www.vuno.co/.


The journey that organizations should embark on to realize the true potential of AI – The Indian Express

New Delhi | Updated: July 13, 2020 4:33:10 pm

Implementing Artificial Intelligence (AI) in an organization is a complex undertaking, as it involves bringing together multiple stakeholders and different capabilities. Many companies make the mistake of treating AI as a pure-play technology implementation project and hence end up encountering many challenges and complexities peculiar to AI. There are three big reasons for the increased complexity of an AI program implementation: (1) AI is a portfolio-based technology (comprising sub-categories such as Natural Language Processing (NLP), Natural Language Generation (NLG), and Machine Learning), as compared to many standalone technology solutions; (2) these sub-category technologies (for example, NLP) in turn have many different product and tool vendors, each with their own unique strengths and maturity cycles; (3) these sub-category technologies are specialists in their functionality and can solve only certain specific problems (for example, NLG technology helps create written text similar to how a human would create it). Hence, to realize the true potential of AI, organizations need to do three important things: Define Ambitious and Achievable Success Criteria, Develop the Right Operating Rhythm, and Create and Celebrate Success Stories.

Most companies define the success criteria of their AI program narrowly or ambiguously. Because these criteria are not defined holistically, they may end up providing sub-optimal benefits to the organization. We suggest that the success criteria of an AI program need to be not only ambitious, achievable, and actionable but also tightly integrated with the overall key strategic objectives and priorities of the organization. For example, a bank whose key strategic goals are reducing the number of customer complaints and improving the customer experience can benefit immensely from integrating its AI program goals with the goals of this important program (for example, leveraging machine learning and analytics to analyze past complaints data and better understand customer complaint patterns, journeys, and decision points). This interlocking of success criteria will give AI program leaders the right yardsticks to align and measure their progress and contribution. Additionally, it helps them get the right visibility and sponsorship at senior leadership levels in the organization, which further improves the chances of success of the AI program.

Also Read: Can Humans and AI coexist to create a hyper-productive HumBot organisation?

A successful AI program requires four key ingredients: Right Data, Diverse Skills, Scalable Infrastructure, and Seamless Stakeholder Alignment. It is said that data is the food of an AI program, and hence having the right data (for example, the right volume, type, and quality of data) at the right time is critical to ensure AI programs have the fuel and energy required to complete their intended journey. While good AI skills are in short supply, leveraging constructs such as a nimble CoE (Center of Expertise) increases the chances of optimal utilization of these rare and expensive skills across the organization. Finally, getting various important stakeholders (for example, global process owners, IT leaders, internal control and risk, continuous improvement, and HR) to work together seamlessly is important to reduce friction and increase AI program velocity.

Also Read: With the power of AI, India can reimagine delivery of public services

It is said that success breeds more success. While AI programs typically focus heavily on efficiency and productivity improvements, many also generate significant benefits that are not directly quantifiable (for example, improvements in stakeholder experience, employee engagement, and morale). A recent Deloitte survey indicates that 44 per cent of organizations felt AI has increased the quality of their products/services, while 35 per cent found that AI has enabled them to make better decisions. Successful companies find a way to identify these simple, holistic stories and narrate them compellingly and consistently in multiple forums at all levels of their organizations. Humans, by design, are inspired more by stories than by numbers alone, and hence creating a powerful story that combines the quantifiable (for example, number of hours saved) with other benefits (for example, better decision-making ability) can galvanize the entire organization and facilitate rapid and increased adoption of AI at all levels and in all units of the organization.

The revered Chinese sage Lao Tzu famously remarked that "a journey of a thousand miles begins with a single step." The AI journey in an organization is no exception. While AI implementations are typically more complex and nuanced, companies can leverage the three-pronged approach mentioned above to realize the true and full potential of AI. While a successful AI program implementation can bestow significant financial benefits on an organization, it also activates the divine journey of freeing up humans to do what they do best: leverage their sophisticated brains to introspect, explore, learn, love, empathize, and solve the most intricate and defining problems of our generation.

The authors are Ravi Mehta, Partner; Sushant Kumaraswamy, Director; Sudhi. H, Associate Director; and Prashant Kumar, Senior Consultant, Deloitte India.



AI.Reverie wins contract to improve navigation capabilities for USAF – Airforce Technology

AI.Reverie has secured an SBIR Phase 2 contract from AFWERX. Credit: Markus Spiske on Unsplash.


AI.Reverie has secured a Phase 2 Small Business Innovation Research (SBIR) contract from AFWERX for the US Air Force (USAF).

Under the $1.5m contract, AI.Reverie will build artificial intelligence (AI) algorithms and improve navigation capabilities supporting the 7th Bomb Wing at Dyess Air Force Base (AFB).

The company will use synthetic data to train and improve the accuracy of vision algorithms for navigation through its Rapid Capabilities office.

Synthetic data, or computer-generated imagery, is economical and can be produced faster than hand-labelled photos, overcoming limitations associated with real data.

The advanced technology creates vision algorithms needed to save lives during operations.

The Phase 2 SBIR contract awarded to AI.Reverie follows the company's co-publication with IQT Lab CosmiQ Works, which highlighted the value of synthetic data for training computer vision algorithms.

Furthermore, the research partners released RarePlanes, an open dataset of real and synthetic overhead imagery, for academic and commercial use.

USAF Major Anthony Bunker said: "As the world has gotten smaller, the ability to navigate based on visual terrain features has become an ever-increasing challenge.

"Computer vision algorithms can be trained to recognise these world-wide terrain features by ingesting large amounts of diverse data.

"We are excited to collaborate with AI.Reverie to improve navigation capabilities given the company's ability to generate fully annotated data at scale with its synthetic data platform."

In May this year, AI.Reverie and Green Revolution Cooling (GRC) secured an AFWERX SBIR Phase I contract from the USAF. The contract was for enhancing computer vision models for the US Department of Defense (DoD).


[PULSE] Why we need diversity in AI development – HousingWire

Trust is a universal concept and, like love, can be fickle, fleeting and fuzzy. Some trust is implicit: when I board a bus, I trust the driver to drive safely. I know nothing about the driver or the bus, I could be in a foreign country, but I trust. I am shocked if they prove me wrong.

Some trust takes work. I have learned to trust a certain hairdresser, a particular sandwich maker, my husband. This took time and effort from both parties: repeated execution toward a set goal, and consistently meeting it.

And of course, there are situations that require contextual trust. I trust my sandwich maker to give me a great sandwich, but can I trust him to do my eyebrows? I trust my husband, but not necessarily with cheesecake.

It is these fuzzy situations that tend to trip up even the smartest artificial intelligence (AI) workforce. We have established tried-and-true AI workflows: back-office operations, analytics and advanced computing. We have successfully tackled using AI in newer areas, such as tiered contextual responses, voice recognition, biometrics and natural language processing.

The fuzziness increases in emerging areas of AI use, including one that is especially common in mortgage banking: customer engagement using sentiment analysis and advanced contextual cues.

It's been repeatedly proven over the last few decades that if you focus an AI on a definitive task (chess, calculus, soil management) it excels. But contextual knowledge remains an enigma, mainly because of our limitations. The code we write and the logic we create are influenced by the bubbles we live in.

If you have a Google news feed or a social media account, you have experienced AI working on a definitive task: a news and media feed that provides content with similar attributes keeps you engaged.

As New York Times tech columnist Kevin Roose uncovers in his chilling "Rabbit Hole" podcast, what it does not do is provide you with diversity of thought.

The AI driving those feeds does not show you news and media feeds that might expand your view. Nothing you see is going to challenge your beliefs. That is a critical point because our experiences and beliefs both firsthand and secondhand through books, friends, movies, and feeds all play a significant role in the knowledge we hold and impart.
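To make the feed mechanics described above concrete, here is a minimal, hypothetical Python sketch of similarity-based ranking, the "more of what you already like" behavior: items closest to your existing profile always rise to the top. The topic vectors and item names are invented for illustration; real recommender systems are far more elaborate.

```python
import numpy as np

def rank_feed(user_history, candidates):
    """Rank candidate items by cosine similarity to the user's average
    interest vector -- the 'more of the same' behavior described above."""
    profile = np.mean(user_history, axis=0)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    # Highest-similarity items first: engagement rises, diversity shrinks.
    return sorted(candidates, key=lambda item: cosine(profile, candidates[item]),
                  reverse=True)

# Toy example with three topic dimensions: (politics, sports, tech).
history = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1])]
items = {
    "more_politics": np.array([1.0, 0.0, 0.0]),
    "sports_story": np.array([0.0, 1.0, 0.0]),
    "tech_story": np.array([0.1, 0.0, 1.0]),
}
print(rank_feed(history, items))  # politics first: the filter bubble in miniature
```

Nothing in this loop ever surfaces the sports or tech stories to a politics-heavy profile, which is precisely the narrowing effect at issue.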

The idea that diversity is needed in any AI development, to ensure a roundness and a level of acceptable morality and humanity, may not be new, but it continues to be an issue.

Every iteration, from Nikon's failure to recognize Asian faces to Microsoft's disastrous troll-taught bot Tay, has widened our perspective. AI needs interaction with people, but individuals bring positive and negative experiences and beliefs to those interactions.

How do we capture the many cultural and social norms of the American melting pot? We know to use larger datasets, look for fringe cases, diversify our focus groups, and refine through larger and larger testing.

In mortgage banking, AI does its best work answering questions with exact answers, such as, "What's the most I can borrow if I use Fannie Mae's HomeReady?"

Where AI still falls short is in the areas where our mortgage loan originators excel, the fuzzy areas where beliefs, experience and social norms influence our financial decisions. AI can offer pros and cons to answer the question, "Is it cheaper to use Fannie Mae's HomeReady or my state bond program, or both?"

What it cannot advise on is how your relationship may change after using your father-in-law's income to qualify for HomeReady.

Ray Kurzweil waxes eloquent about human history's law of accelerating returns: advanced societies have the ability to progress faster because they are more advanced. Marty McFly was shocked when he went back 30 years, from 1985 to 1955, in the movie Back to the Future. But if he were to go back 30 years today, from 2020 to 1990, his mind would be blown! Smartphones, the internet, social media, mumble rap, K-pop: the pace of change would be even more evident.

History dictates that we are fast on our way to a truly intelligent bot. There are many schools of thought on how we will reach this goal: evolutionary algorithms, where we mimic natural selection by combining logic that we deem accurate, or perhaps self-learning algorithms that enable the AI to code and improve its own architecture. Whatever the path, the goal is in sight.

While we await the intelligent bot, perhaps the easiest integration for AI in mortgage banking is into our workforce. The value proposition of using AI to increase efficiency and optimize the current workforce has been established.

Many lenders who tout accelerated close times leverage varying degrees of AI to supplement their teams. Any activity that may be defined as business rules (no matter how nested) can be seamlessly executed. The most common tried-and-tested AI-assisted workflows now include ordering disclosures, integrations with third-party services, AUS/DU, and QA/QC to verify values across systems and documents.

Newer workflows like intelligent NLP chatbots are getting smarter and better able to provide contextual advice. The scenarios they respond to still need to be defined by a human, but the bot is able to retain context through multiple levels and understand slang, emojis, and some levels of content.
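As an illustration of the context retention just described, here is a minimal, hypothetical sketch of a bot that expands slang and carries the active topic across turns. The intents, slang table, and canned responses are invented for this sketch; it is not any vendor's actual product.

```python
import re

# Invented slang table and intents -- real systems learn these from data.
SLANG = {"thx": "thanks", "u": "you", "👍": "yes"}
INTENTS = {
    "rates": re.compile(r"\b(rate|apr|interest)\b"),
    "documents": re.compile(r"\b(docs?|documents?|paperwork)\b"),
}
RESPONSES = {
    "rates": "Current rates depend on your loan program. Want a quote?",
    "documents": "You'll typically need pay stubs, W-2s, and bank statements.",
    None: "Could you tell me a bit more about what you need?",
}

class ChatContext:
    """Keeps the active topic across turns -- the 'retained context'."""

    def __init__(self):
        self.topic = None

    def normalize(self, text):
        # Strip trailing punctuation and expand slang/emoji token by token.
        tokens = [tok.strip(".,!?") for tok in text.lower().split()]
        return " ".join(SLANG.get(tok, tok) for tok in tokens)

    def respond(self, text):
        text = self.normalize(text)
        for intent, pattern in INTENTS.items():
            if pattern.search(text):
                self.topic = intent  # topic switches only on a clear signal
        return RESPONSES[self.topic]

bot = ChatContext()
print(bot.respond("what docs do i need?"))
print(bot.respond("thx, and for a refi?"))  # topic 'documents' is retained
```

The second turn contains no document keywords at all, yet the bot still answers in the document context, which is the difference between an animated FAQ and a conversation.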

The difference between providing an animated version of an FAQ and an actual advisor who can walk a customer through multiple scenarios is incredible. The customer experiences hyper-personalized, focused attention delivered on their schedule. The lender gains a huge lift in productivity from having a trained team member who works 24/7 for a fraction of the cost.

Careful consideration needs to be given to how the bot is integrated. Focused tasks with clear business rules are easy solutions, and the industry is filled with potential providers.

Contextual situations require more thought and grooming: how does the bot fit your representation of your company? It is also important that the bot align with your corporate values and mission. You should pay attention to how it reacts with not only your core demographic but every customer and potential prospect. Also, companies need to determine how to manage interactions and how bots will update as customer beliefs, views and social norms change.

With where bots are right now, we are really looking for trust in our teams to design, nurture, and tailor a bot to have the best interests of our companies, our customers, and our business partners in mind. By focusing on the fuzzy, we can create a contextual experience that builds the trust we want our end-users to have in us.

The views and opinions expressed in this article are those of the author and do not necessarily reflect or represent the views, policy, or position of Planet Home Lending, LLC.


Comcast credits AI software for handling the pandemic internet traffic crush – VentureBeat


Comcast said investments in artificial intelligence software and network capacity have helped it meet internet traffic demand during the pandemic.

Elad Nafshi, senior vice president for next-generation access networks at Comcast Xfinity, said in an interview with VentureBeat that the nation's internet network has held up during the surge of residential internet traffic from people working at home. But this success wasn't just because of capital spending on fiber-optic networks. Rather, it has depended on a suite of AI and machine-learning software that gives the company visibility into its network, adds capacity quickly when needed, and fixes problems before humans notice them.

Comcast's network is accessible to more than 59 million U.S. homes via 800,000 miles of cable (about three times the distance to the moon). Back in March, Comcast said internet traffic had risen 32% because of COVID-19 but assured everyone it had the capacity to handle peak traffic demands in the U.S. The company also saw a 36% increase in mobile data use over Wi-Fi on Xfinity Mobile.

"The first part of the growth was because of work from home," Jan Hofmeyr, chief network officer at the Comcast Technology Center in Philadelphia, said in an interview with VentureBeat. "Things like video conferencing started to drive a lot of traffic. The consumption of video went up significantly. And then with kids being home, you could see playing games going upward. We saw it go up across the board."

But since March and April, the traffic from Comcast's 21 million subscribers has hit a plateau. People are getting out of their homes more and the initial surge of work-from-home has normalized, Hofmeyr said.

The company normally adds capacity 12 to 18 months ahead of time, with typical plans targeting 45% annual increases in traffic. Since 2017, Comcast has invested $12 billion in the network and added 33,331 new route miles of fiber-optic cable. Those investments have enabled the company to double capacity every 2.5 years, Hofmeyr said.

Above: Comcast executive vice president and chief network officer Jan Hofmeyr.

Image Credit: Comcast

"With COVID-19, we obviously saw a massive surge in the network, and looking back in retrospect the network was highly reliable," Hofmeyr said. "We were able to respond quickly as we saw the spike in traffic. We were able to add capacity without having to take the network down. It was designed for that."

During the initial stages of the pandemic, the new technologies were able to handle regional surges while internet traffic spiked as much as 60%. Nafshi told VentureBeat the network can't handle surges just by getting bigger. In March and April, Comcast added 35 terabits per second of peak capacity to regional networks. And the company added 1,700 100-gigabit links to the core network, compared to 500 in the same months a year earlier.

The company's software, called Comcast Octave, helps manage traffic complexity, working behind the scenes where customers don't notice it. The AI platform was developed by Comcast engineers in Philadelphia. It checks 4,000-plus telemetry data points (such as external network noise, power levels, and other technical issues that can add up to a big impact on performance) on more than 50 million modems across the network every 20 minutes. While invisible, the AI and machine learning tech has played a valuable role over the past several months.
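Comcast has not published Octave's internals, but the pattern described, sweeping telemetry from every modem on a fixed cadence and flagging readings that cross thresholds, can be sketched roughly as follows. The field names and thresholds here are invented for illustration, not Comcast's actual values.

```python
from dataclasses import dataclass

@dataclass
class ModemTelemetry:
    modem_id: str
    noise_db: float      # external network noise
    power_dbmv: float    # signal power level
    error_rate: float    # codeword error ratio

# Invented thresholds -- a real system tunes these per plant conditions.
THRESHOLDS = {"noise_db": 35.0, "power_dbmv_low": -10.0, "error_rate": 0.01}

def flag_issues(sample):
    """Return the list of threshold violations for one modem's sample."""
    issues = []
    if sample.noise_db > THRESHOLDS["noise_db"]:
        issues.append("excess-noise")
    if sample.power_dbmv < THRESHOLDS["power_dbmv_low"]:
        issues.append("low-power")
    if sample.error_rate > THRESHOLDS["error_rate"]:
        issues.append("high-error-rate")
    return issues

def sweep(samples):
    """One 20-minute sweep: map modem id -> issues needing adjustment."""
    flagged = {}
    for sample in samples:
        issues = flag_issues(sample)
        if issues:
            flagged[sample.modem_id] = issues
    return flagged

print(sweep([ModemTelemetry("m1", 40.0, -2.0, 0.001),
             ModemTelemetry("m2", 20.0, -1.0, 0.002)]))
# {'m1': ['excess-noise']}
```

The point of the design is the cadence: by re-scoring the whole fleet every sweep, degradations are caught and corrected before a customer ever calls.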

"COVID-19 was a very unique experience for us," said Nafshi. "When you're building networks, you never build for the situation where everyone gets locked up in their room in their homes and suddenly they jump online. Now, that's the new normal. The challenge we are presented with is how to enable our customers to shelter in place and work and be entertained."

Octave is programmed to detect when modems aren't using all the bandwidth available to them as efficiently as possible. Then it automatically adjusts them, delivering substantial increases in speed and capacity. Octave is a new technology, so when COVID-19 hit, Comcast had only rolled it out to part of the network.

To meet the sudden demand, a team of about 25 Octave engineers worked seven-day weeks to reduce the deployment process from months to weeks. As a result, customers experienced a nearly 36% increase in capacity just as they were using more bandwidth than ever before for working, streaming, gaming, and videoconferencing.

"We've had a fair amount of experience already looking at data patterns and acting on it," Nafshi said. "We had an interactive platform deployed that we were leaning on. We looked at the data network conditions and decided what knobs we need to turn on our infrastructure in order to really optimize how packets get delivered to the home."

Comcast took the data it had collected and put it into algorithmic solutions to predict where interference could disrupt networks or trouble points might appear.

"We have to turn the knobs so that we optimize delivery to your house, which would not be the same as the delivery to my home," Nafshi said. "We provide you with much more reliable service by detecting the patterns that lead up to breakage and then have the network self-heal based on those patterns. We're making that completely transparent to the customer. The network can self-heal autonomously in a self-feedback loop. It's a seamless platform for the customer."

Above: The Comcast Technology Center in Philadelphia.

Image Credit: Comcast

Before introducing Comcast Octave, the company also deployed its Smart Network Platform. Developed by Comcast engineers, this suite of software tools automates core network functions. As a result of this investment, Comcast was able to dramatically cut down the number of outages customers experience and their duration. "The outages are now lasting a matter of minutes sometimes, compared to hours before," said Noam Raffaelli, senior vice president of network and communications engineering at Comcast Xfinity, in an interview with VentureBeat.

"We are trying to benefit from innovation on software to basically drive our outcomes and our operational key performance indicators (KPIs) down so things like outage minutes or minutes to repair go down," said Raffaelli. "We look at data across our network and use data science to understand trends and do correlations between events we see on the network. We have telemetry and automation, so we can operate the equipment without the manual interference of our engineers. We mitigate issues before there is any degradation in the networks."

On top of that, the equipment is more secure and more automated, Raffaelli said. Comcast has also been able to figure out how to build redundancies into the network so it can hold up in the case of accidents, such as a backhoe operator cutting a fiber-optic cable.

"This gives us an unprecedented real-time view of our network and unprecedented insights into what the customer experience is," Raffaelli said. "We've had a double-digit improvement in outage minutes and repair. We are building redundant links across the network."

A tool called NetIQ uses machine learning to scan the core network continuously, making thousands of measurements every hour. Before NetIQ, Comcast would often find out about a service-impacting issue like a fiber cut when it started seeing service degradation or getting customer calls.

With NetIQ in place, Comcast can see an outage instantly. The company has reduced the average amount of time it takes to detect a potentially service-impacting issue on the core network from 90 minutes to less than five minutes, which has paid off during COVID-19.
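A generic way to get from slow, complaint-driven detection to near-instant detection is to keep a rolling statistical baseline per link and alert on sharp deviations. The sketch below illustrates that idea only; it is not a description of NetIQ's actual method, and the window size and threshold are invented.

```python
from collections import deque
from statistics import mean, stdev

class LinkMonitor:
    """Rolling-baseline detector: alert when a measurement deviates sharply."""

    def __init__(self, window=60, z_threshold=4.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, latency_ms):
        """Return True if this measurement looks like a service-impacting event."""
        alert = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (latency_ms - mu) / sigma > self.z_threshold:
                alert = True
        self.history.append(latency_ms)
        return alert

mon = LinkMonitor()
for t, latency in enumerate([5.1, 5.0, 5.2] * 5 + [48.0]):
    if mon.observe(latency):
        print(f"measurement {t}: possible fiber cut or congestion, investigate")
```

Because every measurement is compared against the link's own recent history, a fiber cut shows up on the very next sample rather than after a queue of customer calls.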

I witnessed some of this firsthand, as I'm a Comcast subscriber. In four months, I've had only one outage. I logged into my service account via the phone and got a message saying my area was experiencing an outage that was expected to last for 90 minutes. After that, the network was fixed and I have stayed on it since.

Above: Comcast manages its network from the CTC in Philadelphia.

Image Credit: Comcast

Gamers are among the hardest internet users to please, as they want to download a new game as soon as it's available. They also want low latency, or no interaction delays, which is important in multiplayer shooters such as Call of Duty: Warzone, where you don't want confusion over who pulled a trigger first.

"We are laser-focused on latency across our network. It's an extremely important metric that we track very closely across the entire network," Hofmeyr said. "We feel very bullish and very excited about what we are able to deliver from a business perspective. I don't believe that we have a negative perspective, any impact on gaming from a latency perspective."

He added: "Gaming is driving two things for us. One is the game downloads are just becoming bigger and bigger. It is very common today that a game download is multi-gig. And when they are released, you see massive expansion and growth in terms of downloads. On the latency side, we continuously invest. We are looking at AI. We are looking at software and tools to help improve it over time."

Game companies invest in low-latency game servers and improving the connections between specific gamers who are in the same match or the same region so latency doesn't affect them as much. But infrastructure companies like Comcast can also improve latency.

Content delivery networks (CDNs) are an integral part of making video delivery more efficient. Comcast video is delivered through the company's own CDNs, which position videos throughout the network so they can be delivered over as short a distance as possible to the viewer. The company constantly monitors peaks in traffic and designs the network for those peaks. Having a lot of people playing a game or watching a video at the same time establishes new peaks. But the 1,700 100-gig links allow the company to deal with those peaks in specific parts of the network.
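The core CDN decision described here, serving each viewer from the closest cache that actually holds the content, can be sketched as a simple selection function. The topology, cache contents, and distances below are invented for illustration.

```python
from typing import Optional

# Invented topology: edge caches, the videos they hold, and network
# distance (in ms) from each cache to each viewer region.
EDGE_CACHES = {
    "philadelphia": {"videos": {"v1", "v2"}, "distance_ms": {"east": 8, "west": 70}},
    "denver":       {"videos": {"v1"},       "distance_ms": {"east": 45, "west": 12}},
}

def pick_edge(video_id: str, viewer_region: str) -> Optional[str]:
    """Serve from the nearest cache holding the video; None means fall back to origin."""
    candidates = [
        (cache["distance_ms"][viewer_region], name)
        for name, cache in EDGE_CACHES.items()
        if video_id in cache["videos"]
    ]
    return min(candidates)[1] if candidates else None

print(pick_edge("v1", "west"))  # denver: the shortest path that holds the video
print(pick_edge("v2", "west"))  # philadelphia: the only cache with v2, despite distance
```

Pre-positioning popular titles at many edges is what turns a new release day from a core-network peak into a set of small, local peaks.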

Above: Inside Comcast's CTC in Philadelphia.

Image Credit: Comcast

While it's still early in the process, Comcast is moving to a virtualized, cloud-based network architecture so it can manage accelerating demand and deliver faster, more reliable service. Virtualization means taking functions that were once performed by large, purpose-built pieces of hardware (hardware that required manual upgrades to deliver innovation) and moving them into the cloud.

"Transitioning into web-based software is helping us self-heal much faster and build our capabilities faster," Nafshi said. "If there is a failure point, you fail at a container level rather than an appliance level, and that greatly reduces the time to repair and mitigate."

By doing this, Comcast will reduce the innovation cycles on those functions from years down to months. One example of this is the virtual CMTS initiative. (A CMTS is a large piece of hardware that serves an entire neighborhood, delivering traffic between the core network and homes.) Increasingly, Comcast has been making those devices virtual by transitioning their functions into software that runs in data centers.

This not only allows Comcast to innovate faster, it also provides two key benefits for customers. First, it allows the firm to introduce much smaller failure points into the system, grouping customers into smaller groups so if one part of the network environment experiences an issue, it affects far fewer people. Second, the virtual architecture lets Comcast leverage other AI tools to have far greater visibility into the health of the network and to self-heal issues without human intervention.

Upload speeds increased somewhat during COVID-19, but not nearly as much as download speeds did. Uploads are driven by things such as livestreamers, who share their video across a network of fans. In the future, Comcast is promising symmetrical download and upload speeds of 10 gigabits a second. It hasn't said when that will happen, but CableLabs, the research arm of the cable industry, is working on the technology.

"It's something that is very much in development," Hofmeyr said. "It's going to be remarkable. We can deploy on top of existing infrastructure by leveraging AI software and the evolving DOCSIS protocol."


Jobvite Acquires Predictive Partner Team to Accelerate AI Innovation – Business Wire

INDIANAPOLIS--(BUSINESS WIRE)--Jobvite (www.jobvite.com), the leading end-to-end talent acquisition suite, today announced that it has acquired the artificial intelligence (AI) and data science team at Predictive Partner. Morgan Llewellyn, CEO of Predictive Partner, will serve as Jobvite's Chief Data Scientist and oversee a team leveraging AI through automation, predictive analytics, data science, machine learning, natural language processing, and optical character recognition.

"As the first provider to introduce both machine vision to generate Magic Resumes and candidate de-identification technology to reduce screening bias in chat transcripts, we understand the potential AI holds for talent acquisition professionals," said Aman Brar, CEO of Jobvite. "The addition of Morgan and the Predictive Partner team to our ranks will help our customers derive even more value from the Jobvite Talent Acquisition Suite. By weaving native AI into all aspects of our software, we will deliver more than mere features; we will deliver the future of smart automation, intelligent messaging, candidate matching, and data-driven hiring decisions for talent organizations of all sizes."

"Today, many companies treat AI and analytics as bolt-on features within a specific offering," said Llewellyn. "These siloed attempts fail to understand and account for the complex relationships between different workflows, from sourcing to applications, interviews, hiring, and internal mobility. The future of AI in talent acquisition rests in a unified approach that learns across the entire candidate journey, from prospect to employee. Jobvite will use this unified approach to deliver more transparency, increase automation, mitigate bias, and improve the candidate experience. Predictive Partner is excited to join the Jobvite team and help recruiters improve their processes and outcomes while delivering a better candidate experience."

Asked for comment, Madeline Laurano, founder and chief analyst of Aptitude Research Partners, remarked: "In an industry with many fragmented startups, Jobvite's acquisition of the Predictive Partner team and making AI an inherent part of the Jobvite Talent Acquisition Suite is great for its customers."

Coinciding with the acquisition, Jobvite has also announced the launch of enhanced candidate engagement scoring and intelligent candidate matching capabilities. Enhanced candidate engagement scoring will help talent acquisition teams better gauge candidate interest through at-a-glance engagement metrics for every candidate. Intelligent candidate matching will enable recruiters to scale their efforts by reducing the time it takes to identify a qualified candidate from a large volume of candidates. With intelligent candidate matching, recruiters can focus on talent with the skills and experience needed to succeed while quickly identifying candidates who may be better suited for other open roles.
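Jobvite has not detailed its matching algorithm, but conceptually, candidate matching of the kind described above can be thought of as scoring candidates against a role's skill profile. A minimal sketch, with invented skills and weights:

```python
def match_score(candidate_skills, role):
    """Weighted overlap between a candidate's skills and a role profile."""
    required, preferred = role["required"], role["preferred"]
    req_hit = len(candidate_skills & required) / len(required)
    pref_hit = len(candidate_skills & preferred) / max(len(preferred), 1)
    return 0.8 * req_hit + 0.2 * pref_hit  # required skills dominate the score

role = {"required": {"python", "sql"}, "preferred": {"spark", "airflow"}}
candidates = {
    "ada": {"python", "sql", "spark"},
    "grace": {"java", "sql"},
}
ranked = sorted(candidates, key=lambda c: match_score(candidates[c], role),
                reverse=True)
print(ranked)  # ['ada', 'grace'] -- ada covers all required skills plus one preferred
```

Production systems replace the hand-set weights and exact string matches with learned embeddings, but the ranking-by-fit structure is the same, and it is what lets a recruiter triage a large applicant pool quickly.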

To learn more about the application of AI and analytics in talent acquisition, recruiters, HR, and TA professionals are encouraged to register for The Summer to Evolve presented by Jobvite. To learn more about Jobvite, visit http://www.jobvite.com.

About Jobvite

Jobvite is leading the next wave of talent acquisition innovation with a candidate-centric recruiting model that helps companies engage candidates with meaningful experiences at the right time, in the right way, from first look to first day. The Jobvite Talent Acquisition Suite weaves together automation and intelligence in order to increase recruiting speed, quality, and cost-effectiveness. Jobvite is proud to serve thousands of customers across a wide range of industries including Ingram Micro, Schneider Electric, Premise Health, Zappos.com, and Blizzard Entertainment. To learn more, visit http://www.jobvite.com or follow the company on social media @Jobvite.

About Predictive Partner

Predictive Partner is a leading data science firm that solves critical business problems. Leveraging predictive analytics, data science, machine learning, and artificial intelligence, Predictive Partner achieves transformational business results for its clients. A team-based model with experienced Ph.D. data scientists allows clients to deploy and scale their data strategies with low risk and high dependability. To learn more, visit https://predictivepartner.com.


AMP Robotics Named to Forbes AI 50 | RoboticsTomorrow – Robotics Tomorrow

Company recognized among rising stars of artificial intelligence for its AI-guided robots transforming the recycling industry

Forbes has named AMP Robotics Corp. ("AMP"), a pioneer and leader in artificial intelligence (AI) and robotics for the recycling industry, one of America's most promising AI companies. The publication's annual "AI 50" list distinguishes private, U.S.-based companies that are wielding some subset of artificial intelligence in a meaningful way and demonstrating real business potential from doing so. To be included on the list, companies needed to show that techniques like machine learning, natural language processing, or computer vision are a core part of their business model and future success.

AMP's technology recovers plastics, cardboard, paper, metals, cartons, cups, and many other recyclables that are reclaimed for raw material processing. AMP's AI platform uses computer vision to visually identify different types of materials with high accuracy, then guides high-speed robots to pick out and recover recyclables at superhuman speeds for extended periods of time. The AI platform transforms images into data to recognize patterns, using machine learning to train itself by processing millions of material images within an ever-expanding neural network of robotic installations.
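AMP has not published its control code, but the classify-then-pick loop described above can be sketched at a high level: a vision model emits labeled detections, and confident detections of recyclable classes are queued for the robot. The labels, confidence threshold, and coordinates below are placeholders, not AMP's actual categories.

```python
# Placeholder material classes; a real system learns many more categories.
RECYCLABLE = {"PET", "HDPE", "cardboard", "aluminum", "carton"}

def plan_picks(detections, min_confidence=0.85):
    """detections: (label, confidence, (x, y)) tuples from the vision model.
    Queue confident detections of recyclable material for the robot arm."""
    picks = []
    for label, confidence, position in detections:
        if label in RECYCLABLE and confidence >= min_confidence:
            picks.append({"target": position, "bin": label})
    return picks

frame = [("PET", 0.97, (120, 340)),
         ("film_plastic", 0.60, (300, 90)),   # low confidence: left on the belt
         ("aluminum", 0.91, (420, 210))]
for pick in plan_picks(frame):
    print(f"pick at {pick['target']} -> {pick['bin']} bin")
```

The confidence gate is the safety valve: uncertain items ride past rather than contaminating a sorted stream, and the misses become training data for the next model iteration.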

"We consider AMP a category-defining business and believe its artificial intelligence and robotics technology are poised to solve many of the central challenges of recycling," said Shaun Maguire, partner at Sequoia Capital and AMP board member. "The opportunity for modernization in the industry is robust as the demand for recycled materials continues to swell, from consumers and the growing circular economy."

AMP's "AI 50" recognition comes on the heels of receiving a 2020 RBR50 Innovation Award from Robotics Business Review for the company's Cortex Dual-Robot System. Earlier this year, Fast Company named AMP to its "World's Most Innovative Companies" list for 2020, and the company captured a "Rising Star" Company of the Year Award in the 2020 Global Cleantech 100.

Since its Series A fundraising in November, AMP has been on a major growth trajectory as it scales its business to meet demand. The company announced a 50% increase in revenue in the first quarter of 2020, a rapidly growing project pipeline, a facility expansion in its Colorado headquarters, and a new lease program that makes its AI and robotics technology even more attainable for recycling businesses.

About AMP Robotics Corp.

AMP Robotics is applying AI and robotics to help modernize recycling, enabling a world without waste. The AMP Cortex high-speed robotics system automates the identification and sorting of recyclables from mixed material streams. The AMP Neuron AI platform continuously trains itself by recognizing different colors, textures, shapes, sizes, patterns, and even brand labels to identify materials and their recyclability. Neuron then guides robots to pick and place the material to be recycled. Designed to run 24/7, all of this happens at superhuman speed with extremely high accuracy. With deployments across the United States, Canada, Japan, and now expanding into Europe, AMP's technology recycles municipal waste, e-waste, and construction and demolition debris. Headquartered and with manufacturing operations in Colorado, AMP is backed by Sequoia Capital, Closed Loop Partners, Congruent Ventures, and Sidewalk Infrastructure Partners ("SIP"), an Alphabet Inc. (NASDAQ: GOOGL) company.


Exploring the transformational impact of AI and advanced analytics – Information Age

As part of Information Age's blockchain and emerging technology month, we explore the transformational impact of advanced AI and analytics

AI and advanced analytics can transform industries, but they have to be implemented properly.

AI and advanced analytics can have a transformational impact on every aspect of a business, from the contact centre or supply chain to the overall business strategy.

With the new challenges caused by coronavirus, companies have a growing need for more advice, more data, and more visibility to minimise the business impact of the virus.

However, long before the disruption caused by Covid-19, data was recognised as an essential asset in delivering improved customer service. And yet, businesses of all sizes have continued to struggle with gaining more tangible value from their vast hoards of data to improve the employee and customer experience.

Data silos, creaking legacy systems and fast-paced, agile competitors have made the need to harness an organisation's data to drive value of paramount importance.

The challenge is huge and many traditional and new companies are waking up to the use of the partner ecosystem and the need to utilise various technologies, like AI and advanced analytics, to stave off disruption and innovate by taking advantage of data.

From adopting industry standards to the use of graph databases and a real-life use case of AI and advanced analytics in action, six experts explore the transformational impact of AI and advanced analytics, while explaining how to implement the technologies.

Patrick Smith, field CTO EMEA at Pure Storage, understands the value of data. "It's the most valuable form of modern currency," he says.

However, he points out that vast swathes of business data are only actionable if they can be processed, read and understood fast. In this sense, advanced analytics is data's unsung hero: it does all the heavy lifting, underpins business transformation efforts, and helps companies both big and small increase their results and performance.

Despite this knowledge, Smith highlights that most organisations lack the infrastructure and analytical software or the know-how to implement AI and advanced analytics effectively.

He explains that to overcome this, companies must be laser-focused on aligning their data strategy to their business goals, and work with technology partners to provide a modern data experience based on infrastructure that is lightning fast, scale-out and easy to use.

Caserta's CEO and principal data strategist discusses the data and analytics predictions and priorities for the year ahead. Read here

For the last decade, business intelligence has been used to gain insight from historical data, but until recently, these analytical techniques have been mainly manual.

This is changing, and Wayne Butterfield, director at global technology research and advisory firm ISG, explains that business leaders are welcoming the promise of artificial intelligence (AI) to both remove the manual process and improve the quality of insight.

He says: "Data-driven insights using historical data to predict future outcomes combine data, advanced analytics and AI to transform decision making, based on predictive insights in areas like revenue, demand and supply.

"It's still early days, but auto machine learning (AutoML) technologies are lowering the barrier to entry for organisations that may not have large teams of data scientists, but that still see the value in looking forward and not backwards with their data."

Pointing to AutoML tools like Kortical.io and DataRobot, Butterfield explains that these are becoming more popular in automation centres of excellence, as advanced AI models are plugged into relatively simple robotic process automation-type processes to take action based on their predictions.

Kerrie Heath, European sales director, AI at OpenText, says that extracting value from data shouldn't be a daunting task.

"By adopting advanced AI-powered analytics, organisations can drive value in real time and deliver it in a visual, interactive format that lets users easily make predictions about products, topics, events, trends, and even themes and emotions," she says.

"Only with a complete view of this unstructured data, combined with structured data from enterprise systems in real time, will organisations be able to analyse, understand and manage their enterprise digital ecosystem more efficiently. In turn, organisations are providing themselves with the tools to ensure and enforce data governance," adds Heath.

Greg Hanson of Informatica explains the importance of an end-to-end data engineering approach in achieving success with AI initiatives. Read here

Alejandro Saucedo, engineering director at Seldon, believes the implementation of advanced AI and analytics is having a tremendous impact on society.

He says: "Both of these technologies lead to massive surges in productivity, and huge reductions in costs, both opportunity costs and actual costs. Efficiency improvements from advanced AI will transform certain sectors over the next few years, notably transport, energy, and infrastructure."

However, Saucedo points out that if not implemented properly, AI can bring about undesirable outcomes for organisations, particularly when it comes to compromised cybersecurity, privacy, and trust.

"To optimally implement AI and ensure it provides a net gain for our economy and society, we need to develop general and industry-specific standards, and fit-for-purpose regulatory frameworks. Transparent and enforceable frameworks are key, and we need to guarantee that relevant experts, technical and non-technical, are continuously involved in developing and updating them," he advises.

Amy Hodler, analytics and AI programme manager at graph database firm Neo4j, says that the logical extension of analytics is to use the relationships and network structures that are held in all data and are proven to be extremely predictive. This will transform analytics and AI, as connectivity-based learning is necessary to address complex questions, including those about system dynamics and group behaviour, with less data.

"Businesses can leverage connected data insights held in a graph database with efficiency and flexibility otherwise unattainable using a relational database. Because a graph database is built to preserve and compute over relationships, it enables valuable and often nuanced predictions, such as pinpointing interactions that indicate fraud, identifying similar entities or individuals, finding the most influential elements in a patient or customer journey, or even ameliorate the spread of IT or phone outages."

She continues: "Data scientists gain a force multiplier when they use graph algorithms to understand the natural shape of complex systems through data patterns and increase predictive accuracy. When used in a framework that automatically transforms a stored graph into a computational graph, they benefit from a flexible data structure that provides better predictions, more automation and contextually responsive AI."
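As a concrete (if toy) illustration of graph features acting as a force multiplier, the sketch below uses the open source networkx library rather than Neo4j itself to compute relationship-based features such as PageRank and triangle counts that a downstream classifier could consume alongside conventional attributes. The account graph is invented.

```python
import networkx as nx

# A toy account graph: a tight ring of accounts plus an isolated pair.
G = nx.Graph()
G.add_edges_from([
    ("acct_a", "acct_b"), ("acct_b", "acct_c"), ("acct_c", "acct_a"),
    ("acct_d", "acct_e"),
])

pagerank = nx.pagerank(G)     # influence within the network
triangles = nx.triangles(G)   # dense interconnection, e.g. a possible fraud ring

# Relationship-based features for a downstream model, computed per node.
features = {
    node: {"pagerank": round(pagerank[node], 3), "triangles": triangles[node]}
    for node in G.nodes
}
print(features["acct_a"])  # nonzero triangle count flags ring membership
```

None of these features exists in any single row of a relational table; they only emerge from the connections, which is the point Hodler is making.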

Life insurance is just one of many industries that can be transformed using AI and advanced analytics.

Paul Donnelly, executive vice president of EMEA at Munich Re Automation Solutions, explains how AI and advanced analytics are used in his industry, life insurance.

He says: "Insurance is rife with manual processes and back-office procedural steps, leading to poor customer experiences. And while purchasing life insurance isn't something we look forward to anyway, complex processes certainly don't help entice modern digital-savvy customers.

"This is where AI and data analytics come in. Such advanced technology optimises the end customer's journey for many reasons. For example, harnessing AI techniques means that we can bypass the need to ask customers endless, repeated, personal questions and instead route them through the questions which are relevant to them. Because in a world where we can easily buy most products we want in minutes with a few clicks, a drawn-out life insurance process simply is not appealing.

"Furthermore, analytics allow insurers to take advantage of vast amounts of applicant data and transform it into actionable insight. These insights allow insurers to amend underwriting rules in real time, resulting in technologies that design, evolve and streamline interview processes for customer convenience and a faster time to underwrite the customer. An insurer that doesn't do so is being careless with its customers' time."
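The dynamic questioning Donnelly describes can be modeled as a question graph in which each answer activates only the relevant follow-ups, so applicants never see questions that don't apply to them. A minimal sketch, with invented questions and routing rules:

```python
# Invented question graph: each answer activates only relevant follow-ups.
QUESTIONS = {
    "smoker": {"text": "Do you smoke?", "follow_ups": {"yes": ["smoke_years"]}},
    "smoke_years": {"text": "For how many years?", "follow_ups": {}},
    "hazard_sports": {"text": "Do you practice hazardous sports?", "follow_ups": {}},
}

def run_interview(answers):
    """answers: dict simulating the applicant's responses by question id."""
    queue, asked = ["smoker", "hazard_sports"], []
    while queue:
        qid = queue.pop(0)
        asked.append(qid)
        answer = answers.get(qid, "no")
        queue.extend(QUESTIONS[qid]["follow_ups"].get(answer, []))
    return asked

print(run_interview({"smoker": "no"}))   # ['smoker', 'hazard_sports']
print(run_interview({"smoker": "yes"}))  # adds 'smoke_years' only when relevant
```

Because the routing table is data rather than code, the real-time rule amendments Donnelly mentions amount to updating entries in this structure, without redeploying the interview itself.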


China Everbright Limited and Terminus Technologies Launch A 10-Billion AI Economy Fund – PRNewswire

BEIJING, July 12, 2020 /PRNewswire/ -- China Everbright Limited ("CEL"), and Terminus Technologies (Terminus), recently announced the joint launch of "CEL AI Economy Fund", aiming to raise RMB 10 billion and operate in both Renminbi and US dollars.

So far, the fund has received RMB 7 billion in Phase 1 from institutional investors, with the US dollar tranche to launch soon. CEL AI Economy Fund focuses on the application of the AIoT+ strategy and its ecology across the entire industry. The fund aims at developing the next-generation ICT-enabled industrial chain through equity investments in areas including Smart City projects, autonomous driving solutions, smart healthcare systems, intelligent transportation, and smart retail. The establishment of this series of funds reflects China Everbright Limited's investment philosophy of making investments around the industry and combining industry with finance. CEL AI Economy Fund opens a new chapter in China Everbright Limited's investment strategy, enabling a smooth and flawless transition from the new economy into the AI economy.

The establishment of the CEL AI Economy Fund will accelerate the global deployment of Terminus' AI CITY network. By closely connecting financing and industries, it will facilitate the construction of intelligent new infrastructure, the growth of AI, 5G, Internet of Things, cloud computing and other technology-based infrastructure, and the formation of the new model of smart city. This is another major move taken by China Everbright Limited following its new "One-Four-Three" strategy last year. China Everbright Limited's "One-Four-Three" strategy will focus on investing in 4 key industries (AIoT, the entire aircraft industry, real estate management, and retirement management) to incubate and cultivate 4 leading companies (Terminus Technologies, CALC, EBA Investments, and China Everbright Senior Healthcare Company Limited), and to raise funds for each of the four leading companies in 3 years.

According to Dr. Zhao Wei, Executive Director and Chief Executive Officer of China Everbright Limited, "As China's central government proposed to accelerate the new infrastructure construction, its featured industries such as 5G communication technology, AI, Internet of Things and data centers have become the new essential production factors. This will drive social and economic development and innovation in many aspects, and empower the industries in the intelligent new economy. Especially, when China is undergoing an economic transformation and replacing old growth drivers with new ones, industrial investment cannot blindly seek scale and completeness. Instead, investors shall depend on their own advantages and focus on the real economy to serve the best interest of overall economic development. The establishment of CEL AI Economy Fund is an innovative breakthrough for China Everbright Limited. It carries out the 'Three Big and One New', the strategic blueprint for the new technology sector laid out by China Everbright Group, marking an important move to strengthen the four core strategic platforms of China Everbright Limited."

Victor Ai, founder and CEO of Terminus Technologies, said: "CEL AI Economy Fund finds industrial development is the fostering ground for its new economy mindset. Through industrial agglomeration and urban function upgrades, the fund will take big data, AI and the Internet of Things as the leading power to realize the intelligent economy and coordinated development. Terminus' AI CITY strategy aims to combine innovative design and cutting-edge technology to drive the development of next-generation cities, and create a new paradigm for urban construction and operation. With its rich practice and leading experience in the AI CITY field, and its core technologies and product capabilities in AI and the Internet of Things, Terminus will be able to fundamentally accelerate the development and innovation of the intelligent new economy. And CEL AI Economy Fund is another effective support for Terminus to achieve its global strategic layout of AI CITY."

On July 2, Expo 2020 Dubai announced Terminus Technologies as its 12th Official Premier Partner. Leveraging the company's cutting-edge AIoT technologies and its global competitive advantages in the field of AI City, Terminus Technologies will empower Expo 2020 Dubai with its technological innovations together with Cisco, Siemens, SAP, Accenture, and other widely known tech giants, making it possible for city-level smart solutions to exploit their full potential in the age of AI. Meanwhile, Terminus Technologies is actively engaged in establishing its first overseas branch in District 2020 and further developing its innovative AI City ecosystem.

Prior to this, in April, Terminus Technologies' first AI industrial base was launched in Chongqing. The project is a milestone in "new infrastructure" development, part of the Chinese national strategy of Smart City construction. Supported by high-quality strategic resources at home and abroad, this model is expected to be replicated and expanded in core cities across various countries and regions in the future, establishing a global technology network of Terminus Technologies' AI CITY and achieving its goal of international construction and development.

SOURCE China Everbright Limited


Analyzing Impacts of Covid-19 on Cognitive System and Artificial Intelligence (AI) Systems Market Effects, Aftermath and Forecast To 2026 – Cole of…

The global Cognitive System and Artificial Intelligence (AI) Systems market report compiles major statistical evidence for the Cognitive System and Artificial Intelligence (AI) Systems industry, guiding readers through the obstacles surrounding the market. The study covers factors such as global distribution, manufacturers, market size, and the market factors that affect global contributions. In addition, the study examines the competitive landscape in depth, along with defined growth opportunities, market share by product type and application, the key companies responsible for production, and the strategies they employ.

This market intelligence report, with forecasts to 2026, analyzes historical data gathered from reliable sources and sets out a growth trajectory for the Cognitive System and Artificial Intelligence (AI) Systems market. The report also covers comprehensive market revenue streams, growth patterns, analytics focused on market trends, and the overall volume of the market.

Download PDF Sample of Cognitive System and Artificial Intelligence (AI) Systems Market report @ https://hongchunresearch.com/request-a-sample/25074

The study covers the following key players: Brainasoft, Brighterion, Astute Solutions, KITT.AI, IFlyTek, Google, Megvii Technology, NanoRep (LogMeIn), IDEAL.com, Intel, Salesforce, Albert Technologies, Microsoft, Ada Support, Ipsoft, SAP, Yseop, IBM, Wipro, H2O.ai, and Baidu.

Moreover, the Cognitive System and Artificial Intelligence (AI) Systems report describes the market division based on parameters such as geographical distribution, product types, and applications. The market segmentation further clarifies regional distribution for the Cognitive System and Artificial Intelligence (AI) Systems market, business trends, potential revenue sources, and upcoming market opportunities.

Market segment by type: the Cognitive System and Artificial Intelligence (AI) Systems market can be split into On-Premise and Cloud-based.

Market segment by application: the Cognitive System and Artificial Intelligence (AI) Systems market can be split into Voice Processing, Text Processing, and Image Processing.

The Cognitive System and Artificial Intelligence (AI) Systems market study further highlights the segmentation of the industry by global distribution. The report focuses on the regions of North America, Europe, Asia, and the Rest of the World in terms of developing business trends, preferred market channels, investment feasibility, long-term investments, and environmental analysis. The report also investigates product capacity, product price, profit streams, supply-to-demand ratio, production and market growth rate, and projected growth forecasts.

In addition, the Cognitive System and Artificial Intelligence (AI) Systems market study covers several factors such as market status, key market trends, growth forecasts, and growth opportunities. Furthermore, we analyze the challenges faced by the Cognitive System and Artificial Intelligence (AI) Systems market on a global and regional basis. The study also encompasses a number of opportunities and emerging trends, weighed by their impact on the global scale in acquiring a majority of the market share.

The study draws on a variety of analytical resources, such as SWOT analysis and Porter's Five Forces analysis, coupled with primary and secondary research methodologies. It covers all the bases surrounding the Cognitive System and Artificial Intelligence (AI) Systems industry as it explores the competitive nature of the market, complete with a regional analysis.

Brief about Cognitive System and Artificial Intelligence (AI) Systems Market Report @ https://hongchunresearch.com/report/cognitive-system-and-artificial-intelligence-ai-systems-market-25074

Selected Points from the Table of Contents:

Chapter One: Cognitive System & Artificial Intelligence(AI) Systems Market Overview

Chapter Two: Global Cognitive System & Artificial Intelligence(AI) Systems Market Landscape by Player

Chapter Three: Players Profiles

Chapter Four: Global Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue (Value), Price Trend by Type

Chapter Five: Global Cognitive System & Artificial Intelligence(AI) Systems Market Analysis by Application

Chapter Six: Global Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import by Region (2014-2019)

Chapter Seven: Global Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue (Value) by Region (2014-2019)

Chapter Eight: Cognitive System & Artificial Intelligence(AI) Systems Manufacturing Analysis

Chapter Nine: Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter Ten: Market Dynamics

Chapter Eleven: Global Cognitive System & Artificial Intelligence(AI) Systems Market Forecast (2019-2026)

Chapter Twelve: Research Findings and Conclusion

Chapter Thirteen: Appendix

Check Discount @ https://hongchunresearch.com/check-discount/25074

List of Tables and Figures (abridged): the report enumerates, for 2014-2026, revenue and growth-rate figures for each covered country and region (the United States; Europe and its major markets; China, Japan, India, and Southeast Asia; Central and South America; and the Middle East and Africa); production, revenue, price, and gross-margin tables for each of the profiled players listed above; production and revenue breakdowns by type (On-Premise, Cloud-based); consumption breakdowns by application (Voice Processing, Text Processing, Image Processing); and production, consumption, export, and import tables by region.

About HongChun Research: HongChun Research's main aim is to assist clients by giving them a detailed perspective on current market trends and by building long-lasting connections with its clientele. Its studies are designed to provide solid quantitative facts combined with strategic industrial insights acquired from proprietary sources and an in-house model.

Contact Details: Jennifer Gray, Manager, Global Sales, +852 8170 0792

Original post:

Analyzing Impacts of Covid-19 on Cognitive System and Artificial Intelligence (AI) Systems Market Effects, Aftermath and Forecast To 2026 - Cole of...

Posted in Ai

Nearly quarter of lawyers fear impact of digitalisation and AI on legal profession, survey finds – The Global Legal Post

Claire Debney (left) and Emma Sharpe: 'The status quo we've been living with isn't working'

The MOSAIC Mood Index also shows the vast majority of lawyers are taking work-related stress home with them

Almost half of legal professionals don't feel positive about the future of the industry, according to a survey from the MOSAIC Collective.

The MOSAIC Mood Index, which surveyed nearly 1,500 lawyers across the world, showed that 49% of lawyers are concerned about the future of the profession, with almost a quarter of respondents fearful about the impact of digitalisation, technology and artificial intelligence.

The survey, which is supported by the Legal 500 directory, revealed that the stresses of the job are also taking a toll on lawyers' wellbeing: 94% said the mood their job puts them in affects their personal life, and more than half said they find it hard to talk about how they are feeling. A majority of lawyers said that being more active and engaged in conversations about their future prospects could help improve their mood.

Claire Debney and Emma Sharpe, co-founders of lawyer mentoring and training consultancy MOSAIC, said: "This is a unique period of time, with multiple generations of workers in the workplace, each generation bringing different attitudes about work and wellbeing. Add to this the backdrop of living and working through a global pandemic, the first in living memory for any of us, and we're seeing a rapid and seismic shift in the way we work, alongside some real challenges to wellbeing."

While roughly four out of every five respondents said they are content in their jobs, 39% said they have no career plan in place and only around one in 10 said they feel like their manager looks after their interests.

Lawyers said their happiness levels are mainly influenced by salary, their job title and recognition, the quality and meaningfulness of work, and the amount of flexibility and work-life balance their job offers. Respondents said loneliness and high levels of work-related stress are the main downsides. Some 70% said a lack of time prevents them from making positive changes to improve their happiness.

Debney and Sharpe added: "We believe what [respondents] told us in the MOSAIC Mood Index remains relevant and is actually critical to understand, so that the learnings are not lost in a return to normal as we continue to live with the fallout and impact of the Covid-19 pandemic. It shows that the status quo we've been living with isn't working."

The survey's 1,477 respondents included lawyers working in law firms and in-house legal teams; the largest geographic segments were the UK (31%), Western Europe (23%), and Central and South America (18%).

In May, a survey of law firm associates in the US, the UK and Asia conducted by Major Lindsey & Africa found just over a fifth of respondents were worried about potential cost-cutting measures due to the Covid-19 pandemic, with 10% reporting mental health as a primary concern.

See the original post here:

Nearly quarter of lawyers fear impact of digitalisation and AI on legal profession, survey finds - The Global Legal Post

Posted in Ai

Microsoft Upgrades Azure AI to Analyze Health Records and Streamline Voice App Creation – Voicebot.ai

on July 13, 2020 at 8:00 am

Microsoft's artificial intelligence services are now able to mine electronic medical records for new insights and to simplify building or improving voice apps, after a spate of updates to the Azure AI platform. Azure Cognitive Services provides enterprise-level AI services to companies that want to apply artificial intelligence to their work.

The COVID-19 health crisis has accelerated the use of AI as a doctor's assistant in record-keeping. Azure connects doctors' notes and conversations with patients to electronic medical records, both through the Project EmpowerMD Intelligent Scribe Service and as a host platform for Nuance's virtual assistant for doctors, after the two companies reached an agreement last fall. Now, Azure can help medical professionals glean new conclusions from that data using Text Analytics for health. Microsoft took the existing Text Analytics feature and trained it on medical data such as clinical notes and protocols, teaching it to find and share insights from the huge amounts of medical data doctors normally have to pore through manually to find patterns. Though still in preview, the feature has already been put to work: Microsoft partnered with research groups to create a search engine specifically about COVID-19, using both Text Analytics and Cognitive Search, that should help those hunting for treatments for the virus. The updated Text Analytics feature can not only analyze facts but also apply emotional tags to topics in any context, whether healthcare, sales, or another industry.
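To make the feature concrete, here is a minimal sketch of how a developer might call the healthcare analysis capability through the azure-ai-textanalytics Python SDK. This is an assumed usage pattern, not code from Microsoft's announcement; the endpoint, key, and sample note are placeholders, and method names can differ across SDK versions.

```python
# A hedged sketch: extract medical entities from a clinical note with Azure
# Text Analytics for health. Endpoint and key are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

notes = ["Patient reports shortness of breath; prescribed 100mg ibuprofen."]
poller = client.begin_analyze_healthcare_entities(notes)

for doc in poller.result():
    for entity in doc.entities:
        # e.g., "ibuprofen" tagged as a medication, "100mg" as a dosage
        print(entity.text, entity.category, entity.confidence_score)
```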

"As the world adjusts to new ways of working and staying connected, we remain committed to providing Azure AI solutions to help organizations invent with purpose," Azure AI corporate vice president Eric Boyd wrote in the announcement. "Building on our vision to empower all developers to use AI to achieve more, today we're excited to announce expanded capabilities within Azure Cognitive Services."

Microsoft is also opening up the Form Recognizer feature it showcased a little over a year ago to all Azure users. Form Recognizer is designed to use AI to grasp what a form full of data in tables and non-standard formats means, and to pull out that information for easier analysis. While likely applicable to some of the forms used in healthcare, Microsoft specifically cited financial organizations such as Capgemini Group's Sogeti and Wilson Allen as finding value in the feature for processing loan applications and other fiduciary paperwork.
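For a sense of what "pulling out that information" looks like in practice, here is a minimal sketch using the azure-ai-formrecognizer SDK's layout recognition. Again, this is an assumed usage pattern rather than announcement code, and the file name and credentials are placeholders.

```python
# A hedged sketch: recover table cells from a scanned form so non-standard
# layouts become addressable rows and columns.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormRecognizerClient

client = FormRecognizerClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("loan_application.pdf", "rb") as form:  # hypothetical document
    poller = client.begin_recognize_content(form)

for page in poller.result():
    for table in page.tables:
        for cell in table.cells:
            print(cell.row_index, cell.column_index, cell.text)
```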

Azure didn't neglect the voice facet of its AI in the update either. Most notably, it made Custom Commands generally available to developers. Custom Commands simplifies connecting voice apps to devices that can be controlled within straightforward parameters, like light levels or the temperature on a thermostat. The AI comes with a wide range of commands it understands in its templates and the ability to switch among different topics and types of requests automatically.

"People and organizations continue to look for ways to enrich customer experiences while balancing the transition to digital-led, touch-free operations," Boyd wrote. "Advancements in voice technology are empowering developers to create more seamless, natural, voice-enabled experiences for customers to interact with brands. [Custom Commands] brings together Speech to Text for speech recognition, Language Understanding for capturing spoken entities, and voice response with Text to Speech, to accelerate the addition of voice capabilities to your apps with a low-code authoring experience."

Those capabilities include 15 new voices built with Azure's Neural Text to Speech technology. The voices are designed to sound natural, using real people's voices to teach the AI to sound like a human. They include a mix of new languages and dialects as well as new voices for languages the AI already supports, among them two varieties of Arabic, Catalan, Cantonese, and Taiwanese Mandarin. It's the same technology used by the BBC to build its new Beeb voice assistant, and it points to the global enterprises Microsoft hopes will use Azure's technology.
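Synthesizing speech with one of these neural voices is a short exercise with the Speech SDK. The sketch below is an assumed usage, not from the announcement; the key and region are placeholders, and the specific voice name shown is one commonly available neural voice rather than one of the 15 new additions.

```python
# A hedged sketch: speak a sentence through a neural voice on the default speaker.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"  # assumed voice name

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from a neural voice.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis finished; audio played on the default speaker.")
```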

Microsoft Adds New Speech Styles and Lyrical Emotion to Azure AI Toolkit

Microsoft Will Bring Nuance Clinical Voice Tech to Azure

Microsoft Adds New Language Options to Power Virtual Agents Platform

Eric Hal Schwartz is a Staff Writer and Podcast Producer for Voicebot.AI. Eric has been a professional writer and editor for more than a dozen years, specializing in the stories of how science and technology intersect with business and society. Eric is based in New York City.

Go here to see the original:

Microsoft Upgrades Azure AI to Analyze Health Records and Streamline Voice App Creation - Voicebot.ai

Posted in Ai

China Wants to Lead the World on AI. What Does That Mean for America? – The National Interest

Years ago, the thought of using software to fight a deadly pathogen might have seemed far-fetched. Today, it's a reality. The coronavirus pandemic has caused monumental shifts in the use and deployment of artificial intelligence (AI) around the world.

Of those now using AI to fight the coronavirus, none are more prominent than China. From software that diagnoses the symptoms of the virus to algorithms that identify and compile data on individuals with high temperatures via infrared cameras, China is showcasing the potential applications of AI. But Beijing is also demonstrating its willingness to leverage the technology to solve many of its problems.

To understand the potential benefits and perils, we need to delve a bit deeper into the subject of AI itself. Artificial intelligence essentially falls into two categories: narrow and general. Narrow AI is a type of machine learning that is limited to specifically defined tasks, while general AI refers to totally autonomous intelligence akin to human cognition. General AI remains a distant dream for many, but the real-world implications of narrow AI exist in the presentand China is working diligently to become a world leader in it.

In his book AI Superpowers: China, Silicon Valley, and the New World Order, former Microsoft executive and Google China president Kai-Fu Lee describes how the country began rapid development of AI in response to AlphaGo, a software program that bested the world's top player in the ancient game of Go back in 2017. That victory, Lee explains, demonstrated to China's Communist Party (CCP) a research technology with seemingly infinite potential.

The revelation was a sea change. In its 2019 Annual Report, the U.S.-China Economic and Security Review Commission noted that the Next Generation AI Development Plan released in 2017 by China's State Council "marked a shift in China's approach to AI, from pursuing specific applications to prioritizing AI as foundational to overall economic competitiveness."

The results have been rapidand pronounced. China is still considered to be second in the race to AI (behind the U.S.), but it is quickly gaining traction. As the United Nations World Intellectual Property Organization (WIPO) noted last year, China leads in AI-related publications and patent applications originating from public research institutions, and the gap is shrinking between the U.S. and China in patent requests originating from the private sector.

And because the aggregation of vast swathes of data is what drives the most effective artificial intelligence, China is in a unique position to persevere. With the world's largest population and close to no data privacy protections, the PRC has the potential to develop the world's best AI products.

Beijing is also working hard to maintain its freedom of action in this domain. Back in March, China tried to install its own candidate as head of the WIPO, and nearly succeeded, a move that would essentially have assured that its lengthy track record of intellectual property violations, theft, and espionage would not come with any consequences.

Those practices are already raising international hackles. In April 2020, Bloomberg reported that electric carmaker Tesla is seeking further legal action to analyze the source code of a competitor's product in China, after a former Tesla employee allegedly left the company in 2018 for the Chinese startup, carrying with him secrets from Tesla's self-driving AI, AutoPilot.

But the CCP is also harnessing AI to strengthen its authoritarian state. Against the backdrop of the coronavirus pandemic, the Chinese government has stepped up its repressive domestic practices, including its persecution and detention of Uyghur Muslims in Western China and a broad crackdown on Hong Kong. Worryingly, Chinese advances in AI seem to be empowering these practices, as well as making them more effective.

These dynamics should matter a great deal to the United States, which has stepped up its strategic competition with China in earnest in recent months. China's activism on the AI front, and its attention to this emerging technology, has made abundantly clear that the PRC places tremendous value on dominating the field of AI. Washington should think deeply about what that would mean, in both a political and a technological sense. And then it should get just as serious in this sphere as well.

Ryan Christensen is a researcher at the American Foreign Policy Council in Washington, DC.

See the original post:

China Wants to Lead the World on AI. What Does That Mean for America? - The National Interest

Posted in Ai

10 Jobs That Should Emerge to Help Enterprises Advance AI and ML – ITPro Today

Here and elsewhere, you've likely read many articles and studies on the potentially transformative effect of artificial intelligence and machine learning on the workplace. We've seen some of that transformation unfold more quickly during the ongoing COVID-19 pandemic, as workplaces across sectors explore automation to ensure essential processes, from security checks to invoice payments, keep happening.

There are concerns that AI and ML will cause significant job losses, and it seems inevitable that they will change or even eliminate some kinds of positions. But to unlock the potential of AI and ML for the enterprise, existing job roles must be filled and new ones must be created. Below are 10 workplace roles that could emerge as AI/ML continues to advance and organizations continue to integrate the technology into their operations.

1. Knowledge manager: As part of its Project Cortex rollout, Microsoft wants companies to hire knowledge managers. These employees would be responsible for the quality of knowledge shared across an organization and for aggregating a companywide taxonomy.

2. AI scientist: Some organizations, of course, have just such a role in place already, but as artificial intelligence becomes increasingly powerful and is adopted by more and more organizations, these specialists will become essential to a growing number of companies.

3. AI manager: And of course, if you are adding AI scientists and other AI/ML experts to your team, someone with knowledge and experience in that field needs to manage them and help them work together. Management staffers with specific experience in artificial intelligence and machine learning could become increasingly important in integrating these technologies across an organization.

4. Subject matter expert: Also recommended by Microsoft in relation to Project Cortex, an organization's subject matter experts would have a deep understanding of how information is organized in the areas under their purview. As Microsoft imagines it, someone in this role would work closely with the knowledge manager.

5. Personality designer: Behind every AI architecture is a personality that someone had to design. Think of Siri, for example: someone decided what the responses would be like, how the voice would sound, and so on. As virtual assistants powered by machine learning become an increasingly common part of our work (and home) lives, the work of these designers, and of related workers like writers and UI/UX professionals, will be in even more demand.

6. AI trainer: The underlying structures of AI and ML products and services must be trained, and trained well, to be effective. That can be done with machines, but it's likely to be far more effective if a human is selecting the information with an eye to effectiveness and bias.

7. Content services administrator: In some cases, a content services or knowledge administrator would represent an expansion of an existing role, like a SharePoint or Teams administrator. But this IT professional would set up and run knowledge product suites, like Cortex, Microsoft hopes.

8. Intelligence ethicist: The world is increasingly grappling with ethical issues brought forward by AI and ML, from built-in bias to spurious or even dangerous or illegal applications of technology. Large firms, in particular, will need intelligence ethicists to guide the decisions made by the products and services they are developing.

9. Data detective: Have you been impressed by the work done by COVID-19 contact tracers, or intrigued by the possibilities (and pitfalls) of location-tracing apps? Work as a data detective could be in your future. These employees could use data points (for example, the locations someone has visited) to solve problems, create datasets for AI/ML training, and develop new products and services.

10. Data broker: AI- and ML-driven technologies require reams of data to learn from, and that data has to come from somewhere. A data broker would be in charge of accessing, managing, and deploying that data for an organization. It's a role likely to become increasingly complex as more and more jurisdictions add data-centric regulations like the California Consumer Privacy Act.

More:

10 Jobs That Should Emerge to Help Enterprises Advance AI and ML - ITPro Today

Posted in Ai

DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism – VentureBeat

Researchers from Google's DeepMind and the University of Oxford recommend that AI practitioners draw on decolonial theory to reform the industry, put ethical principles into practice, and avoid further algorithmic exploitation or oppression.

The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohammed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary General's High-level Panel on Digital Cooperation.

The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

"Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present," the paper reads. "This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress."

The paper incorporates a range of suggestions, such as analyzing data colonialism and the decolonization of data relationships, and employing the critical technical approach to AI development that Philip Agre proposed in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that an indifferent field serves the powerful. VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020 as more businesses and national governments consider how to put AI ethics principles into practice.

The DeepMind paper interrogates how colonial features are found in algorithmic decision-making systems and what the authors call "sites of coloniality," or practices that can perpetuate colonial AI. These include beta testing on disadvantaged communities, like Cambridge Analytica conducting tests in Kenya and Nigeria, or Palantir using predictive policing to target Black residents of New Orleans. There's also "ghost work," the practice of relying on low-wage workers for data labeling and AI system development. Some argue ghost work can lead to the creation of a new global underclass.

The authors define algorithmic exploitation as the ways institutions or businesses use algorithms to take advantage of already marginalized people, and algorithmic oppression as the subordination of one group of people, and the privileging of another, through the use of automation or data-driven predictive systems.

Ethics principles from groups like G20 and OECD feature in the paper, as well as issues like AI nationalism and the rise of the U.S. and China as AI superpowers.

"Power imbalances within the global AI governance discourse encompasses issues of data inequality and data infrastructure sovereignty, but also extends beyond this. We must contend with questions of who any AI regulatory norms and standards are protecting, who is empowered to project these norms, and the risks posed by a minority continuing to benefit from the centralization of power and capital through mechanisms of dispossession," the paper reads. Tactics the authors recommend include political community action, critical technical practice, and drawing on past examples of resistance and recovery from colonialist systems.

A number of members of the AI ethics community, from relational ethics researcher Abeba Birhane to the Partnership on AI, have called on machine learning practitioners to place the people most impacted by algorithmic systems at the center of development processes. The paper explores concepts similar to those in a recent paper about how to combat anti-Blackness in the AI community, Ruha Benjamin's concept of abolitionist tools, and ideas of emancipatory AI.

The authors also incorporate a sentiment expressed in an open letter Black members of the AI and computing community released last month during Black Lives Matter protests, which asks AI practitioners to recognize the ways their creations may support racism and systemic oppression in areas like housing, education, health care, and employment.

Go here to see the original:

DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism - VentureBeat

Posted in Ai

New Study Attempts to Improve Hate Speech Detection Algorithms – Unite.AI

Social media companies, especially Twitter, have long faced criticism for how they flag speech and decide which accounts to ban. The underlying problem almost always has to do with the algorithms that they use to monitor online posts. Artificial intelligence systems are far from perfect when it comes to this task, but there is work constantly being done to improve them.

Included in that work is a new study coming out of the University of Southern California that attempts to reduce certain errors that could result in racial bias.

One of the issues that doesn't receive as much attention has to do with algorithms that are meant to stop the spread of hateful speech but actually amplify racial bias. This happens when the algorithms fail to recognize context and end up flagging or blocking tweets from minority groups.

The biggest problem the algorithms have with context is that they are oversensitive to certain group-identifying terms, like "black," "gay," and "transgender." The algorithms treat these terms as markers of hate speech, but they are often used by members of those groups themselves, and the setting is important.

In an attempt to resolve this issue of context blindness, the researchers created a more context-sensitive hate speech classifier. The new algorithm is less likely to mislabel a post as hate speech.

The researchers developed the new algorithms with two new factors in mind: the context in regard to the group identifiers, and whether there are also other features of hate speech present in the post, like dehumanizing language.

Brendan Kennedy is a computer science Ph.D. student and co-lead author of the study, which was published on July 6 at ACL 2020.

"We want to move hate speech detection closer to being ready for real-world application," said Kennedy.

"Hate speech detection models often break, or generate bad predictions, when introduced to real-world data, such as social media or other online text data, because they are biased by the data on which they are trained to associate the appearance of social identifying terms with hate speech."

The reason the algorithms are oftentimes inaccurate is that they are trained on imbalanced datasets with extremely high rates of hate speech. Because of this, the algorithms fail to learn how to handle what social media actually looks like in the real world.
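To illustrate the two problems the article describes, here is a minimal sketch, emphatically not the USC authors' model: a baseline text classifier trained on a tiny, made-up corpus, with class weighting to offset label imbalance, followed by a check of how often benign sentences containing group identifiers get flagged. All example sentences and labels are illustrative placeholders.

```python
# A hedged sketch of a baseline hate speech classifier and an identifier-term check.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "those people are subhuman and should disappear",  # dehumanizing language
    "gay people do not deserve any rights",
    "I am proud to be a black woman",
    "my transgender friend gave a great talk today",
    "what a lovely afternoon for a walk",
    "great game last night",
]
train_labels = [1, 1, 0, 0, 0, 0]  # 1 = hate speech, 0 = not

clf = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(class_weight="balanced"),  # counteract skewed label rates
)
clf.fit(train_texts, train_labels)

# A context-blind model tends to over-flag benign, in-group uses of identifiers:
benign = ["proud to be gay", "black lives matter to me"]
print("flagged as hate:", clf.predict(benign))
```

With realistic training data, measuring the flag rate on known non-hate text containing identifiers, much as the researchers did with New York Times articles, is what exposes the over-sensitivity.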

Professor Xiang Ren is an expert in natural language processing.

"It is key for models to not ignore identifiers, but to match them with the right context," said Ren.

"If you teach a model from an imbalanced dataset, the model starts picking up weird patterns and blocking users inappropriately."

To test the algorithm, the researchers used a random sample of text from two social media sites that have a high rate of hate speech. The text was first hand-flagged by humans as prejudiced or dehumanizing. The state-of-the-art model was then measured against the researchers' own model for inappropriately flagging non-hate speech, through the use of 12,500 New York Times articles with no hate speech present. While the state-of-the-art models were able to achieve 77% accuracy in distinguishing hate from non-hate, the researchers' model reached 90%.

"This work by itself does not make hate speech detection perfect, that is a huge project that many are working on, but it makes incremental progress," said Kennedy.

"In addition to preventing social media posts by members of protected groups from being inappropriately censored, we hope our work will help ensure that hate speech detection does not do unnecessary harm by reinforcing spurious associations of prejudice and dehumanization with social groups."

See the original post here:

New Study Attempts to Improve Hate Speech Detection Algorithms - Unite.AI

Posted in Ai

Reducing bias in AI-based financial services – Brookings Institution

Artificial intelligence (AI) presents an opportunity to transform how we allocate credit and risk, and to create fairer, more inclusive systems. AI's ability to avoid the traditional credit reporting and scoring system that helps perpetuate existing bias makes it a rare, if not unique, opportunity to alter the status quo. However, AI can just as easily go in the other direction, exacerbating existing bias and creating cycles that reinforce biased credit allocation while making discrimination in lending even harder to find. Will we unlock the positive, worsen the negative, or maintain the status quo by embracing new technology?

This paper proposes a framework for evaluating the impact of AI in consumer lending. The goal is to incorporate new data and harness AI to expand credit to consumers who need it, on better terms than are currently provided. It builds on our existing system's dual goals of pricing financial services based on the true risk the individual consumer poses while aiming to prevent discrimination on prohibited grounds (e.g., race, gender, DNA, marital status, etc.). This paper also provides a set of potential trade-offs for policymakers, industry and consumer advocates, technologists, and regulators to debate: the tensions inherent in protecting against discrimination in a risk-based pricing system layered on top of a society with centuries of institutional discrimination.

AI is frequently discussed and ill-defined. Within the world of finance, AI represents three distinct concepts: big data, machine learning, and artificial intelligence itself. Each of these has recently become feasible thanks to advances in data generation, collection, usage, computing power, and programming. Advances in data generation are staggering: "90% of the world's data today were generated in the past two years," IBM boldly stated. To set the parameters of this discussion, below I briefly define each key term with respect to lending.

Big data fosters the inclusion of new and large-scale information not generally present in existing financial models. In consumer credit, for example, the typical credit-reporting/credit-scoring model is often referred to by its most common credit-scoring system, FICO; new information beyond that model can include data points such as payment of rent and utility bills, personal habits such as whether you shop at Target or Whole Foods and own a Mac or a PC, and social media data.

Machine learning (ML) occurs when computers optimize over data (standard and/or big data) based on relationships they find, without the traditional, more prescriptive algorithm. ML can determine new relationships that a person would never think to test: does the type of yogurt you eat correlate with your likelihood of paying back a loan? Whether these relationships have causal properties or are only proxies for other correlated factors are critical questions in determining the legality and ethics of using ML. However, they are not relevant to the machine in solving the equation.
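The yogurt point is easy to demonstrate. In the minimal sketch below, repayment in a synthetic dataset is made to depend partly on an arbitrary "yogurt brand" feature; the model assigns that feature real predictive weight without any notion of whether the relationship is causal. Every variable here is invented for illustration.

```python
# A hedged sketch: a model picks up whatever correlates with repayment,
# causal or not. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 5000
income = rng.normal(50_000, 15_000, n)
yogurt_brand = rng.integers(0, 3, n)  # three arbitrary brands: 0, 1, 2
# Repayment depends on income AND, spuriously, on yogurt brand:
repaid = (
    income / 100_000 + 0.1 * yogurt_brand + rng.normal(0, 0.3, n) > 0.6
).astype(int)

model = RandomForestClassifier(n_estimators=100).fit(
    np.column_stack([income, yogurt_brand]), repaid
)
# The yogurt feature earns a nonzero share of the model's attention:
print(dict(zip(["income", "yogurt_brand"], model.feature_importances_)))
```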

What constitutes true AI is still being debated, but for purposes of understanding its impact on the allocation of credit and risk, let's use the term AI to mean the inclusion of big data, machine learning, and the next step, when ML becomes AI. One bank executive helpfully defined AI by contrasting it with the status quo: "There's a significant difference between AI, which to me denotes machine learning and machines moving forward on their own, versus auto-decisioning, which is using data within the context of a managed decision algorithm."

America's current legal and regulatory structure for protecting against discrimination and enforcing fair lending is not well equipped to handle AI. The foundation is a set of laws from the 1960s and 1970s (the Equal Credit Opportunity Act of 1974, the Truth in Lending Act of 1968, the Fair Housing Act of 1968, etc.) that were designed for a time with almost exactly the opposite problems we face today: too few sources of standardized information on which to base decisions, and too little credit being made available. Those conditions allowed rampant discrimination by loan officers, who could simply deny people because they didn't look creditworthy.

Today, we face an overabundance of poor-quality credit (high interest rates, fees, abusive debt traps) and concerns over the use of too many sources of data that can hide as proxies for illegal discrimination. The law makes it illegal to use gender to determine credit eligibility or pricing, but countless proxies for gender exist, from the type of deodorant you buy to the movies you watch.

The key concept used to police discrimination is that of disparate impact. For a deep dive into how disparate impact works with AI, you can read my previous work on this topic. For this article, it is important to know that disparate impact is defined by the Consumer Financial Protection Bureau as occurring when "a creditor employs facially neutral policies or practices that have an adverse effect or impact on a member of a protected class unless it meets a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact."

The second half of the definition gives lenders the ability to use metrics that may correlate with protected-class attributes so long as the use meets a "legitimate business need," and there is no other way to meet that need with less disparate impact. A set of existing metrics, including income, credit scores (FICO), and the data used by the credit reporting bureaus, has been deemed acceptable despite having substantial correlation with race, gender, and other protected classes.
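The arithmetic of a first-pass disparate impact screen is simple. The sketch below shows one common rule of thumb, the "four-fifths" adverse impact ratio borrowed from employment law; the counts, the threshold's application to lending, and the group labels are illustrative assumptions, not part of the paper's framework.

```python
# A hedged sketch: compare approval rates across groups as a screen for
# disparate impact. All counts are hypothetical.
def adverse_impact_ratio(approved_protected, applied_protected,
                         approved_reference, applied_reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    rate_protected = approved_protected / applied_protected
    rate_reference = approved_reference / applied_reference
    return rate_protected / rate_reference

ratio = adverse_impact_ratio(280, 500, 720, 1000)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.78 in this example
if ratio < 0.8:  # the four-fifths rule of thumb
    print("facially neutral policy may warrant a disparate impact review")
```

A ratio below the threshold does not establish illegality; under the definition above, the lender may still show a legitimate business need with no less-disparate alternative.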

For example, consider how deeply correlated existing FICO credit scores are with race. To start, it is telling how little data is made publicly available on how these scores vary by race. The credit bureau Experian is eager to publicize one of its versions of FICO scores by people's age, income, and even what state or city they live in, but not by race. However, federal law requires lenders to collect data on race for home mortgage applications, so we do have access to some data there, and the differences are stark.

Among people trying to buy a home, generally a wealthier and older subset of Americans, white homebuyers have an average credit score 57 points higher than Black homebuyers and 33 points higher than Hispanic homebuyers. The distribution of credit scores is also sharply unequal: More than 1 in 5 Black individuals have FICOs below 620, as do 1 in 9 among the Hispanic community, while the same is true for only 1 out of every 19 white people. Higher credit scores allow borrowers to access different types of loans and at lower interest rates. One suspects the gaps are even broader beyond those trying to buy a home.

If FICO were invented today, would it satisfy a disparate impact test? The conclusion of Rice and Swesnik in their law review article was clear: "Our current credit-scoring systems have a disparate impact on people and communities of color." The question is moot, because not only is FICO grandfathered, but it has also become one of the most important factors used by the financial ecosystem. I have described FICO as the out-of-tune oboe to which the rest of the financial orchestra tunes.

New data and algorithms are not grandfathered and are subject to the disparate impact test. The result is a double standard whereby new technology is often held to a higher standard to prevent bias than existing methods. This has the effect of tilting the field against new data and methodologies, reinforcing the existing system.

Explainability is another core tenet of our existing fair lending system that may work against AI adoption. Lenders are required to tell consumers why they were denied. Explaining the rationale provides a paper trail to hold lenders accountable should they be engaging in discrimination. It also provides the consumer with information that allows them to correct their behavior and improve their chances for credit. However, an AI's method of making decisions may lack explainability. As Federal Reserve Governor Lael Brainard described the problem: "Depending on what algorithms are used, it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did." To move forward and unlock AI's potential, we need a new conceptual framework.

To start, imagine a trade-off between accuracy (represented on the y-axis) and bias (represented on the x-axis). The first key insight is that the current system sits at the intersection of the axes we are trading off: the graph's origin. Any potential change needs to be considered against the status quo, not against an ideal world of no bias or complete accuracy. This forces policymakers to consider whether the adoption of a new system that contains bias, but less than the current system, is an advance. It may be difficult to embrace an inherently biased framework, but it is important to acknowledge that the status quo is already highly biased. Thus, rejecting new technology because it contains some level of bias does not mean we are protecting the system against bias. To the contrary, it may mean that we are allowing a more biased system to perpetuate.
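The framework reduces to a simple decision rule on two deltas measured against the status quo. The minimal sketch below encodes the four quadrants discussed in the remainder of the paper; the function name, sign conventions, and example numbers are illustrative assumptions.

```python
# A hedged sketch of the accuracy-bias plane, with the status quo at the origin.
def quadrant(delta_accuracy: float, delta_bias: float) -> str:
    """Place a candidate system on the framework's four quadrants.

    delta_accuracy > 0: more predictive than the status quo.
    delta_bias > 0: more biased than the status quo.
    """
    if delta_accuracy > 0 and delta_bias <= 0:
        return "I: more accurate, less biased -- the apparent win-win"
    if delta_accuracy > 0 and delta_bias > 0:
        return "II: more accurate, more biased -- the hard policy trade-off"
    if delta_accuracy <= 0 and delta_bias > 0:
        return "III: less accurate, more biased -- reject"
    return "IV: less accurate, less biased -- fairness at a cost to accuracy"

print(quadrant(delta_accuracy=0.03, delta_bias=-0.10))  # lands in quadrant I
```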

In this framework, the bottom-left corner (quadrant III) is one where AI results in a system that is both more discriminatory and less predictive. Regulation and commercial incentives should work together against this outcome. It may be difficult to imagine incorporating new technology that reduces accuracy, but it is not inconceivable, particularly given industry incentives to prioritize decision-making and loan-generation speed over actual loan performance (as in the subprime mortgage crisis). Policy could also move in this direction through the introduction of inaccurate data that confuses an AI into thinking it has increased accuracy when it has not. The existing credit reporting system is rife with errors: 1 out of every 5 people may have a material error on their credit report. New errors occur frequently; consider the recent mistake by one student loan servicer that incorrectly reported 4.8 million Americans as being late on paying their student loans when in fact the government had suspended payments as part of COVID-19 relief.

The data used in the real world are not as pure as those used in model testing. Market incentives alone are not enough to produce perfect accuracy; they can even promote inaccuracy, given the cost of correcting data and the demand for speed and quantity. As one study from the Federal Reserve Bank of St. Louis found, "Credit score has not acted as a predictor of either true risk of default of subprime mortgage loans or of the subprime mortgage crisis." Whatever the cause, regulators, industry, and consumer advocates ought to be aligned against the adoption of AI that moves in this direction.

The top right (quadrant I) represents incorporation of AI that increases accuracy and reduces bias. At first glance, this should be a win-win. Industry allocates credit more accurately, increasing efficiency. Consumers enjoy increased credit availability on more accurate terms and with less bias than under the existing status quo. This optimistic scenario is quite possible, given that a significant source of existing bias in lending stems from the information used. As the Bank Policy Institute pointed out in its discussion draft on the promises of AI: "This increased accuracy will benefit borrowers who currently face obstacles obtaining low-cost bank credit under conventional underwriting approaches."

One prominent example of a win-win system is the use of cash-flow underwriting. This new form of underwriting uses an applicant's actual bank balance over some time frame (often one year), as opposed to the current FICO-based model, which relies heavily on whether a person had credit in the past and, if so, whether they were ever in delinquency or default. Preliminary analysis by FinRegLab shows this underwriting system outperforms traditional FICO on its own, and is even more predictive when combined with FICO.
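As a minimal sketch of the idea, assuming a year of daily balances per applicant, cash-flow underwriting amounts to summarizing the balance history into features and fitting them, optionally alongside a FICO score, against repayment outcomes. The features, data, and model below are illustrative assumptions, not FinRegLab's methodology.

```python
# A hedged sketch of cash-flow underwriting on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cash_flow_features(daily_balances: np.ndarray) -> np.ndarray:
    """Summarize a year of balances: level, volatility, and days overdrawn."""
    return np.array([
        daily_balances.mean(),        # average balance
        daily_balances.std(),         # balance volatility
        (daily_balances < 0).mean(),  # share of days overdrawn
    ])

rng = np.random.default_rng(0)
n = 200
balances = rng.normal(1500, 800, size=(n, 365))  # hypothetical applicants
X_cash = np.array([cash_flow_features(b) for b in balances])
fico = rng.normal(690, 60, size=(n, 1))          # hypothetical scores
# Synthetic outcome loosely tied to balance level; 1 = repaid:
repaid = (balances.mean(axis=1) + rng.normal(0, 400, n) > 1200).astype(int)

# Combine cash-flow features with FICO, as the preliminary analysis suggests.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(np.hstack([X_cash, fico]), repaid)
```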

Cash-flow analysis does have some level of bias, as income and wealth are correlated with race, gender, and other protected classes. However, because income and wealth are acceptable existing factors, the current fair-lending system should have little problem allowing a smarter use of that information. Ironically, this new technology meets the test because it uses data that is already grandfathered.

That is not the case for other AI advancements. New AI may increase credit access on more affordable terms than the current system provides and still not be allowable. Just because AI has produced a system that is less discriminatory does not mean it passes fair lending rules. There is no legal standard that allows for illegal discrimination in lending because it is less biased than prior discriminatory practices. As a 2016 Treasury Department study concluded, while "data-driven algorithms may expedite credit assessments and reduce costs," they also carry "the risk of disparate impact in credit outcomes and the potential for fair lending violations."

For example, consider an AI that is able, with a good degree of accuracy, to detect a decline in a person's health, say through spending patterns (doctor's co-pays), internet searches (cancer treatment), and joining new Facebook groups (living with cancer). Medical problems are a strong indicator of future financial distress. Do we want a society where, if you get sick, or if a computer algorithm thinks you are ill, your terms of credit worsen? That may be a less biased system than we currently have, but not one that policymakers and the public would support. All of a sudden, what seems like a win-win may not actually be so desirable.

AI that increases accuracy but introduces more bias gets a lot of attention, deservedly so. This scenario, represented in the top left (quadrant II) of this framework, can range from the introduction of data that are clear proxies for protected classes (watching Lifetime or BET on TV) to information or techniques that at first glance do not seem biased but actually are. There are strong reasons to believe that AI will naturally find proxies for race, given the large income and wealth gaps between races. As Daniel Schwartz put it in his article on AI and proxy discrimination: "Unintentional proxy discrimination by AIs is virtually inevitable whenever the law seeks to prohibit discrimination on the basis of traits containing predictive information that cannot be captured more directly within the model by non-suspect data."

Proxy discrimination by AI is even more concerning because the machines are likely to uncover proxies that people had not previously considered. Think about the potential to use whether a person owns a Mac or a PC, a factor that is correlated with race and also with whether people pay back loans, even controlling for race.

Duke professor Manju Puri and her co-authors were able to build a model using non-standard data that found substantial predictive power, for whether a loan would be repaid, in whether a person's email address contained their name. Initially, that may seem like a non-discriminatory variable within a person's control. However, economists Marianne Bertrand and Sendhil Mullainathan have shown that African Americans with names heavily associated with their race face substantial discrimination compared to race-blind identification. Hence, it is quite possible that there is a disparate impact in using what seems like an innocuous variable, such as whether your name is part of your email address.
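One common screen for this kind of proxy risk is to test whether the candidate features themselves predict the protected attribute: if they do, they can stand in for it even when the attribute is excluded from the model. The sketch below is a minimal, hedged illustration on synthetic data; the feature names and correlation strengths are invented.

```python
# A hedged sketch of a proxy screen: can candidate features predict the
# protected class? An AUC well above 0.5 flags potential proxies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, n)  # synthetic protected-class indicator
# "Name appears in email address," made correlated with the class here:
name_in_email = (rng.random(n) < np.where(protected == 1, 0.7, 0.4)).astype(int)
device = rng.integers(0, 2, n)     # e.g., Mac vs. PC, uncorrelated in this toy data
X = np.column_stack([name_in_email, device])

auc = cross_val_score(LogisticRegression(), X, protected,
                      cv=5, scoring="roc_auc").mean()
print(f"protected-attribute AUC from candidate features: {auc:.2f}")
```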

The question for policymakers is how much to prioritize accuracy at the cost of bias against protected classes. As a matter of principle, I would argue that our starting point is a heavily biased system, and we should not tolerate the introduction of increased bias. There is a slippery-slope question of what to do if an AI produced substantial increases in accuracy with the introduction of only slightly more bias. After all, our current system does a surprisingly poor job of allocating many basic forms of credit and already tolerates a substantial amount of bias.

Industry is likely to advocate for the inclusion of this type of AI, while consumer advocates are likely to oppose its introduction. Current law is inconsistent in its application: certain groups of people are afforded strong anti-discrimination protection for certain financial products, but this varies across products. Take gender, for example. It is blatantly illegal under fair lending laws to use gender, or any proxy for gender, in allocating credit. However, gender is a permitted basis for price differences in auto insurance in most states. In fact, for brand-new drivers, gender may be the single biggest factor used in determining price, absent any driving record. America lacks a uniform set of rules on what constitutes discrimination and which attributes cannot be discriminated against. The lack of uniformity is compounded by the division of responsibility between federal and state governments and, within government, between the regulatory and judicial systems for detecting and punishing crime.

The final set of trade-offs involves increases in fairness but reductions in accuracy (quadrant IV, in the bottom right). An example is an AI that uses information about a person's genome to determine their risk of cancer. This type of genetic profiling would improve accuracy in pricing certain types of insurance, but it violates norms of fairness. In this instance, policymakers decided that the use of that information is not acceptable and have made it illegal. Returning to the role of gender, some states have restricted the use of gender in car insurance. California most recently joined the list of states no longer allowing it, which means that pricing will be fairer but possibly less accurate.
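
The gender-in-auto-insurance example can be sketched the same way: dropping the sensitive attribute from a model typically shrinks the gap in outcomes between groups at some cost in accuracy. The sketch below uses synthetic data and invented effect sizes purely to illustrate that trade-off.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def acc_and_gap(X, y, group):
    """Fit a classifier; return accuracy and the prediction-rate gap across groups."""
    pred = LogisticRegression().fit(X, y).predict(X)
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return (pred == y).mean(), gap

rng = np.random.default_rng(1)
n = 50_000
gender = rng.integers(0, 2, n)   # the sensitive attribute
risk = rng.normal(size=n)        # a legitimate, gender-independent risk signal

# Claims depend on true risk and, more weakly, on gender.
claim = (rng.random(n) < 1 / (1 + np.exp(-(risk + 0.5 * gender)))).astype(int)

with_g = acc_and_gap(np.column_stack([risk, gender]), claim, gender)
without_g = acc_and_gap(risk.reshape(-1, 1), claim, gender)
print("with gender:    acc=%.3f gap=%.3f" % with_g)
print("without gender: acc=%.3f gap=%.3f" % without_g)
# Expect slightly lower accuracy but a much smaller gap without gender.
```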

Industry pressures tend to fight against these types of restrictions and press for greater accuracy. Societal norms of fairness may demand trade-offs that diminish accuracy to protect against bias. These trade-offs are best handled by policymakers before the widespread introduction of the information in question, as was the case with genetic data. Restricting the use of such information, however, does not make the problem go away. To the contrary, AI's ability to uncover hidden proxies for that data may exacerbate problems wherever society attempts to restrict data usage on equity grounds. Problems that appear solved by prohibitions simply migrate into the algorithmic world, where they reappear.

The underlying takeaway for this quadrant is that social movements to expand protection and reduce discrimination are likely to become more difficult as AIs find workarounds. As long as there are substantial differences in observed outcomes, machines will uncover those differing outcomes through new sets of variables that may contain new information or may simply be statistically effective proxies for protected classes.

The status quo is not something society should uphold as nirvana. Our current financial system suffers not only from centuries of bias but also from systems that are themselves not nearly as predictive as often claimed. The data explosion, coupled with the significant growth in ML and AI, offers a tremendous opportunity to rectify substantial problems in the current system. Existing anti-discrimination frameworks are ill-suited to this opportunity. Refusing to hold new technology to a higher standard than the status quo results in an unstated deference to the already-biased current system. However, simply opening the floodgates under a rule of "can you do better than today?" opens a Pandora's box of new problems.

America's fractured regulatory system, with differing roles and responsibilities across financial products and levels of government, only makes difficult problems harder. Lacking uniform rules and coherent frameworks, technological adoption will likely be slower among existing entities, setting up even greater opportunities for new entrants. A broader conversation about how much bias we are willing to tolerate for the sake of improvement over the status quo would benefit all parties. That requires creating more political space for all sides to engage in a difficult and honest conversation. The current political moment is ill-suited to that conversation, but I suspect that AI advancement will not wait until America is ready to confront these problems.

Original post:

Reducing bias in AI-based financial services - Brookings Institution

Posted in Ai

Beyond the AI hype cycle: Trust and the future of AI – MIT Technology Review

There's no shortage of promises when it comes to AI. Some say it will solve all problems, while others warn it will bring about the end of the world as we know it. Both positions regularly play out in Hollywood plotlines like Westworld, Altered Carbon, Minority Report, Her, and Ex Machina. Those stories are compelling because they require us, as creators and consumers of AI technology, to decide whether we trust an AI system or, more precisely, trust what the system is doing with the information it has been given.

This content was produced by Nuance. It was not written by MIT Technology Review's editorial staff.

Joe Petro is CTO at Nuance.

Those stories also provide an important lesson for those of us who spend our days designing and building AI applications: trust is a critical factor in determining the success of an AI application. Who wants to interact with a system they don't trust?

Even as a nascent technology, AI is incredibly complex and powerful, delivering benefits by performing computations and detecting patterns in huge data sets with speed and efficiency. But that power, combined with "black box" perceptions of AI and its appetite for user data, introduces a lot of variables, unknowns, and possible unintended consequences. Hidden within practical applications of AI is the fact that trust can have a profound effect on the user's perception of the system, as well as on the associated companies, vendors, and brands that bring these applications to market.

Advancements such as ubiquitous cloud and edge computational power make AI more capable and effective while making it easier and faster to build and deploy applications. Historically, the focus has been on software development and user-experience design. But it's no longer a case of simply designing a system that solves for x. It is our responsibility to create an engaging, personalized, frictionless, and trustworthy experience for each user.

The ability to do this successfully is largely dependent on user data. System performance, reliability, and user confidence in AI model output are affected as much by the quality of the model design as by the data going into it. Data is the fuel that powers the AI engine, converting the potential energy of user data into kinetic energy in the form of actionable insights and intelligent output. Just as filling a Formula 1 race car with poor or tainted fuel would diminish performance and the driver's ability to compete, an AI system trained with incorrect or inadequate data can produce inaccurate or unpredictable results that break user trust. Once broken, trust is hard to regain. That is why rigorous data-stewardship practices by AI developers and vendors are critical for building effective AI models, as well as for creating customer acceptance, satisfaction, and retention.

Responsible data stewardship establishes a chain of trust that extends from consumers to the companies collecting user data and to those of us building AI-powered systems. It's our responsibility to know and understand privacy laws and policies and to consider security and compliance during the primary design phase. We must have a deep understanding of how the data is used and who has access to it. We also need to detect and eliminate hidden biases in the data through comprehensive testing.

Treat user data as sensitive intellectual property (IP). It is the proprietary source code used to build AI models that solve specific problems, create bespoke experiences, and achieve targeted outcomes. This data is derived from personal user interactions, such as conversations between consumers and call agents, doctors and patients, and banks and customers. It is sensitive because it creates intimate, highly detailed digital user profiles based on private financial, health, biometric, and other information.

User data needs to be protected and used as carefully as any other IP, especially in AI systems for highly regulated industries such as health care and financial services. Doctors use AI speech recognition, natural-language understanding, and conversational virtual agents created with patient health data to document care and access diagnostic guidance in real time. In banking and financial services, AI systems process millions of customer transactions and use biometric voiceprint, eye-movement, and behavioral data (for example, how fast you type, the words you use, which hand you swipe with) to detect possible fraud or authenticate user identities.

Health-care providers and businesses alike are creating their own branded digital front door that provides efficient, personalized user experiences through SMS, web, phone, video, apps, and other channels. Consumers also are opting for time-saving real-time digital interactions. Health-care and commercial organizations rightfully want to control and safeguard their patient and customer relationships and data in each method of digital engagement to build brand awareness, personalized interactions, and loyalty.

Every AI vendor and developer needs to be aware not only of the inherently sensitive nature of user data but also of the need to operate with high ethical standards to build and maintain the required chain of trust.

Here are key questions to consider:

Who has access to the data? Have a clear and transparent policy that includes strict protections such as limiting access to certain types of data, and prohibiting resale or third-party sharing. The same policies should apply to cloud providers or other development partners.

Where is the data stored, and for how long? Ask where the data lives (cloud, edge, device) and how long it will be kept. The implementation of the European Union's General Data Protection Regulation, the California Consumer Privacy Act, and the prospect of additional state and federal privacy protections should make data storage and retention practices top of mind during AI development.

How are benefits defined and shared? Be explicit about who benefits from the use of the data, and how. AI applications must also be tested with diverse data sets that reflect the intended real-world applications, to eliminate unintentional bias and ensure reliable results.

How does the data manifest within the system? Understand how data will flow through the system. Is sensitive data processed by a neural net essentially as a series of 0s and 1s, or is it stored in its original form, complete with medical or personally identifying information? Establish and follow appropriate data retention and deletion policies for each type of sensitive data; a minimal sketch of such a policy follows this list.

Who can realize commercial value from user data? Consider the potential consequences of data-sharing for purposes outside the original scope or source of the data. Account for possible mergers and acquisitions, possible follow-on products, and other factors.

Is the system secure and compliant? Design and build for privacy and security first. Consider how transparency, user consent, and system performance could be affected throughout the product or service lifecycle.
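
On the retention question above, even a simple, explicit policy beats an implicit one. Here is a minimal sketch; the categories and retention periods are hypothetical, illustrative values only.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per data category -- illustrative values only.
RETENTION = {
    "voice_biometrics": timedelta(days=365),
    "call_transcripts": timedelta(days=90),
    "usage_logs": timedelta(days=30),
}

def purge_expired(records, now=None):
    """Keep only records still within their category's retention period.

    Each record is a dict with 'category' and 'created_at' (an aware datetime).
    Records with an unknown category are dropped -- the policy fails closed.
    """
    now = now or datetime.now(timezone.utc)
    kept = []
    for rec in records:
        limit = RETENTION.get(rec["category"])
        if limit is not None and now - rec["created_at"] <= limit:
            kept.append(rec)
    return kept
```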

Biometric applications help prevent fraud and simplify authentication. HSBC's VoiceID voice-biometrics system has successfully prevented the theft of nearly £400 million (about $493 million) by phone scammers in the UK. It compares a person's voiceprint against thousands of individual speech characteristics in an established voice record to confirm a user's identity. Other companies use voice biometrics to validate the identities of remote call-center employees before they can access proprietary systems and data. The need for such measures is growing as consumers conduct more digital and phone-based interactions.
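
Under the hood, systems like this typically reduce a voice sample to a numeric embedding and compare it against an enrolled template. A minimal sketch of that comparison step follows; the speaker-encoder model that produces the embeddings, and the threshold value, are assumptions, not details of HSBC's system.

```python
import numpy as np

def is_same_speaker(sample_emb, enrolled_emb, threshold=0.8):
    """Accept the caller if their voiceprint embedding is close enough
    (by cosine similarity) to the enrolled template."""
    cos = float(np.dot(sample_emb, enrolled_emb) /
                (np.linalg.norm(sample_emb) * np.linalg.norm(enrolled_emb)))
    return cos >= threshold

# Usage: embeddings would come from a speaker-encoder model (not shown here).
enrolled = np.random.default_rng(2).normal(size=256)
print(is_same_speaker(enrolled + 0.05, enrolled))  # near-identical voiceprint -> True
```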

Intelligent applications deliver secure, personalized, digital-first customer service. A global telecommunications company is using conversational AI to create consistent, secure, and personalized customer experiences across its large and diverse brand portfolio. With customers increasingly engaging across digital channels, the company looked to technology partners to expand its own in-house expertise while ensuring it would retain control of its data in deploying a virtual assistant for customer service.

A top-three retailer uses voice-powered virtual-assistant technology to let shoppers upload photos of items they've seen offline, then presents items for them to consider buying based on those images.

Ambient AI-powered clinical applications improve health-care experiences while alleviating physician burnout. EmergeOrtho in North Carolina is using the Nuance Dragon Ambient eXperience (DAX) application to transform how its orthopedic practices across the state engage with patients and document care. The ambient clinical intelligence application accurately captures each doctor-patient interaction in the exam room or on a telehealth call, then automatically updates the patient's health record. Patients get the doctor's full attention, while the application streamlines the burnout-causing electronic paperwork physicians must complete to get paid for delivering care.

AI-driven diagnostic imaging systems ensure that patients receive necessary follow-up care. Radiologists at multiple hospitals use AI and natural language processing to automatically identify and extract recommendations for follow-up exams for suspected cancers and other diseases seen in X-rays and other images. The same technology can help manage a surge of backlogged and follow-up imaging as covid-19 restrictions ease, allowing providers to schedule procedures, begin revenue recovery, and maintain patient care.
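
As a toy illustration of the extraction step, a pattern-based pass over report text can flag candidate follow-up recommendations. Production systems use trained NLP models rather than regexes, and the sample report below is invented.

```python
import re

# Flag sentences that pair a recommendation verb with follow-up language.
FOLLOWUP = re.compile(
    r"\b(recommend(?:ed|s)?|advise[ds]?|suggest(?:ed|s)?)\b[^.]*"
    r"\b(follow[- ]?up|repeat|surveillance)\b[^.]*\.",
    re.IGNORECASE,
)

def find_followups(report_text):
    """Return sentence fragments that appear to recommend follow-up imaging."""
    return [m.group(0).strip() for m in FOLLOWUP.finditer(report_text)]

report = ("There is a 6 mm pulmonary nodule in the right upper lobe. "
          "Recommend follow-up chest CT in 6 months to assess stability.")
print(find_followups(report))
# ['Recommend follow-up chest CT in 6 months to assess stability.']
```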

As digital transformation accelerates, we must solve the challenges we face today while preparing for an abundance of future opportunities. At the heart of that effort is the commitment to building trust and data stewardship into our AI development projects and organizations.

See more here:

Beyond the AI hype cycle: Trust and the future of AI - MIT Technology Review

Posted in Ai

Teaming with AI: How Microsoft is taking on Zoom on virtual background front – The Financial Express

When governments the world over announced lockdowns, the hunt for the best collaboration and video-calling apps began for most users. There were video-calling apps for fun, such as Houseparty, and then there were business apps. But competition in the space was limited. Zoom captured a large share of the market with its user interface and accessible features. The troubles that followed, as the company struggled to keep up with demand and drew criticism for routing some traffic through servers in China, gave the likes of Microsoft and Google space to add more users. But as work from home becomes the norm and people get attuned to living with video-calling apps, companies are incorporating more features to keep their user base. One of the biggest highlights for all these apps has been the use of artificial intelligence and machine learning to attract users. The latest to join in is Microsoft.

What has Microsoft introduced? Microsoft last week announced features that let users enable "Together mode," in which participants appear to sit together in a shared environment. So, with a virtual background, you can see everyone sitting right in front of you in a classroom, library, or coffee-house setting, making the whole experience more personal. Microsoft is also working on a feature that lets you adjust the brightness and other parameters of your video.

How is it different from virtual backgrounds? Zoom has had virtual backgrounds for a long time now. Microsoft is a late entrant, but the concept is the same. When Zoom applies a virtual background, it separates the foreground of the image in order to superimpose another background behind it: a machine-learning algorithm identifies the human component of the frame and replaces the rest. The technology is not perfect; move the camera too fast and it breaks down. In this case, Microsoft uses the same kind of segmentation to extract you from the image and place you in a shared room alongside friends and colleagues sitting behind a desk or a table. That way you can see all the participants in one window.
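
The compositing step both products rely on is simple once a person mask exists; the hard part is the segmentation model that produces the mask. Here is a minimal sketch of the blend, assuming such a mask is available (the mask-producing model is not shown, and neither vendor has published its exact pipeline).

```python
import numpy as np

def composite(frame, background, person_mask):
    """Blend a camera frame onto a virtual background.

    frame, background: HxWx3 uint8 images of the same size.
    person_mask: HxW float array in [0, 1], near 1.0 where the
    segmentation model believes a person is present.
    """
    alpha = person_mask[..., None]  # broadcast the mask over the color channels
    out = alpha * frame.astype(float) + (1.0 - alpha) * background.astype(float)
    return out.astype(np.uint8)
```

Together mode presumably runs this kind of cut-out for every participant and pastes each one into the shared scene.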

What is Google Meet doing? Google is using AI differently. Instead of applying it to video, it uses the technology to cut out background noise. This active noise filtering means that you hear only the speaker's voice, while every other sound is filtered out.
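
Google has not published the details of Meet's denoiser (it is reportedly ML-based), but the classical idea it improves on, spectral gating, is easy to sketch: estimate a per-frequency noise floor, then attenuate short-time spectrum bins that fall below it. The frame sizes and thresholds below are arbitrary choices for illustration.

```python
import numpy as np

def spectral_gate(audio, sr, frame=1024, hop=512, noise_secs=0.5, factor=1.5):
    """Crude noise suppression: zero STFT bins below an estimated noise floor.

    Assumes the first `noise_secs` of `audio` contain background noise only,
    which is used to estimate the per-frequency noise profile.
    """
    win = np.hanning(frame)
    starts = range(0, len(audio) - frame, hop)
    spec = np.array([np.fft.rfft(audio[s:s + frame] * win) for s in starts])

    n_noise = max(1, int(noise_secs * sr / hop))
    noise_floor = np.abs(spec[:n_noise]).mean(axis=0)

    spec *= (np.abs(spec) > factor * noise_floor)   # gate quiet bins to zero

    out = np.zeros(len(audio))                       # overlap-add resynthesis
    for i, s in enumerate(starts):
        out[s:s + frame] += np.fft.irfft(spec[i], n=frame) * win
    return out
```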

Techsplained @FE features weekly on Mondays. For queries, mail us at ishaan.gera@expressindia.com

Follow this link:

Teaming with AI: How Microsoft is taking on Zoom on virtual background front - The Financial Express

Posted in Ai

Dermatology researchers: AI tools soon to be ‘tightly integrated into daily clinical practice’ – AI in Healthcare

Lead author Ernest Lee, MD, PhD, and colleagues found that many studies in the recent literature focus on image analysis and classification of skin lesions, no surprise given that digital photography is by now ubiquitous in the field.

Here they comment that machine learning is a natural fit for translation into dermatology because the specialty is heavily reliant on visual evaluation and pattern recognition.

However, the researchers also found machine learning is being applied to everything from studying the genetic basis of skin diseases to identifying associations between comorbidities, and to designing and predicting patient responses to drug therapies.

"The simultaneous rise of machine learning and next-generation sequencing in particular represents a golden opportunity to advance precision dermatology, and multidisciplinary collaborations between machine learning experts, biologists and dermatologists will be required to expand the scope of this research," Lee and co-authors write.
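
The review surveys approaches rather than prescribing one, but the workhorse for lesion-image classification in this literature is transfer learning from a pretrained CNN. A minimal sketch along those lines follows; the dataset, class count, and hyperparameters are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # e.g., the seven lesion categories of the public HAM10000 dataset

# Start from ImageNet features and retrain only the classification head.
model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new, trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of lesion images (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```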

Read the original post:

Dermatology researchers: AI tools soon to be 'tightly integrated into daily clinical practice' - AI in Healthcare

Posted in Ai
