Analyzing Impacts of Covid-19 on Cognitive System and Artificial Intelligence (AI) Systems Market Effects, Aftermath and Forecast To 2026 – Cole of…

The global Cognitive System and Artificial Intelligence (AI) Systems market report compiles the key statistical evidence for the Cognitive System and Artificial Intelligence (AI) Systems industry and offers readers added value by guiding them through the obstacles surrounding the market. The study covers a comprehensive set of factors such as global distribution, manufacturers, market size, and the market forces that shape global contributions. In addition, the study turns its attention to an in-depth competitive landscape, defined growth opportunities, market share by product type and application, the key companies responsible for production, and the strategies they employ.

This market intelligence and 2026 forecast report for the Cognitive System and Artificial Intelligence (AI) Systems industry analyzes historical data gathered from reliable sources and sets out a projected growth trajectory for the market. The report also covers comprehensive market revenue streams along with growth patterns, analytics focused on market trends, and the overall volume of the market.

Download PDF Sample of Cognitive System and Artificial Intelligence (AI) Systems Market report @ https://hongchunresearch.com/request-a-sample/25074

The study covers the following key players: Brainasoft, Brighterion, Astute Solutions, KITT.AI, IFlyTek, Google, Megvii Technology, NanoRep (LogMeIn), IDEAL.com, Intel, Salesforce, Albert Technologies, Microsoft, Ada Support, Ipsoft, SAP, Yseop, IBM, Wipro, H2O.ai, Baidu.

Moreover, the Cognitive System and Artificial Intelligence (AI) Systems report describes the market division based on parameters such as geographical distribution, product type, and application. The segmentation further clarifies the regional distribution of the Cognitive System and Artificial Intelligence (AI) Systems market, business trends, potential revenue sources, and upcoming market opportunities.

Market segment by type, the Cognitive System and Artificial Intelligence (AI) Systems market can be split into: On-Premise, Cloud-based.

Market segment by application, the Cognitive System and Artificial Intelligence (AI) Systems market can be split into: Voice Processing, Text Processing, Image Processing.

The Cognitive System and Artificial Intelligence (AI) Systems market study further highlights the segmentation of the industry by global distribution. The report focuses on the regions of North America, Europe, Asia, and the Rest of the World in terms of developing business trends, preferred market channels, investment feasibility, long-term investments, and environmental analysis. The report also investigates production capacity, product price, profit streams, the supply-to-demand ratio, production and market growth rates, and a projected growth forecast.

In addition, the Cognitive System and Artificial Intelligence (AI) Systems market study covers several factors such as market status, key market trends, growth forecasts, and growth opportunities. Furthermore, it analyzes the challenges faced by the market on a global and regional basis. The study also weighs a number of opportunities and emerging trends by considering their impact on the global scale and their potential to capture a majority of the market share.

The study draws on a variety of analytical tools such as SWOT analysis and Porter's Five Forces analysis, coupled with primary and secondary research methodologies. It covers all the bases surrounding the Cognitive System and Artificial Intelligence (AI) Systems industry, exploring the competitive nature of the market together with a regional analysis.

Brief about Cognitive System and Artificial Intelligence (AI) Systems Market Report @ https://hongchunresearch.com/report/cognitive-system-and-artificial-intelligence-ai-systems-market-25074

Some Point of Table of Content:

Chapter One: Cognitive System & Artificial Intelligence(AI) Systems Market Overview

Chapter Two: Global Cognitive System & Artificial Intelligence(AI) Systems Market Landscape by Player

Chapter Three: Players Profiles

Chapter Four: Global Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue (Value), Price Trend by Type

Chapter Five: Global Cognitive System & Artificial Intelligence(AI) Systems Market Analysis by Application

Chapter Six: Global Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import by Region (2014-2019)

Chapter Seven: Global Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue (Value) by Region (2014-2019)

Chapter Eight: Cognitive System & Artificial Intelligence(AI) Systems Manufacturing Analysis

Chapter Nine: Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter Ten: Market Dynamics

Chapter Eleven: Global Cognitive System & Artificial Intelligence(AI) Systems Market Forecast (2019-2026)

Chapter Twelve: Research Findings and Conclusion

Chapter Thirteen: Appendix

Check discount @ https://hongchunresearch.com/check-discount/25074

List of Tables and Figures

Figure Cognitive System & Artificial Intelligence(AI) Systems Product Picture
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production and CAGR (%) Comparison by Type
Table Profile of On-Premise
Table Profile of Cloud-based
Table Cognitive System & Artificial Intelligence(AI) Systems Consumption (Sales) Comparison by Application (2014-2026)
Table Profile of Voice Processing
Table Profile of Text Processing
Table Profile of Image Processing
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Market Size (Value) and CAGR (%) (2014-2026)
Figure United States Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Europe Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Germany Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure UK Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure France Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Italy Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Spain Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Russia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Poland Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure China Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Japan Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure India Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Southeast Asia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Malaysia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Singapore Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Philippines Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Indonesia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Thailand Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Vietnam Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Central and South America Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Brazil Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Mexico Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Colombia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Middle East and Africa Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Saudi Arabia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure United Arab Emirates Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Turkey Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Egypt Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure South Africa Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Nigeria Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Status and Outlook (2014-2026)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production by Player (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production Share by Player (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Share by Player in 2018
Table Cognitive System & Artificial Intelligence(AI) Systems Revenue by Player (2014-2019)
Table Cognitive System & Artificial Intelligence(AI) Systems Revenue Market Share by Player (2014-2019)
Table Cognitive System & Artificial Intelligence(AI) Systems Price by Player (2014-2019)
Table Cognitive System & Artificial Intelligence(AI) Systems Manufacturing Base Distribution and Sales Area by Player
Table Cognitive System & Artificial Intelligence(AI) Systems Product Type by Player
Table Mergers & Acquisitions, Expansion Plans
Table Brainasoft Profile
Table Brainasoft Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Brighterion Profile
Table Brighterion Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Astute Solutions Profile
Table Astute Solutions Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table KITT.AI Profile
Table KITT.AI Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table IFlyTek Profile
Table IFlyTek Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Google Profile
Table Google Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Megvii Technology Profile
Table Megvii Technology Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table NanoRep(LogMeIn) Profile
Table NanoRep(LogMeIn) Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table IDEAL.com Profile
Table IDEAL.com Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Intel Profile
Table Intel Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Salesforce Profile
Table Salesforce Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Albert Technologies Profile
Table Albert Technologies Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Microsoft Profile
Table Microsoft Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Ada Support Profile
Table Ada Support Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Ipsoft Profile
Table Ipsoft Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table SAP Profile
Table SAP Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Yseop Profile
Table Yseop Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table IBM Profile
Table IBM Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Wipro Profile
Table Wipro Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table H2O.ai Profile
Table H2O.ai Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Baidu Profile
Table Baidu Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production by Type (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production Market Share by Type (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Market Share by Type in 2018
Table Global Cognitive System & Artificial Intelligence(AI) Systems Revenue by Type (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Revenue Market Share by Type (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Revenue Market Share by Type in 2018
Table Cognitive System & Artificial Intelligence(AI) Systems Price by Type (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Growth Rate of On-Premise (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Growth Rate of Cloud-based (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption by Application (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption Market Share by Application (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption of Voice Processing (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption of Text Processing (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption of Image Processing (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption by Region (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption Market Share by Region (2014-2019)
Table United States Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table Europe Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table China Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table Japan Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table India Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table Southeast Asia Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table Central and South America Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
(continued)

About HongChun Research: HongChun Research's main aim is to assist our clients by giving them a detailed perspective on current market trends and to build long-lasting connections with our clientele. Our studies are designed to provide solid quantitative facts combined with strategic industry insights acquired from proprietary sources and an in-house model.

Contact Details:
Jennifer Gray
Manager, Global Sales
+852 8170 0792


Face it, AI is better at data-analysis than humans – TNW

It's time we stopped pretending that we're computers and let the machines do their jobs. Anodot, a real-time analytics company, is using advanced machine-learning algorithms to overcome the limitations that humans bring to data analysis.

AI can chew up all your data and spit out more answers than you've got questions for, and the e-commerce businesses that don't integrate machine learning into data analysis will lose money.

We've all been there before: you've just launched a brand new product after spending millions on the development cycle and, for no discernible reason, your online marketplace crashes in a major market and you don't find out for several hours. All those sales: lost, like an unsaved file.

Okay, maybe we haven't all been there, but we've definitely been on the other end. Error messages on checkouts, product listings that lead nowhere, and worst of all, shortages. If we don't get what we want when we want it, we'll get it somewhere else. Anomalies in the market, and the ability to respond to them, can be the difference between profits and shutters for any business.

Data analysis isn't a popular water-cooler topic anywhere, presumably even at companies that specialize in it. Rebecca Herson, Vice President of Marketing for Anodot, explains the need for AI in the field:

"There's just so much data being generated, there's no way for a human to go through it all. Sometimes, when we analyze historical data for businesses we're introducing Anodot to, they discern things they never knew were happening. Obviously businesses know if servers go down, but if you have a funnel leaking in a few different places it can be difficult to find all the problems."

The concern isn't just lost sales; there's also product-supply disruption and customer satisfaction to worry about. In numerous case studies, Anodot found an estimated 80 percent of the anomalies its machine-learning software uncovered were negative factors, as opposed to positive opportunities. These companies were losing money because they weren't aware of specific problems.

We've seen data-analysis software before, but Anodot's use of machine learning is an entirely different application. Anodot is using unsupervised AI, which draws on deep learning, to autonomously find new ways to categorize and understand data.
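The underlying idea of unsupervised anomaly detection on a business metric can be illustrated very simply. The sketch below is not Anodot's implementation, just a minimal illustration using a rolling z-score on a synthetic checkouts-per-minute series; the metric, window, and threshold are assumptions.

```python
import numpy as np

def rolling_zscore_anomalies(series, window=60, threshold=4.0):
    """Flag points that deviate strongly from their recent history.

    series: 1-D array of a business metric (e.g. checkouts per minute).
    Returns indices whose z-score against the trailing window exceeds
    the threshold. A real system would also model seasonality and trend.
    """
    series = np.asarray(series, dtype=float)
    anomalies = []
    for t in range(window, len(series)):
        recent = series[t - window:t]
        mu, sigma = recent.mean(), recent.std()
        if sigma == 0:
            continue
        if abs(series[t] - mu) / sigma > threshold:
            anomalies.append(t)
    return anomalies

# Example: a steady metric that suddenly drops (e.g. a checkout outage).
rng = np.random.default_rng(0)
metric = np.concatenate([rng.normal(100, 5, 300), rng.normal(20, 5, 30)])
print(rolling_zscore_anomalies(metric))  # flags points around index 300
```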

With customers like Microsoft, Google Waze, and Comcast, it would appear as though this software is prohibitively complex and designed for the tech elite, but Herson explains:

"This is something that, while data scientist is the new sexy profession, you won't need one to use this. It's got the data scientist baked in. If you have one, they can leverage this to provide immediate results. An e-commerce strategist can leverage the data and provide real-time analysis. This isn't something that requires a dedicated staff; your existing analysts can use this."

While we ponder the future of AI, companies like Anodot are applying it in all the right ways (see: non-lethal and money-saving). Automating data analysis isn't quite as thrilling as an AI that can write speeches for the President, but it's far more useful.


Companies Work on AI-Based Sensors, Weapons for Use in Image Processing, Target Identification – ExecutiveBiz


Defense contractors are working on artificial intelligence-powered sensors and other partially autonomous machines that could help the U.S. Army process images and identify targets, Breaking Defense reported Thursday.

Vern Boyle, vice president for advanced capabilities at Northrop Grumman, said companies are developing sensors that can identify features and share data with other systems without requiring a lot of command and control back into physical systems.

An example of a weapon system that can see, share and record data is the Ripsaw robotic tank demonstrator from Textron Systems, Howe & Howe and FLIR Systems. This combat vehicle features a Skyraider quadcopter drone and a ground robot.

The quality of image processing by sensors and other machines relies on the quality of the collected data, and industry executives said companies should train algorithms on weird images and data to ensure their accuracy in target identification.

"Should we bias training data towards the weird stuff?" said Patrick Biltgen of Perspecta. "If there's a war, we're almost certain to see weird things we've never seen before."
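Biasing a training set towards rare events usually means oversampling them. The snippet below is a generic illustration of that idea, not any contractor's pipeline; the example labels and target ratio are assumptions.

```python
import random

def oversample_rare(examples, is_rare, target_fraction=0.3, seed=0):
    """Duplicate rare ('weird') examples until they make up roughly
    target_fraction of the training set."""
    random.seed(seed)
    rare = [e for e in examples if is_rare(e)]
    common = [e for e in examples if not is_rare(e)]
    if not rare:
        return examples
    needed = int(target_fraction * len(common) / (1 - target_fraction))
    boosted = [random.choice(rare) for _ in range(needed)]
    dataset = common + boosted
    random.shuffle(dataset)
    return dataset

# Example: image records where the "weird" ones show occluded targets.
data = [{"id": i, "occluded": i % 50 == 0} for i in range(1000)]
balanced = oversample_rare(data, lambda e: e["occluded"])
print(sum(e["occluded"] for e in balanced), "rare examples of", len(balanced))
```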


Othot develops AI software to help universities predict whether students will succeed – NEXTpittsburgh

By Jamie Schuman

A Green Tree startup is using artificial intelligence to help students succeed in college.

Othot's software is designed to improve the admissions process, increase student retention and graduation rates, and boost job placement. Its newest product uses data and machine learning to identify students who are at risk of dropping out, the reasons why, and what schools can do to help them.

"It's all about optimizing the student's situation," says Andy Hannah, co-founder and CEO of Othot, which specializes in artificial intelligence and analytic solutions for colleges and universities. "We want that student to graduate, and with as low a debt as possible."

The Othot platform is cloud-based software that can be accessed 24 hours a day from a web browser. (Othot derives its name from combining "original" and "thought.")

It uses a large variety of data points, such as a student's high school and college grades, financial circumstances and co-curricular activities, as well as census numbers and other information, to understand people "at a very deep level," Hannah says.

Andy Hannah, co-founder and CEO of Othot

The software can predict whether a student will struggle academically and suggest ways to help them succeed. Administrators may get suggestions on increasing a student's financial aid or on when to start academic counseling.

The University of Pittsburgh uses Othot's AI-driven recommendations to help students choose study abroad programs, which are important for student retention, says Stephen Wisniewski, Pitt's vice provost for data and information.

"We know that an engaged student is more likely to persist and at a much higher rate," Wisniewski says. "Study abroad programs at Pitt are a centerpiece of that measure of engagement."

Othot has worked for the past year and a half to develop its student retention tool, and Pitt is one of a handful of universities already using it. The company is now ready to roll out the product more broadly, Hannah says.

Hannah says the tool is useful because as college costs increase, administrators have a duty to make sure that students succeed and graduate. And as enrollments are projected to decrease in coming years due to lower birth rates, universities will compete against each other to recruit and retain freshmen, he says.

Hannah says the new tool is incredibly accurate because it uses a nonlinear model, which looks at the relationship between thousands of variables, whereas other products may focus predominantly on grades.
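To make "nonlinear model over many variables" concrete, here is a generic sketch of a dropout-risk classifier trained with gradient boosting on synthetic data. It is not Othot's model; every feature name, the label rule, and the library choice (scikit-learn) are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
# Hypothetical features: GPA, unmet financial need ($k), activity hours/week.
X = np.column_stack([
    rng.normal(3.0, 0.5, n),   # GPA
    rng.gamma(2.0, 3.0, n),    # unmet financial need
    rng.poisson(4, n),         # co-curricular hours per week
])
# Synthetic "at risk" label: risk rises when GPA is low AND need is high.
at_risk = (X[:, 0] < 2.6) & (X[:, 1] > 8) | (rng.random(n) < 0.05)
y = at_risk.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
# The model can then rank current students by predicted dropout risk.
print("risk scores:", model.predict_proba(X_te[:3])[:, 1])
```

The point of the nonlinear model is that it can pick up the interaction between low grades and high financial need, which a grades-only threshold would miss.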

"We are in a different era related to the use of technology and the understanding of the individual," Hannah says. "It's just refreshing to me to see that power being used to help students reach their desired endpoints, which is graduating and getting great jobs with debt that they can manage."



The Impact of Artificial Intelligence on Workspaces – Forbes

Intelligent and intelligible office buildings

It is a truth universally acknowledged that artificial intelligence will change everything. In the next few decades, the world will become intelligible, and in many ways, intelligent. But insiders suggest that the world of big office real estate will get there more slowly - at least in the world's major cities.

The real estate industry in London, New York, Hong Kong and other world cities moves in cycles of 10 or 15 years. This is the period of the lease. After a tense renewal negotiation, and perhaps a big row, landlord and tenant are generally happy to leave each other alone until the next time. This does not encourage innovation, or investment in new services in between the renewals. There are alternatives to this arrangement. In Scandinavia, for instance, lease durations are shorter - often three years or so. This encourages a more collegiate working relationship, where landlord and tenant are more like business partners.

Another part of the pathology of major city real estate is the landmark building. With the possible exception of planners, everyone likes grand buildings: certainly, architects, developers, and the property managers and CEOs of big companies do. A mutual appreciation society is formed, which is less concerned about the impact on a business than about appearing in the right magazines, and winning awards.

Outside the big cities, priorities are different. To attract a major tenant to Dixons' old headquarters in Hemel Hempstead, for instance, the landlord will need to seduce with pragmatism rather than glamour.

Tim Oldman is the founder and CEO of Leesman, a firm which helps clients understand how to manage their workspaces in the best interests of their staff and their businesses. He says there is plenty of opportunity for AI to enhance real estate, and much of the impetus for it to happen will come from the employees who work in office buildings rather than the developers who design and build them. Employees, the actual users of buildings, will be welcoming AI into many corners of their lives in the coming years and decades, often without realising it. They will expect the same convenience and efficiency at work that they experience at home and when travelling. They will demand more from their employers and their landlords.

Christina Wood is responsible for two of Emap's conferences on the office sector: Property Week's annual flagship event WorkSpace, and AV Magazine's new annual event AVWorks, which explores the changing role of AV in the workspace. She says that workspaces are undergoing an evolution that increasingly looks like a revolution, powered by technology innovation and driven by workforce demands for flexibility, connectivity, safety and style.

Buildings should be smart, and increasingly they will be. Smart buildings will be a major component of smart cities, a phenomenon which we have been hearing about since the end of the last century, and which will finally start to become a reality in the coming decade, enabled in part by 5G.

Buildings should know what load they are handling at any given time. They should provide the right amount of heat and light: not too little and not too much. The air conditioning should not go off at 7pm when an after-hours conference is in full flow. They should monitor noise levels, and let occupants know where the quiet places are, if they ask. They should manage the movement of water and waste intelligently. All this and much more is possible, given enough sensors, and a sensible approach to the use of data.
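As a toy illustration of that kind of building logic, the sketch below keeps climate control running while occupancy sensors still report people in a zone, instead of switching off on a fixed schedule. It is a hypothetical example; the function, cut-off time, and thresholds are invented for illustration, not taken from any building-management product.

```python
from datetime import time

def hvac_should_run(zone_occupancy, now, schedule_off=time(19, 0), min_occupants=1):
    """Keep heating/cooling on after the scheduled cut-off while a zone
    is still occupied (e.g. an after-hours conference).

    zone_occupancy: latest head-count from the zone's occupancy sensors.
    now: current local time as a datetime.time.
    """
    if now < schedule_off:
        return True                          # normal working hours
    return zone_occupancy >= min_occupants   # stay on only if people remain

# Example: 20:30, six people still in the conference zone -> keep running.
print(hvac_should_run(zone_occupancy=6, now=time(20, 30)))  # True
print(hvac_should_run(zone_occupancy=0, now=time(20, 30)))  # False
```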

Imagine we are colleagues who usually work in different buildings. Today we are both in the head office, and our calendars show that we have scheduled a meeting. An intelligent building could suggest workspaces near to each other. Tim Oldman calls this "assisted serendipity".

Generation Z is coming into the workplace. They are not naive about data and the potential for its mis-use, but they are more comfortable with sharing it in return for a defined benefit. Older generations are somewhat less trusting. We expect our taxi firm to know when we will be exiting the building, and to have a car waiting. But we are suspicious if the building wants to know our movements. Employees in Asian countries show more trust than those in France and Germany, say, with the US and the UK in between.

Robotic process automation, or RPA, can make mundane office interactions smoother and more efficient. But we will want it to be smart. IT helpdesks should not be rewarded for closing a ticket quickly, but for solving your problem in a way which means you won't come back with the same problem a week later, and neither will anyone else.

That said, spreadsheet-driven efficiency is not always the best solution. Face-to-face genius bar-style helpdesks routinely deliver twice the level of customer satisfaction as the same service delivered over the phone, even when they use exactly the same people, the same technology, and the same infrastructure. There is a time and place for machines, and a time and a place for humans.

Rolls Royce is said to make more money from predictive maintenance plans than it makes by selling engines. Sensors in its engines relay huge volumes of real-time data about each engine component to headquarters in Derby. If a fault is developing, the company can often have the relevant spare part waiting at the next airport before the pilot even knows there's a problem. One day, buildings will operate this way too.

The technology to enable these services is not cheap today, and an investment bank or a top management consultancy can offer their employees features which will not be available for years to workers in the garment industry in the developing world. There will be digital divides, but the divisions will be constantly changing, with laggards catching up, and sometimes overtaking, as they leapfrog legacy infrastructures. China is a world leader in smartphone payment apps partly because its banking infrastructure was so poor.

Covid will bring new pressure to bear on developers and landlords. Employees will demand biosecurity measures such as the provision of fresh, filtered air rather than re-circulated air. They may want to know how many people are in which parts of the building, to help them maintain physical distancing. This means more sensors, and more data.

The great unplanned experiment in working from home which we are all engaged in thanks to covid-19 will probably result in a blended approach to office life in the future. Working from home suits some people very well, reducing commuting time and enabling them to spend more time with their families. But others miss the decompression that commuting allows, and many of us don't have good working environments at home. In the winter, many homes are draughty, and the cost of heating them all day long can be considerable.

Tim Oldman thinks the net impact on demand for office space will probably be a slight reduction overall, and a new mix of locations. There are indications that companies will provide satellite offices closer to where their people live, perhaps sharing space with workers from other firms. This is the same principle as the co-working facilities provided by WeWork and Regus, but whereas those companies have buildings in city centres, there will be a new demand for space on local High Streets.

Retail banks have spotted this as an opportunity, a way of using the branch network which they have been shrinking as people shift to online banking. Old bank branches can be transformed into safe and comfortable satellite offices, and restore some life to tired suburban streets. Companies will have to up their game to co-ordinate this more flexible approach, and landlords will need to help them. They will need to collect and analyse information about where their people are each day, and develop and refine algorithms to predict where they will be tomorrow.

Some employers will face a crisis of trust as we emerge from the pandemic. Millions of us have been trusted to work from home, and to the surprise of more than a few senior managers, it has mostly worked well. Snatching back the laptop and demanding that people come straight back to the office is not a good idea. Companies will adopt different approaches, and some will be more successful than others. Facebook has told its staff they can work from wherever they want, but their salary will be adjusted downwards if they leave the Bay Area. Google has simply offered every employee $1,000 to make their home offices more effective.

The way we work is being changed by lessons learned during the pandemic, and by the deployment of AI throughout the economy. Builders and owners of large office buildings must not get left behind.


What is Artificial Intelligence (AI)? | IBM

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

A number of definitions of artificial intelligence (AI) have surfaced over the last few decades. John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the artificial intelligence conversation began with Alan Turing's 1950 work "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside of IBM). In this paper, Turing, often referred to as the "father of computer science", asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test", where a human interrogator would try to distinguish between a computer and human text response. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI.

One of the leading AI textbooks is Artificial Intelligence: A Modern Approach (link resides outside IBM) (PDF, 20.9 MB), by Stuart Russell and Peter Norvig. In the book, they delve into four potential goals or definitions of AI, which differentiate computer systems as follows:

Human approach: systems that think like humans, and systems that act like humans.

Ideal approach: systems that think rationally, and systems that act rationally.

Alan Turing's definition would have fallen under the category of systems that act like humans.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. Expert systems, an early successful application of AI, aimed to copy a human's decision-making process. In the early days, it was time-consuming to extract and codify the human's knowledge.

AI today includes the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines consist of AI algorithms that typically make predictions or classifications based on input data. Machine learning has improved the quality of some expert systems and made it easier to create them.

Today, AI plays an often invisible role in everyday life, powering search engines, product recommendations, and speech recognition systems.

There is a lot of hype about AI development, which is to be expected of any emerging technology. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman notes (01:08:15) (link resides outside IBM) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations continue around AI ethics, we can see the initial glimpses of the trough of disillusionment. Read more about where IBM stands on AI ethics here.

Weak AI, also called Narrow AI or Artificial Narrow Intelligence (ANI), is AI trained to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some powerful applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial General Intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, AI researchers are exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the rogue computer assistant in 2001: A Space Odyssey.

Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

The way in which deep learning and machine learning differ is in how each algorithm learns. "Deep" machine learning can use labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn't necessarily require a labeled dataset. Deep learning can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the set of features which distinguish different categories of data from one another. This eliminates some of the human intervention required and enables the use of larger data sets. You can think of deep learning as "scalable machine learning", as Lex Fridman notes in the same MIT lecture from above. Classical, or "non-deep", machine learning is more dependent on human intervention to learn. Human experts determine the set of features to understand the differences between data inputs, usually requiring more structured data to learn.

Deep learning (like some machine learning) uses neural networks. The "deep" in a deep learning algorithm refers to a neural network with more than three layers, including the input and output layers. Such a network is typically pictured as a stack of layers, with data flowing from the input layer through one or more hidden layers to the output layer.
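As a rough, hedged illustration (not an IBM example), here is the forward pass of a tiny fully connected network with an input layer, two hidden layers, and an output layer, i.e. more than three layers in total; the layer sizes and random weights are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def forward(x, weights, biases):
    """Forward pass through a small deep network: input -> 2 hidden -> output."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                  # hidden layers with ReLU activations
    return h @ weights[-1] + biases[-1]      # raw output scores (logits)

# Layer sizes: 4 inputs -> 16 -> 8 -> 3 outputs (arbitrary choices).
sizes = [4, 16, 8, 3]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

x = rng.normal(size=(1, 4))                  # one example with 4 features
print(forward(x, weights, biases))           # scores for 3 hypothetical classes
```

Training such a network means adjusting the weights and biases by backpropagation; the sketch only shows the layered structure the "deep" label refers to.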

The rise of deep learning has been one of the most significant breakthroughs in AI in recent years, because it has reduced the manual effort involved in building AI systems. Deep learning was in part enabled by big data and cloud architectures, making it possible to access huge amounts of data and processing power for training AI solutions.

There are numerous, real-world applications of AI systems today. Below are some of the most common examples:

Computer vision: This AI technology enables computers to derive meaningful information from digital images, videos, and other visual inputs, and then take the appropriate action. Powered by convolutional neural networks, computer vision has applications in photo tagging on social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.
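To make the computer-vision example concrete, here is a minimal convolutional network of the kind used for image classification. PyTorch is an assumption (the article names no framework), and the layer sizes are illustrative only.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A minimal convolutional network for 32x32 RGB images, 10 classes."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
images = torch.randn(4, 3, 32, 32)            # a batch of 4 random "images"
print(model(images).shape)                    # torch.Size([4, 10])
```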

Recommendation engines: Using past consumption behavior data, AI algorithms can help to discover data trends that can be used to develop more effective cross-selling strategies. This approach is used by online retailers to make relevant product recommendations to customers during the checkout process.
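A bare-bones version of the recommendation-engine idea is item-to-item co-occurrence: suggest products that are frequently bought together with what is already in the basket. The sketch below is illustrative only; the product names and orders are made up.

```python
from collections import Counter
from itertools import combinations

# Hypothetical past orders (each order is a set of product IDs).
orders = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"laptop", "mouse"},
    {"phone", "charger"},
    {"laptop", "mouse", "sleeve"},
]

# Count how often each pair of items is purchased together.
co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(basket, k=2):
    """Suggest the k items most often co-purchased with the current basket."""
    scores = Counter()
    for item in basket:
        for (a, b), n in co_counts.items():
            if a == item and b not in basket:
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend({"phone"}))   # e.g. ['case', 'charger']
```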

Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.

Fraud detection: Banks and other financial institutions can use machine learning to spot suspicious transactions. Supervised learning can train a model using information about known fraudulent transactions. Anomaly detection can identify transactions that look atypical and deserve further investigation.
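To make the two fraud-detection approaches concrete: a supervised model learns from labelled fraud cases, while anomaly detection flags transactions that simply look unusual. The sketch below shows the unsupervised side using scikit-learn's IsolationForest on synthetic transaction data; it is illustrative, not a production fraud system, and the features are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic transactions: [amount in $, hour of day].
normal = np.column_stack([rng.gamma(2.0, 30.0, 1000), rng.integers(8, 22, 1000)])
odd = np.array([[5000.0, 3], [4200.0, 4]])         # large amounts at 3-4 am
X = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)                        # -1 = anomaly, 1 = normal
print("flagged transactions:", X[labels == -1][:5])
```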

Since the advent of electronic computing, some important events and milestones in the evolution of artificial intelligence include the following:

While Artificial General Intelligence remains a long way off, more and more businesses will adopt AI in the short term to solve specific challenges. Gartner predicts (link resides outside IBM) that 50% of enterprises will have platforms to operationalize AI by 2025 (a sharp increase from 10% in 2020).

Knowledge graphs are an emerging technology within AI. They can encapsulate associations between pieces of information and drive upsell strategies, recommendation engines, and personalized medicine. Natural language processing (NLP) applications are also expected to increase in sophistication, enabling more intuitive interactions between humans and machines.

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions

Sign up for an IBMid and create your IBM Cloud account.


AI tool suggests ways to improve your outfit – Futurity: Research News


You are free to share this article under the Attribution 4.0 International license.

A new artificial intelligence system can look at a photo of an outfit and suggest helpful tips to make it more fashionable.

Suggestions may include tweaks such as selecting a sleeveless top or a longer jacket.

"We thought of it like a friend giving you feedback," says Kristen Grauman, a professor of computer science at the University of Texas at Austin whose previous research has largely focused on visual recognition for artificial intelligence.

"It's also motivated by a practical idea: that we can work with a given outfit to make small changes so it's just a bit better."

The tool, named Fashion++, uses visual recognition systems to analyze the color, pattern, texture, and shape of garments in an image. It considers where edits will have the most impact. It then offers several alternative outfits to the user.

Researchers trained Fashion++ using more than 10,000 images of outfits shared publicly on online sites for fashion enthusiasts. Finding images of fashionable outfits was easy, says graduate student Kimberly Hsiao. Finding unfashionable images proved challenging. So she came up with a workaround: she mixed images of fashionable outfits to create less-fashionable examples and trained the system on what not to wear.
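One way to picture that workaround: treat each fashionable outfit as a set of garment slots, then shuffle garments across outfits to manufacture mismatched, less-fashionable negatives. The sketch below is a loose illustration of the idea, not the Fashion++ training code; the garments are invented.

```python
import random

random.seed(0)

# Hypothetical fashionable outfits, each a dict of garment slots.
positives = [
    {"top": "silk blouse", "bottom": "pencil skirt", "shoes": "heels"},
    {"top": "band tee", "bottom": "ripped jeans", "shoes": "sneakers"},
    {"top": "knit sweater", "bottom": "wool trousers", "shoes": "loafers"},
]

def make_negatives(outfits, n):
    """Create 'unfashionable' examples by mixing slots from different outfits."""
    negatives = []
    for _ in range(n):
        mixed = {slot: random.choice(outfits)[slot] for slot in outfits[0]}
        if mixed not in outfits:          # keep only genuinely mixed combinations
            negatives.append(mixed)
    return negatives

for bad in make_negatives(positives, 4):
    print(bad)   # e.g. silk blouse + ripped jeans + loafers
```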

As fashion styles evolve, the AI can continue to learn by being given new images, which are abundant on the internet, Hsiao says.

As in any AI system, bias can creep in through the data sets for Fashion++. The researchers point out that vintage looks are harder to recognize as stylish because the training images came from the internet, which has been in wide use only since the 1990s. Additionally, because the users submitting images were mostly from North America, styles from other parts of the world don't show up as much.

Another challenge is that many images of fashionable clothes appear on models, but bodies come in many sizes and shapes, affecting fashion choices. Next up, Grauman and Hsiao are working toward letting the AI learn what flatters different body shapes so its recommendations can be more tailored.

"We are examining the interaction between how a person's body is shaped and how the clothing would suit them. We're excited to broaden the applicability to people of all body sizes and shapes by doing this research," Grauman says.

Grauman and Hsiao will present a paper on their approach at next week's International Conference on Computer Vision in Seoul, South Korea.

Additional researchers from Cornell Tech, Georgia Tech, and Facebook AI Research contributed to the work.

Source: UT Austin


For AI, data are harder to come by than you think – The Economist

Jun 11th 2020

AMAZON'S GO STORES are impressive places. The cashier-less shops, which first opened in Seattle in 2018, allow app-wielding customers to pick up items and simply walk out with them. The system uses many sensors, but the bulk of the magic is performed by cameras connected to an AI system that tracks items as they are taken from shelves. Once the shoppers leave with their goods, the bill is calculated and they are automatically charged.

Doing that in a crowded shop is not easy. The system must handle crowded stores, in which people disappear from view behind other customers. It must recognise individual customers as well as friends or family groups (if a child puts an item into a family basket, the system must realise that it should charge the parents). And it must do all that in real-time, and to a high degree of accuracy.

Teaching the machines required showing them a lot of training data in the form of videos of customers browsing shelves, picking up items, putting them back and the like. For standardised tasks like image recognition, AI developers can use public training datasets, each containing thousands of pictures. But there was no such training set featuring people browsing in shops.

Some data could be generated by Amazon's own staff, who were allowed into test versions of the shops. But that approach took the firm only so far. There are many ways in which a human might take a product from a shelf and then decide to choose it, put it back immediately or return it later. To work in the real world, the system would have to cover as many of those as possible.

In theory, the world is awash with data, the lifeblood of modern AI. IDC, a market-research firm, reckons the world generated 33 zettabytes of data in 2018, enough to fill seven trillion DVDs. But Kathleen Walch of Cognilytica, an AI-focused consultancy, says that, nevertheless, data issues are one of the most common sticking-points in any AI project. As in Amazon's case, the required data may not exist at all. Or they might be locked up in the vaults of a competitor. Even when relevant data can be dug up, they might not be suitable for feeding to computers.

Data-wrangling of various sorts takes up about 80% of the time consumed in a typical AI project, says Cognilytica. Training a machine-learning system requires large numbers of carefully labelled examples, and those labels usually have to be applied by humans. Big tech firms often do the work internally. Companies that lack the required resources or expertise can take advantage of a growing outsourcing industry to do it for them. A Chinese firm called MBH, for instance, employs more than 300,000 people to label endless pictures of faces, street scenes or medical scans so that they can be processed by machines. Mechanical Turk, another subdivision of Amazon, connects firms with an army of casual human workers who are paid a piece rate to perform repetitive tasks.

Cognilytica reckons that the third-party data-preparation market was worth more than $1.5bn in 2019 and could grow to $3.5bn by 2024. The data-labelling business is similar, with firms spending at least $1.7bn in 2019, a number that could reach $4.1bn by 2024. Mastery of a topic is not necessary, says Ron Schmelzer, also of Cognilytica. In medical diagnostics, for instance, amateur data-labellers can be trained to become almost as good as doctors at recognising things like fractures and tumours. But some amount of what AI researchers call "domain expertise" is vital.

The data themselves can contain traps. Machine-learning systems correlate inputs with outputs, but they do it blindly, with no understanding of broader context. In 1968 Donald Knuth, a programming guru, warned that computers "do exactly what they are told, no more and no less". Machine learning is full of examples of Mr Knuth's dictum, in which machines have followed the letter of the law precisely, while being oblivious to its spirit.

In 2018 researchers at Mount Sinai, a hospital network in New York, found that an AI system trained to spot pneumonia on chest x-rays became markedly less competent when used in hospitals other than those it had been trained in. The researchers discovered that the machine had been able to work out which hospital a scan had come from. (One way was to analyse small metal tokens placed in the corner of scans, which differ between hospitals.)

Since one hospital in its training set had a baseline rate of pneumonia far higher than the others, that information by itself was enough to boost the system's accuracy substantially. The researchers dubbed that clever wheeze "cheating", on the grounds that it failed when the system was presented with data from hospitals it did not know.

Bias is another source of problems. Last year America's National Institute of Standards and Technology tested nearly 200 facial-recognition algorithms and found that many were significantly less accurate at identifying black faces than white ones. The problem may reflect a preponderance of white faces in their training data. A study from IBM, published last year, found that over 80% of faces in three widely used training sets had light skin.
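Auditing that kind of disparity is mechanically simple once predictions are labelled by demographic group: compute accuracy separately per group and compare. The snippet below is a generic illustration with made-up labels, not NIST's methodology.

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Return identification accuracy for each demographic group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

# Made-up evaluation results for a face-matching task.
y_true = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.6}
```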

Such deficiencies are, at least in theory, straightforward to fix (IBM offered a more representative dataset for anyone to use). Other sources of bias can be trickier to remove. In 2017 Amazon abandoned a recruitment project designed to hunt through CVs to identify suitable candidates when the system was found to be favouring male applicants. The post mortem revealed a circular, self-reinforcing problem. The system had been trained on the CVs of previous successful applicants to the firm. But since the tech workforce is already mostly male, a system trained on historical data will latch onto maleness as a strong predictor of suitability.

Humans can try to forbid such inferences, says Fabrice Ciais, who runs PwC's machine-learning team in Britain (and Amazon tried to do exactly that). In many cases they are required to: in most rich countries employers cannot hire on the basis of factors such as sex, age or race. But algorithms can outsmart their human masters by using proxy variables to reconstruct the forbidden information, says Mr Ciais. Everything from hobbies to previous jobs to area codes in telephone numbers could contain hints that an applicant is likely to be female, or young, or from an ethnic minority.
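A common way to check for such proxies is to see how well the supposedly neutral features predict the protected attribute itself: if even a simple model can recover it, proxies are present. This is a generic sketch on synthetic data, not PwC's tooling; the feature names are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 3000
sex = rng.integers(0, 2, n)                        # protected attribute (0/1)
# "Neutral" features that nonetheless correlate with sex in this fake data:
hobby_score = sex * 0.8 + rng.normal(0, 0.6, n)    # coded hobby keywords
prev_job_code = sex * 0.5 + rng.normal(0, 1.0, n)  # coded previous job title
years_exp = rng.normal(8, 3, n)                    # genuinely unrelated feature
X = np.column_stack([hobby_score, prev_job_code, years_exp])

# If the features were truly uninformative, accuracy would sit near 0.5.
auditor = LogisticRegression(max_iter=1000)
scores = cross_val_score(auditor, X, sex, cv=5)
print("protected attribute recoverable with accuracy:", scores.mean())
```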

If the difficulties of real-world data are too daunting, one option is to make up some data of your own. That is what Amazon did to fine-tune its Go shops. The company used graphics software to create virtual shoppers. Those ersatz humans were used to train the machines on many hard or unusual situations that had not arisen in the real training data, but might when the system was deployed in the real world.

Amazon is not alone. Self-driving car firms do a lot of training in high-fidelity simulations of reality, where no real damage can be done when something goes wrong. A paper in 2018 from Nvidia, a chipmaker, described a method for quickly creating synthetic training data for self-driving cars, and concluded that the resulting algorithms worked better than those trained on real data alone.

Privacy is another attraction of synthetic data. Firms hoping to use AI in medicine or finance must contend with laws such as America's Health Insurance Portability and Accountability Act, or the European Union's General Data Protection Regulation. Properly anonymising data can be difficult, a problem that systems trained on made-up people do not need to bother about.

The trick, says Euan Cameron, one of Mr Ciais's colleagues, is ensuring simulations are close enough to reality that their lessons carry over. For some well-bounded problems such as fraud detection or credit scoring, that is straightforward. Synthetic data can be created by adding statistical noise to the real kind. Although individual transactions are therefore fictitious, it is possible to guarantee that they will have, collectively, the same statistical characteristics as the real data from which they were derived. But the more complicated a problem becomes, the harder it is to ensure that lessons from virtual data will translate smoothly to the real world.
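A minimal version of that noise-based approach, on made-up transaction data: perturb each record, then check that column means and correlations survive. Real systems use more careful methods (and stronger privacy guarantees), so treat this purely as a sketch; the columns and noise scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
# Fake "real" transactions: [amount, customer age, items in basket].
real = np.column_stack([
    rng.gamma(2.0, 40.0, 5000),
    rng.normal(40, 12, 5000),
    rng.poisson(3, 5000) + 1,
])

def synthesize(data, noise_scale=0.3):
    """Create synthetic records by adding Gaussian noise scaled per column."""
    noise = rng.normal(0, noise_scale * data.std(axis=0), size=data.shape)
    return data + noise

synthetic = synthesize(real)

# Collective statistics stay close even though no row is a real customer.
print("mean difference:", np.abs(real.mean(axis=0) - synthetic.mean(axis=0)))
print("max correlation difference:",
      np.abs(np.corrcoef(real.T) - np.corrcoef(synthetic.T)).max())
```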

The hope is that all this data-related faff will be a one-off, and that, once trained, a machine-learning model will repay the effort over millions of automated decisions. Amazon has opened 26 Go stores, and has offered to license the technology to other retailers. But even here there are reasons for caution. Many AI models are subject to drift, in which changes in how the world works mean their decisions become less accurate over time, says Svetlana Sicular of Gartner, a research firm. Customer behaviour changes, language evolves, regulators change what companies can do.

Sometimes, drift happens overnight. "Buying one-way airline tickets was a good predictor of fraud [in automated detection models]," says Ms Sicular. "And then with the covid-19 lockdowns, suddenly lots of innocent people were doing it." Some facial-recognition systems, used to seeing uncovered human faces, are struggling now that masks have become the norm. Automated logistics systems have needed help from humans to deal with the sudden demand for toilet roll, flour and other staples. The world's changeability means more training, which means providing the machines with yet more data, in a never-ending cycle of re-training. "AI is not an install-and-forget system," warns Mr Cameron.

This article appeared in the Technology Quarterly section of the print edition under the headline "Not so big"


Elon Musk and Mark Zuckerberg Exchange Heated Words Over AI. Whose Side Are You On? – Inc.com

Tech billionaires Elon Musk and Mark Zuckerberg are engaged in a very public disagreement about the nature of artificial intelligence (machines that can think) and whether it's a boon or bane to society. It's almost as interesting to follow as the Hollywood supercouple-of-the-month's divorce proceedings.

Just kidding. Let's agree that the former is relevant, the latter ridiculous.

Musk has been warning for some time now that AI is "our greatest existential threat" and that we should fear perpetuating a world where machines are smarter than humans.

It's not that he's against AI: Musk has invested in several AI companies "to keep an eye on them." He's even launched his own AI start-up, Neuralink, intended to connect the human brain with computers to someday do mind-blowing things like repairing cancer lesions and brain injuries (for example).

Musk fears the loss of human control if AI is not very carefully monitored.

Zuckerberg sees things very differently and is apparently frustrated by the fear-mongering. The Facebook chief has made AI a strategic priority for his company. He talks about the advances AI could make in healthcare and self-driving cars, for example.

In a recent Facebook Live session where he was answering a question about Musk's continued warning on AI, the Facebook founder responded, "I think that people who are naysayers and kind of trying to drum up these doomsday scenarios--I just, I don't understand it. I think it's really negative and in some ways I actually think it is pretty irresponsible."

Musk quickly fired back with a tweet.

The debate is sure to continue to volley back and forth in a sort of Wimbledon of the Way Out There.

So, at the risk of Mr. Musk calling me out, I thought I'd try to bring it a bit closer to home so you can follow the debate better and form your own opinion. Here are some of the most commonly cited pros and cons of AI:

So are you more of a Muskie or a Zuckerberger?

Better decide which side you lean towards. Before the machines decide for you.

Read this article:

Elon Musk and Mark Zuckerberg Exchange Heated Words Over AI. Whose Side Are You On? - Inc.com

Facebook Stock Rated Neutral This Week By AI – Forbes


Markets traded higher this week as investors continue to parse the quarterly earnings of many companies. Company-specific moves were mostly on account of their performance in the last quarter. The stimulus package for containing the adverse economic effects of the pandemic will determine how investors react in the days to come, if Congress can come to an agreement. Developments around the tension between the U.S. and China have also kept investors on their toes. Our deep learning algorithms using Artificial Intelligence (AI) technology have selected some Unusual Movers this week.

Sign up for the free Forbes AI Investor newsletter here to join an exclusive AI investing community and get premium investing ideas before markets open.

The first company on our list is Facebook Inc (FB). The stock is rated Neutral and has received factor scores of F in Technical, B in Growth, B in Momentum Volatility and B in Quality Value. The stock is up by 5.82% for the week and looks like a good bet after posting strong quarterly results. The stock closed with a volume of 72,766,364 vs its 10-day volume average of 28,269,713.3. The stock is up 30.79% this year and the financials of the company have been growing at a steady pace over the past years. Operating Income grew by 16.31% in the last fiscal year to $23986.0M, growing by 38.09% over the last three fiscal years from $20203.0M. Revenue is expected to grow by 11.15% over the next 12 months and the stock is trading with a Forward 12M P/E of 30.56.
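
The growth and valuation figures quoted throughout this piece are simple ratios. The sketch below shows the standard definitions of period-over-period growth and a forward P/E; the inputs are hypothetical round numbers, and Forbes' exact base periods and methodology are not disclosed here, so this is not a reproduction of the figures above.

```python
def pct_growth(old: float, new: float) -> float:
    """Total percentage change between two periods."""
    return (new - old) / old * 100

def forward_pe(price: float, expected_eps_next_12m: float) -> float:
    """Share price divided by expected earnings per share over the next year."""
    return price / expected_eps_next_12m

# Hypothetical inputs for illustration only.
print(f"Three-year revenue growth: {pct_growth(50_000.0, 60_000.0):.2f}%")  # 20.00%
print(f"Forward 12M P/E: {forward_pe(100.0, 5.0):.2f}")                     # 20.00
```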

Price of Facebook Inc compared to its Simple Moving Average

Walt Disney Company (DIS), a leading entertainment company, is next on our list. The stock is rated Top Short by our AI and has received ratings of F in Technical, D in Growth, C in Momentum Volatility and D in Quality Value. The price closed up by 11.11% this week after making inroads into the online segment. The volume of shares traded was 16,079,643 vs its 22-day volume average of 13,604,401.91. As for the financials, Revenue grew by 0.28% in the last fiscal year to $69570.0M, growing by 26.52% over the last three fiscal years from $55137.0M. While the ROE has fallen from 20.04% three years ago to 13.92% in the last fiscal year, it is still at a healthy level for investors to consider. Revenue is forecasted to grow by 0.49% in the next 12 months and the stock trades with a Forward 12M P/E of 115.08.

Price of Walt Disney Company compared to its Simple Moving Average

Evergy Inc engages in the generation, transmission, distribution, and sale of electricity in Kansas and Missouri. With factor scores of C in Technical, B in Growth, A in Momentum Volatility and B in Quality Value, the stock is rated Top Buy. The price is down 14.92% for the week and the stock closed with a volume of 3,222,895 vs its 10-day volume average of 3,454,627.6 and its 22-day volume average of 2,458,811.82. EPS grew by 21.57% over the last three fiscal years from $2.27 to $2.79 in the last fiscal year. ROE figures declined to 7.40% in the last year from 8.75% three years ago. Growth in revenue in the next 12 months is expected to be 0.33%. The share is trading with a Forward 12M P/E of 17.9.

Price of Evergy Inc compared to its Simple Moving Average

Next on our list we have The Mosaic Company (MOS), a company that produces and markets concentrated phosphate and potash crop nutrients in North America and internationally. This Neutral-rated stock was given factor scores of A in Technical, B in Growth, F in Momentum Volatility and C in Quality Value. The stock is up 27.62% for the week after it beat Q2 estimates. Volumes also surged as it closed with a volume of 8,035,051 vs its 10-day volume average of 5,330,381.0. Revenue grew by 17.05% over the last three fiscal years to $8906.3M in the last fiscal year from $7409.4M. The firm has reported negative earnings with revenue set to grow by 2.63% over the next 12 months. The stock is trading with a Forward 12M P/E of 36.9.

Price of The Mosaic Company compared to its Simple Moving Average

United Parcel Service Inc (UPS), a company that provides letter and package delivery, specialized transportation, logistics, and financial services, closed higher by 9.9% this week as it looked to increase prices for its services. The stock is rated Neutral and was given factor scores of C in Technical, C in Growth, A in Momentum Volatility and C in Quality Value. Digging into the financials, Revenue grew by 4.43% in the last fiscal year to $74094.0M and grew by 16.21% over the last three fiscal years from $66585.0M. EPS dropped to $5.11 in the last fiscal year compared to $5.61 three years ago. ROE continues to be very high and was at 140.51%, lower compared to 675.15% three years ago. Revenue is expected to grow by 1.66% in the next 12 months and the Forward 12M P/E of the stock is 21.56.

Price of United Parcel Service Inc compared to its Simple Moving Average

Western Digital Corporation (WDC) was down 14.15% last week and the stock closed with a volume of 10,250,035 vs its 10-day volume average of 7,733,489.9 and its 22-day volume average of 6,019,216.68. The stock was given an Unattractive rating by our AI with factor scores of C in Technical, C in Growth, D in Momentum Volatility and D in Quality Value. The manufacturer and seller of storage devices was under pressure after missing its revenue estimates and the outlook is weak too. The financials seem to be in a declining phase with revenue dropping to $16736.0M in the last fiscal year from $20647.0M three years ago. Operating Income was $370.0M in the last fiscal year, significantly lower compared to $3832.0M three years ago. EPS and ROE turned negative to $(0.84) and -2.56% respectively in the last fiscal year compared to $2.2 and 5.88% three years ago. Forward 12M P/E of the stock is at a reasonable level of 11.51.

Price of Western Digital Corporation compared to its Simple Moving Average

Ralph Lauren Corporation (RL), a recognized name in the lifestyle products category, closed with a volume of 1,917,186 vs its 10-day volume average of 1,569,876.8 and its 22-day volume average of 1,154,283.68. The stock is down 7.49% for the week and 43.21% for the year. The stock is rated Neutral and has received factor scores of B in Technical, D in Growth, C in Momentum Volatility and C in Quality Value. Looking at the financials, we see revenue drop to $6159.8M in the last fiscal year from $6182.3M three years ago. Operating income also fell to $602.1M from $663.8M during these three years. The firm saw its EPS increase to $4.98 from $1.97 and ROE improve to 12.85% from 4.82% in three years. Revenue is projected to grow by 16.75% in the next 12 months and the stock is trading with a Forward 12M P/E of 15.61.

Price of Ralph Lauren Corporation compared to its Simple Moving Average

Liked what you read? Sign up for our free Forbes AI Investor Newsletter here to get AI-driven investing ideas weekly. For a limited time, subscribers can join an exclusive Slack group to get these ideas before markets open.

Go here to read the rest:

Facebook Stock Rated Neutral This Week By AI - Forbes

This is why AI shouldn’t design inspirational posters – CNET

Inspirational posters have their place. But if you're not the kind of person to take workplace spark from a beautiful photograph of a random person canoeing at twilight or an eagle soaring, you might want to turn the poster-making over to an artificial intelligence.

An AI dubbed InspiroBot, brought to our attention by IFL Science, puts together some of the most bizarre (and thus delightful) inspirational posters around.

This one's probably not a good idea for either a stranger or a friend.

The dog's cute, but this isn't great advice either.

Hard to argue with this one, which is kinda Yoda-esque.

Hey! Who you callin' "desperate"?

This bot obviously doesn't know many LARPers, or hang around at Renaissance Faires.

The bot's posters fall in between Commander Data trying to offer advice and a mistranslated book of quaint sayings. And they're mostly fun. Except sometimes, when the AI gets really dark and it's time to leave the site entirely and Google kittens fighting themselves in the mirror.

Read more:

This is why AI shouldn't design inspirational posters - CNET

Exyn unveils AI to help drones fly autonomously, even indoors or off the grid – TechCrunch

A startup called Exyn Technologies Inc. today revealed AI software that enables drones to fly autonomously, even in dark, obstacle-filled environments or beyond the reaches of GPS. A spin-out of the University of Pennsylvania's GRASP Labs, Exyn uses sensor fusion to give drones situational awareness much like a human's.

In a demo video shared by the company with TechCrunch, a drone using Exyn's AI can be seen waking up and taking in its surroundings. It then navigates from a launch point in a populated office to the nearest identified exit without human intervention. The route is not pre-programmed, and pilots did not manipulate controls to influence the path that the drone takes. They simply tell it to find and go to the nearest door.

According to Exyn founder Vijay Kumar, a veteran roboticist and dean of Penn's School of Engineering, artificial intelligence that lets drones understand their environment is an order of magnitude more complex than for self-driving cars or ground-based robots.

That's because the world that drones inhabit is inherently 3D. They have to do more than obey traffic laws and avoid pedestrians and trees. They must maneuver over and around obstacles in un-mapped skies where internet connectivity is not consistently available. Additionally, Kumar said, "With drones you actually have to lift and fly with your payload and sensors." Cars roll along on wheels and can carry large batteries. But drones must preserve all the power they can for flight.

The AI that Exyn is adapting from Kumar's original research will work with any type of unmanned aerial vehicle, from popular DJI models to more niche research and industrial UAVs. Exyn Chief Engineer Jason Derenick described how the technology basically works: "We fuse multiple sensors from different parts of the spectrum to let a drone build a 3D map in real time. We only give the drone a relative goal and start location. But it takes off, updates its map and then goes through a process of planning and re-planning until it achieves that goal."
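
The loop Derenick describes (plan against the current map, move, fold new sensor readings into the map, and re-plan until the goal is reached) is a classic pattern in robotics. The sketch below is a deliberately simplified 2-D, grid-based illustration of that loop, not Exyn's implementation: the real system fuses multiple sensors and plans in three dimensions. The function names and the breadth-first planner are assumptions.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on an occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            break
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    if goal not in came_from:
        return None
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

def fly_to(grid, start, goal, sense):
    """Plan, take one step, update the map from sensor readings, re-plan."""
    pos = start
    while pos != goal:
        path = bfs_path(grid, pos, goal)
        if path is None or len(path) < 2:
            return None                  # goal unreachable with current map
        pos = path[1]                    # move one cell along the plan
        for (r, c) in sense(pos):        # newly observed obstacle cells
            grid[r][c] = 1
    return pos
```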

Keeping the technology self-contained on the drone means Exyn-powered UAVs don't rely on outside infrastructure or human pilots to complete a mission. Going forward, the company can integrate data from cloud-based sources.

Exyn, which is backed by IP Group, faces competition from other startups like Iris Automation or Area 17 in Silicon Valley, as well as companies building drones with proprietary autonomous-flight software, like Skydio in Menlo Park, or Israel-based Airobotics.

The startup's CEO and chairman, Nader Elm, is hoping Exyn's AI will yield new uses for drones, and put drones in places where it's not safe or easy for humans to work.

For example, the CEO said, the company's technology could allow drones to count inventory in warehouses filled with towering pallets and robots moving across the ground; or to work in dark mine shafts and unfinished buildings that require frequent inspections for safety and to measure worker productivity.

Looking forward, Exyn's CEO said, "We'll continue advancing the technology to first of all make it more robust and hardened for commercial use while adding features and functionality. Ultimately we want to move from one drone to multiple, collaborating drones that can work on a common mission. We have focused on obstacle avoidance, but we're also thinking about how drones can interact with various things in their environment."

Follow this link:

Exyn unveils AI to help drones fly autonomously, even indoors or off the grid - TechCrunch

Researchers create AI bot to protect the identities of BLM protesters – AI News

Researchers from Stanford have created an AI-powered bot to automatically cover up the faces of Black Lives Matter protesters in photos.

Everyone should have the right to protest. And, if done legally, to do so without fear of having things like their future job prospects ruined because they've been snapped at a demonstration from which a select few may have gone on to commit criminal acts such as arson and looting.

With images from the protests being widely shared on social media to raise awareness, police have been using the opportunity to add the people featured within them to facial recognition databases.

"Over the past weeks, we have seen an increasing number of arrests at BLM protests, with images circulating around the web enabling automatic identification of those individuals and subsequent arrests to hamper protest activity," the researchers explain.

Software has been available for some time to blur faces, but recent AI advancements have proved that it's possible to deblur such images.

Researchers from Stanford Machine Learning set out to develop an automated tool which prevents the real identity of those in an image from being revealed.

The result of their work is BLMPrivacyBot:

Rather than blur the faces, the bot automatically covers them up with the black fist emoji which has become synonymous with the Black Lives Matter movement. The researchers hope such a solution will be built into social media platforms, but admit it's unlikely.

The researchers trained the model for their AI bot on a dataset consisting of around 1.2 million people called QNRF. However, they warn it's not foolproof, as an individual could be identified through other means such as what clothing they're wearing.
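
The researchers' own model and code are linked below; as a rough illustration of the general approach (detect face regions, then paint an opaque overlay that, unlike blurring, leaves nothing to reconstruct), here is a hedged sketch using OpenCV's bundled Haar-cascade face detector. The file names are hypothetical, and this off-the-shelf detector is far less robust on crowd photos than the purpose-trained model the Stanford team describes.

```python
import cv2

def cover_faces(in_path: str, out_path: str) -> int:
    """Detect faces and cover each with an opaque black square.
    Returns the number of faces covered."""
    img = cv2.imread(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # An opaque fill, unlike a blur, leaves nothing to de-blur later.
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 0), thickness=-1)
    cv2.imwrite(out_path, img)
    return len(faces)

print(cover_faces("protest.jpg", "protest_covered.jpg"))  # hypothetical file names
```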

To use the BLMPrivacyBot, you can either send an image to its Twitter handle or upload a photo to the web interface here. The open source repo is available if you want to look at the inner workings.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.


Read more here:

Researchers create AI bot to protect the identities of BLM protesters - AI News

Charlotte-based Tradier teams with Forbes company to offer AI-driven investing platform – WRAL Tech Wire

CHARLOTTE Investors, get ready for some advanced insights driven by artificial intelligence (AI).

Tradier, a Charlotte-based online brokerage specializing in API and White Label platforms, has teamed with Q.ai, a Forbes company, to offer a trading platform utilizing machine-learning algorithms, multi-factor models and other deep quantitative tools.

"We are excited to partner with innovative companies like Q.ai that bring a true opportunity to change the industry and power retail investors with the great technology, data, and AI-based insights," said Dan Raju, co-founder and CEO of Tradier, in a statement. "Q.ai is looking to transform the way trading intelligence is delivered into a content-rich experience."

Q.ai, meanwhile, said the collaboration aligns with its mission to democratize access to AI and other quantitative investing methodologies.

"As active investors and traders, the team at Q.ai understands the value of real-time data and insights to retail investors, and our AI-based engines will help them get a better edge on the market," said Q.ai CEO and founder Stephen Mathai-Davis. "Tradier brings a wealth of experience and, most importantly, advanced APIs and platform capabilities to help us further our mission of changing the way investors achieve their goals."

Read the original post:

Charlotte-based Tradier teams with Forbes company to offer AI-driven investing platform - WRAL Tech Wire

New Research Reveals Adoption and Implementation of Artificial Intelligence in the Enterprise – GlobeNewswire

SAN FRANCISCO, July 09, 2020 (GLOBE NEWSWIRE) -- Informa Tech media brands, InformationWeek and ITPro Today, today announced findings from their latest research survey, the 2020 State of Artificial Intelligence. The team surveyed technology decision makers across North American companies to uncover the ways organizations are approaching and implementing emerging technologies, specifically artificial intelligence (AI) and the Internet of Things (IoT), in order to grow and get ahead of the competition.

Key Findings in the 2020 State of Artificial Intelligence

To download a complimentary copy of The 2020 State of Artificial Intelligence, click here.

Media interested in receiving a copy of the report or the State of AI infographic should contact Briana Pontremoli at Briana.Pontremoli@informa.com.

2020 State of Artificial Intelligence Report Methodology

The survey collected opinions from nearly 300 business professionals at companies engaged with AI-related projects. Nearly 90% of respondents have an IT or technology-related job function, such as application development, security, Internet of Things, networking, cloud, or engineering. Just over half of respondents work in a management capacity, with titles such as C-level executive, director, manager, or vice president. One half are from large companies with 1,000 or more employees, and 20% work at companies with 100 to 999 employees.

About Informa Tech

Informa Tech is a market-leading provider of integrated research, media, training and events to the global Technology community. We're an international business of more than 600 colleagues, operating in more than 20 markets. Our aim is to inspire the Technology community to design, build and run a better digital world through research, media, training and event brands that inform, educate and connect. Over 7,000 professionals subscribe to our research, with 225,000 delegates attending our events and over 18,000 students participating in our training programs each year, and nearly 4 million people visiting our digital communities each month. Learn more about Informa Tech.

Media Contact: Briana Pontremoli, Informa Tech PR, briana.pontremoli@informa.com

See more here:

New Research Reveals Adoption and Implementation of Artificial Intelligence in the Enterprise - GlobeNewswire

Sheba Medical Center Inks Strategic Agreement with Iguazio to Deliver Real-Time AI for COVID-19 Patient Treatment Optimization – Business Wire

HERZLIYA, Israel--(BUSINESS WIRE)--Iguazio, developers of the Data Science Platform built for production and real-time machine learning (ML) applications, announced that it is working with the Sheba Medical Center's ARC innovation complex to deliver real-time AI across a variety of clinical and logistical use cases in order to improve COVID-19 patient treatment.

Sheba is the largest medical facility in Israel and the Middle East and has been ranked amongst the Top 10 Hospitals in the World by Newsweek magazine. Iguazio was selected to facilitate Sheba's transformation with real-time AI and MLOps (machine learning operations) in a variety of projects. One of these projects is optimization of patient care through clinical, real-time predictive insights. Using the Iguazio Data Science Platform, Sheba is combining real-time vital signs from patients with each patient's medical history to predict and mitigate complications such as COVID-19 patient deterioration, or to aid decision-making during surgery.

Another project optimizes the patient's journey with smart mobility, from the moment of the patient's arrival at Sheba to their departure after treatment. The patient's entire journey is orchestrated with AI, including parking allocation, shuttle arrival times, and patient routing and waiting, with times optimized using real-time data to ensure optimal patient care and satisfaction. This also enables the management of patient flow to comply with COVID-19 social distancing regulations.

At the core of these projects is Iguazio's cloud-native, serverless Data Science Platform with its integrated feature store. This technology brings data science to life by automating real-time machine learning pipelines and allowing rapid development, deployment and management of complex AI applications. The projects include collaboration on hybrid and multi-cloud deployments with Microsoft Azure and Google GCP.
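
The announcement does not show Iguazio's APIs, so purely as an illustration of the pattern described (join a patient's stored history with a live vital-signs reading and score a model in real time), here is a generic sketch. The feature names, patient IDs and the score_deterioration helper are hypothetical, and a production deployment would use a proper feature store and model server rather than an in-memory table.

```python
import pandas as pd

# Hypothetical "offline" features keyed by patient ID (a stand-in for a feature store).
historical_features = pd.DataFrame({
    "patient_id": [101, 102],
    "age": [74, 58],
    "chronic_conditions": [3, 1],
}).set_index("patient_id")

def score_deterioration(model, patient_id: int, vitals: dict) -> float:
    """Join real-time vitals with stored history and return a risk probability.
    `model` is any fitted classifier exposing predict_proba (e.g. scikit-learn)."""
    features = historical_features.loc[patient_id].to_dict()
    features.update(vitals)  # e.g. {"heart_rate": 118, "spo2": 91}
    row = pd.DataFrame([features])
    return float(model.predict_proba(row)[:, 1])
```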

"Bringing real-time AI to every aspect of Sheba's emerging City of Health is the next step in our digital transformation," said Eyal Zimlichman MD, MSc, Sheba's Chief Medical Officer and Chief Innovation Officer, as well as the founder of the ARC (Accelerate Redesign Collaborate) innovation complex. "After months of perfecting our AI research across many use cases, it's time to bring them to life in our daily operations."

"Using Iguazio, we are revolutionizing the way we use data, by unifying real-time and historic data from different sources and rapidly deploying and monitoring complex AI models to improve patient outcomes and the City of Health's efficiency," added Nathalie Bloch, MD, Head of Big Data & AI at ARC.

"We are honored to be supporting Sheba, a global leader in healthcare innovation, especially in the midst of the current pandemic, when the community is relying on health facilities the most," commented Asaf Somekh, Co-Founder and CEO of Iguazio. "Incorporating AI into these many real-time use cases is setting a new standard for medical centers worldwide."

On Dec 30th, Sheba is hosting a Big Data and AI conference, where Asaf Somekh will present these projects and discuss how Sheba is bringing data science to life.

Medical centers worldwide are invited to get in touch with the ARC Innovation Center for more information and to discuss how to implement real-time AI in their health facilities.

Earlier today, Iguazio announced the launch of the first integrated feature store within their Data Science Platform to accelerate deployment of AI in any cloud environment.

About Iguazio

The Iguazio Data Science Platform enables enterprises to develop, deploy and manage AI applications at scale. With Iguazio, enterprises can run AI models in real time, deploy them anywhere (multi-cloud, on-prem or edge), and bring to life their most ambitious AI-driven strategies. Enterprises spanning a wide range of verticals, including financial services, manufacturing, smart mobility, telecoms and ad-tech use Iguazio to solve the complexities of MLOps and create business impact through a multitude of real-time use cases such as fraud prevention, predictive maintenance and real-time recommendations. Iguazio is backed by top financial and strategic investors including Pitango, Samsung, Verizon, Bosch, CME Group, and Dell. Iguazio brings data science to life. Find out more on http://www.iguazio.com.

About Sheba Medical Center

Sheba Medical Center, Tel HaShomer is the largest and most comprehensive medical center in the Middle East, which combines an acute care hospital and a rehabilitation hospital on one campus, and it is at the forefront of medical treatments, patient care, research, education and innovation. For the past two years (2019 and 2020), Newsweek Magazine has named Sheba one of the top ten hospitals in the world. For more information, visit: eng.sheba.co.il

See original here:

Sheba Medical Center Inks Strategic Agreement with Iguazio to Deliver Real-Time AI for COVID-19 Patient Treatment Optimization - Business Wire

OpenAI's fiction-spewing AI is learning to generate images – MIT Technology Review

At its core, GPT-2 is a powerful prediction engine. It learned to grasp the structure of the English language by looking at billions of examples of words, sentences, and paragraphs, scraped from the corners of the internet. With that structure, it could then manipulate words into new sentences by statistically predicting the order in which they should appear.

So researchers at OpenAI decided to swap the words for pixels and train the same algorithm on images in ImageNet, the most popular image bank for deep learning. Because the algorithm was designed to work with one-dimensional data (i.e., strings of text), they unfurled the images into a single sequence of pixels. They found that the new model, named iGPT, was still able to grasp the two-dimensional structures of the visual world. Given the sequence of pixels for the first half of an image, it could predict the second half in ways that a human would deem sensible.
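
A minimal sketch of just the data-preparation step described here: unfurling a 2-D image into a 1-D pixel sequence and splitting it into the context the model conditions on and the half it must complete, one pixel at a time. This is not OpenAI's code, and the real iGPT also works at reduced resolutions and with a compressed colour palette before modelling.

```python
import numpy as np

def to_sequence(image: np.ndarray) -> np.ndarray:
    """Unfurl an H x W image into a 1-D pixel sequence (row-major order)."""
    return image.reshape(-1)

def split_for_completion(seq: np.ndarray):
    """First half is the context given to the model; the second half is the
    autoregressive prediction target."""
    half = len(seq) // 2
    return seq[:half], seq[half:]

image = np.random.default_rng(0).integers(0, 256, size=(32, 32), dtype=np.uint8)
context, target = split_for_completion(to_sequence(image))
print(len(context), len(target))  # 512 512
```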

Below, you can see a few examples. The left-most column is the input, the right-most column is the original, and the middle columns are iGPT's predicted completions. (See more examples here.)


The results are startlingly impressive and demonstrate a new path for using unsupervised learning, which trains on unlabeled data, in the development of computer vision systems. While early computer vision systems in the mid-2000s trialed such techniques before, they fell out of favor as supervised learning, which uses labeled data, proved far more successful. The benefit of unsupervised learning, however, is that it allows an AI system to learn about the world without a human filter, and significantly reduces the manual labor of labeling data.

The fact that iGPT uses the same algorithm as GPT-2 also shows its promising adaptability. This is in line with OpenAI's ultimate ambition to achieve more generalizable machine intelligence.

At the same time, the method presents a concerning new way to create deepfake images. Generative adversarial networks, the most common category of algorithms used to create deepfakes in the past, must be trained on highly curated data. If you want to get a GAN to generate a face, for example, its training data should only include faces. iGPT, by contrast, simply learns enough of the structure of the visual world across millions and billions of examples to spit out images that could feasibly exist within it. While training the model is still computationally expensive, offering a natural barrier to its access, that may not be the case for long.

OpenAI did not grant an interview request, but in an internal policy team meeting that MIT Technology Review attended last year, its policy director, Jack Clark, mused about the future risks of GPT-style generation, including what would happen if it were applied to images. "Video is coming," he said, projecting where he saw the field's research trajectory going. "In probably five years, you'll have conditional video generation over a five- to 10-second horizon." He then proceeded to describe what he imagined: you'd feed in a photo of a politician and an explosion next to them, and it would generate a likely output of that politician being killed.

Update: This article has been updated to remove the name of the politician in the hypothetical scenario described at the end.

Link:

OpenAI's fiction-spewing AI is learning to generate images - MIT Technology Review

The Future Of Work Now: Medical Coding With AI – Forbes

Thomas H. Davenport and Steven Miller

The coding of medical diagnosis and treatment has always been a challenging issue. Translating a patient's complex symptoms, and a clinician's efforts to address them, into a clear and unambiguous classification code was difficult even in simpler times. Now, however, hospitals and health insurance companies want very detailed information on what was wrong with a patient and the steps taken to treat them for clinical record-keeping, for hospital operations review and planning, and perhaps most importantly, for financial reimbursement purposes.

More Codes, More Complexity

The current international standard for medical coding is ICD-10 (the tenth version of the International Classification of Disease codes), from the World Health Organization (WHO). ICD-10 has over 14,000 codes for diagnoses. The next update to this international standard, ICD-11, was formally adopted by WHO member states in May 2019. WHO member states, including the US, will begin implementation of ICD-11 as of January 2022. The new ICD-11 has over 55,000 diagnostic codes, four times the number of diagnostic codes contained in the WHO's ICD-10.

The ICD-10 code for being burned due to water-skis on fire, on the computer of Dr. David Opperman at the Colorado Voice Clinic in Denver, Colorado. Opperman, like many other doctors, has had to deal with the new 69,000 diagnostic codes describing patients' ailments, including being burned while water skiing and being injured in a spacecraft collision. (Photo by Brent Lewis/The Denver Post via Getty Images)

In fact, there are even substantially more codes than the numbers given above, at least in the United States. An enhanced version of ICD-10 that is specific to usage in the United States has about 140,000 classification codes: about 70,000 for diagnosis, and another 70,000 for classifying treatments. We expect the enhanced version of ICD-11 that will be specific to usage in the US to have at least several times the number of codes in the WHO version of ICD-11, given that the US version also includes treatment codes and has previously included a larger number of diagnostic codes as well.

No human being can remember all the codes for diseases and treatments, especially as the number of codes has climbed over the decades to tens of thousands. For decades, medical coders have relied on code books to look up the right code for classifying a disease or treatment. Thumbing through a reference book of codes obviously slowed down the process. And it is not just a matter of finding the right code. There are interpretation issues. With ICD-10 and prior versions of the classification scheme, there is often more than one way to code a diagnosis or treatment, and the medical coder has to decide on the most appropriate choices.

Over the past 20 years, the usage of computer-assisted coding systems has steadily increased across the healthcare industry as a means of coping with the increasing complexity of coding diagnosis and treatments. More recent versions of computer-assisted coding systems have incorporated state-of-the-art machine learning methods and other aspects of artificial intelligence to enhance the system's ability to analyze the clinical documentation (charts and notes) and determine which codes are relevant to a particular case. Some medical coders are now working hand-in-hand with AI-enhanced computer-assisted coding systems to identify and validate the correct codes.
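
Commercial computer-assisted coding systems use far richer clinical NLP than this, but a toy sketch conveys the core idea of suggesting candidate codes from chart text: rank a small set of ICD-10 code descriptions by their textual similarity to the note. The code list is a tiny illustrative slice, and the similarity measure (TF-IDF with cosine similarity) is an assumption about approach, not a description of any vendor's system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny, illustrative slice of ICD-10 diagnosis codes and descriptions.
codes = {
    "I50.9":  "heart failure unspecified congestive",
    "E11.9":  "type 2 diabetes mellitus without complications",
    "N18.6":  "end stage renal disease",
    "K35.80": "acute appendicitis unspecified",
}

def suggest_codes(note: str, top_k: int = 3):
    """Rank candidate codes by text similarity to the clinical note."""
    texts = list(codes.values()) + [note]
    tfidf = TfidfVectorizer().fit_transform(texts)
    n = len(codes)
    sims = cosine_similarity(tfidf[n], tfidf[:n]).ravel()
    return sorted(zip(codes.keys(), sims), key=lambda x: -x[1])[:top_k]

print(suggest_codes("Patient admitted with acute appendicitis; "
                    "history of congestive heart failure."))
```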

Elcilene Moseley and AI-Assisted Coding

Elcilene Moseley lives in Florida and is an 11-year veteran medical coder. She previously worked for a company that owned multiple hospitals, but she now works for a coding services vendor that has a contract for coding in the same hospitals Moseley used to work for. She does her work from home, generally working for eight hours to do a certain number of patient charts per day. She specializes in outpatient therapies, often including outpatient surgeries.

Moseley is acutely aware of the increased complexity of coding and is a big supporter of the AI-enhanced computer-assisted coding system, developed by her employer, that suggests codes for her to review. "It's gotten so detailed (right side, left side, fracture displaced or not), there's no way I can remember everything." However, she notes, AI only goes so far. For example, the system may process the text in a chart document, note that the patient has congestive heart failure, and select that disease as a code for diagnosis and reimbursement. But that particular diagnosis is in the patient's history, not what he or she is being treated for now. "Sometimes I'm amazed at how accurate the system's coding is," she says. "But sometimes it makes no sense."

When Moseley opens a chart, on the left side of each page there are codes with pointers to where the code came from in the chart report. Some coders don't bother to read the patient chart from beginning to end, but Moseley believes it's important to do so. "Maybe I am a little old-fashioned," she admits, "but it's more accurate when I read it." She acknowledges that the system makes you faster, but it can also make you a little lazy.

Some patient cases are relatively simple to code, others more complex. "If it's just an appendectomy for a healthy patient," Moseley says, "I can check all the codes and get through it in five minutes." This is despite multiple sections on a chart for even a simple surgery, including patient physical examination, anesthesiology, pathology, etc. On the other hand, she notes,

"If it's a surgery on a 75-year-old man with end-stage kidney disease, diabetes, and cancer, I have to code their medical history, what meds they are taking; it takes much longer. And the medical history codes are important because if the patient has multiple diagnoses, it means the physician is spending more time. Those evaluation and management codes are important for correctly reimbursing the physician and the hospital."

Moseley and other coders are held to a 95% coding quality standard, and their work is audited every three months to ensure they meet it.

When Moseley first began to use AI-enhanced coding a couple of years ago, she was suspicious of it because she thought it might put her out of a job. Now, however, she believes that will never happen and that human coders will always be necessary. She notes that medical coding is so complex, with so many variables and so many specific circumstances, that she believes it will never be fully automated. She has effectively become a code supervisor and auditor, checking the codes the system assigns and verifying whether its recommendations are appropriate for the specific case. In her view, all coders will eventually transition to roles of auditor and supervisor of the AI-enabled coding system. The AI system simply makes coders too productive to not use it.

Educating Coders

Moseley has a two-year Associate of Science in Medical Billing and Coding degree. In addition, she holds several different coding certifications, for general coding and in her specialty fields like emergency medicine. Keeping the certifications active requires regular continuing education units and tests.

Not all coders, however, have this much training. Moseley says that there are lots of sketchy schools that offer online training in medical coding. They often overpromise about the prospects of a lucrative job, with an annual salary of up to $100,000, if a student takes a coding course for six months. Working from home is another appealing aspect of these jobs.

The problem is that hospitals and coding services firms want experienced coders, not entry-level hires with inadequate training. The more straightforward and simpler coding decisions are made by AI; more complex coding decisions and audits require experts. The newbies may be certified, Moseley says, but without prior experience they have a difficult time getting jobs. It would require too much on-the-job training by their employers to make them effective. The two professional associations for medical coding, AAPC (originally the American Academy of Professional Coders) and AHIMA (American Health Information Management Association), both have Facebook pages for their members to discuss issues in the coding field. Moseley says they are replete with complaints about the inability to find the entry-level jobs promised by the coding schools.

For Elcilene Moseley, however, coding, especially with the help of AI, is a good job. She finds it interesting and relatively well-paid. Her work at home, at any hour of the day or night, provides her with a high level of flexibility. And if she ever tired of her current position, she would have options: she is constantly approached by headhunters about other coding jobs. Moseley argues that the only medical coders who are suffering from the use of AI are those at the entry level and those who refuse to learn the new skills needed to work with a smart machine.

Steven Miller is a Professor of Information Systems and Vice Provost for Research at Singapore Management University.

View post:

The Future Of Work Now: Medical Coding With AI - Forbes

SCAN Health Plan Leverages AI Based Predictive Models to Improve Identification of High Risk Members – PRNewswire

"At SCAN our goal is to support our members at every stage of their journey and utilizing advanced technology, such as AI, enables us to do so on a much more proactive basis," said Josh Goode, SCAN chief information officer. "As our members' needs evolve, the KenSci platform allows us to better interpret the needs behind the data so that we can respond with programs and services to help keep them healthy and independent."

As part of the first-phase implementation, SCAN and KenSci, a system of intelligence for healthcare, have launched explainable AI models for healthcare, enabling SCAN to identify members at risk of Hospitalization for Potentially Preventable Complications* (HPC) as well as those eligible for Nursing Facility Level of Care (NFLOC). The platform provides SCAN with insights, proactively identifying members potentially at risk for specific disease states and allowing for early interventions. In addition, SCAN is using machine learning (ML) techniques that are routine across consumer applications but new to healthcare to help identify gaps in care and improve the management of chronic conditions.
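
Neither SCAN's data nor KenSci's models are public, so the sketch below only illustrates the general pattern behind this kind of risk model: fit a classifier on member history against a label such as a past preventable hospitalization, then surface the highest-scoring members for early outreach. The features, the synthetic data and the choice of gradient boosting are all assumptions made for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic member features; a real plan would use claims and clinical history.
rng = np.random.default_rng(0)
members = pd.DataFrame({
    "age": rng.integers(65, 95, 2000),
    "chronic_conditions": rng.integers(0, 6, 2000),
    "er_visits_last_year": rng.integers(0, 4, 2000),
})
latent_risk = (0.4 * members["chronic_conditions"]
               + 0.6 * members["er_visits_last_year"]
               + rng.normal(0, 1, 2000))
members["hospitalized"] = (latent_risk > latent_risk.quantile(0.8)).astype(int)

X = members.drop(columns="hospitalized")
model = GradientBoostingClassifier().fit(X, members["hospitalized"])

# Flag the members with the highest predicted risk for proactive outreach.
scores = model.predict_proba(X)[:, 1]
flagged = members.assign(risk_score=scores).nlargest(100, "risk_score")
print(flagged.head())
```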

"The volume and veracity of data opens up immense possibilities for healthcare organizations to transform the way they support their plan members," said Samir Manjure, co-founder & CEO, KenSci. "We are excited to work with SCAN and appreciate their expertise in developing these tools to meet the needs of seniors. Together, there is tremendous opportunity to impact the health of older adults."

"Data is a critical asset in modern healthcare and harnessing it appropriately provides invaluable insight," said Moon Leung, SCAN senior vice president and chief informatics officer. "By leveraging KenSci's AI expertise, we believe we can utilize our data to improve health outcomes and quality of life for many of our members."

*HPC measure is based on National Committee for Quality Assurance (NCQA) HEDIS technical specifications. For more details, please visit ncqa.org

About SCAN Health Plan

SCAN Health Plan is one of the nation's largest not-for-profit Medicare Advantage plans, serving more than 220,000 members in California. Since its founding in 1977, SCAN has been a mission-driven organization dedicated to keeping seniors healthy and independent. Independence at Home, a SCAN community service, provides vitally needed services and support to seniors and their caregivers. SCAN also offers education programs, community funding, volunteer opportunities and other community services throughout our California service area. To learn more, visit scanhealthplan.com or facebook.com/scanhealthplan or follow us on Twitter @scanhealthplan.

About KenSci

KenSci's machine-learning-powered risk prediction platform helps healthcare providers and payers intervene early by identifying clinical, financial and operational risk to save costs and lives. KenSci's platform is engineered to ingest, transform and integrate healthcare data across clinical, claims, and patient-generated sources. With a library of pre-built models and modular solutions, KenSci's machine learning platform integrates into existing workflows, allowing health systems to better identify utilization and variation and to improve hospital operations. With explainable AI models for healthcare, KenSci is making risk-based prediction more efficient and accountable.

KenSci was incubated at University of Washington's Center for Data Science at UW Tacoma and designed on the cloud with help from Microsoft's Azure4Research grant program. KenSci is headquartered in Seattle, with offices in Singapore and Hyderabad. For more information, visit http://www.kensci.com.

SOURCE SCAN Health Plan

http://www.scanhealthplan.com

See more here:

SCAN Health Plan Leverages AI Based Predictive Models to Improve Identification of High Risk Members - PRNewswire