
Category Archives: Ai

Why you need to pay more attention to combatting AI bias – TechRepublic

Posted: December 3, 2019 at 12:48 am

According to a DataRobot report, nearly half of tech pros are concerned about AI bias, yet many organizations still use untrustworthy AI systems.

As artificial intelligence (AI) continues its march into enterprises, many IT pros are beginning to express concern about potential AI bias in the systems they use.

A new report from DataRobot finds that nearly half (42%) of AI professionals in the US and UK are "very" to "extremely" concerned about AI bias.

The report, conducted last June of more than 350 US- and UK-based CIOs, CTOs, VPs, and IT managers involved in AI and machine learning (ML) purchasing decisions, also found that "compromised brand reputation" and "loss of customer trust" are the most concerning repercussions of AI bias. This prompted 93% of respondents to say they plan to invest more in AI bias prevention initiatives in the next 12 months.

SEE: The ethical challenges of AI: A leader's guide (free PDF) (TechRepublic)

Despite the fact that many organizations see AI as a game changer, many are still using untrustworthy AI systems, said Ted Kwartler, vice president of trusted AI at DataRobot.

He said the survey's finding that 42% of executives are very concerned about AI bias comes as no surprise "given the high-profile missteps organizations have had employing AI." Organizations have to ensure AI methods align with their organizational values, Kwartler said. "Among the many steps needed in an AI deployment, ensuring your training data doesn't have hidden bias helps keep organizations from being reactionary later in the workflow."

DataRobot's research found that while most organizations (71%) currently rely on AI to execute up to 19 business functions, 19% use AI to manage as many as 20 to 49 functions, and the remaining 10% leverage the technology to tackle more than 50 functions.

While managing AI-driven functions within an enterprise can be valuable, it can also present challenges, the DataRobot report said. "Not all AI is treated equal, and without the proper knowledge or resources, companies could select or deploy AI in ways that could be more detrimental than beneficial."

The survey found that more than a third (38%) of AI professionals still use black-box AI systems, meaning they have little to no visibility into how the data inputs into their AI solutions are being used. This lack of visibility could contribute to respondents' concerns about AI bias occurring within their organization, DataRobot said.

AI bias is occurring because "we are making decisions on incomplete data in familiar retrieval systems," said Sue Feldman, president of the cognitive computing and content analytics consultancy Synthexis. "Algorithms all make assumptions about the world and the priorities of the user. That means that unless you understand these assumptions, you will still be flying blind."

This is why it is important to use systems that include humans in the loop, instead of making decisions in a vacuum, added Feldman, who is also co-founder and managing director of the Cognitive Computing Consortium. They are "an improvement over completely automatic systems," she said.

SEE: Managing AI and ML in the enterprise 2019: Tech leaders expect more difficulty than previous IT projects (TechRepublic Premium)

Bias based on race, gender, age or location, and bias based on a specific structure of data, have been long-standing risks in training AI models, according to Gartner.

In addition, opaque algorithms such as deep learning can incorporate many implicit, highly variable interactions into their predictions that can be difficult to interpret, the firm said.

By 2023, 75% of large organizations will hire AI behavior forensic, privacy and customer trust specialists to reduce brand and reputation risk, Gartner predicts.

"New tools and skills are needed to help organizations identify these and other potential sources of bias, build more trust in using AI models, and reduce corporate brand and reputation risk," said Jim Hare, a research vice president at Gartner, in a statement.

"More and more data and analytics leaders and chief data officers (CDOs) are hiring ML forensic and ethics investigators," Hare added.

Organizations such as Facebook, Google, Bank of America, MassMutual, and NASA are hiring or have already appointed AI behavior forensic specialists to focus on uncovering undesired bias in AI models before they are deployed, Gartner said.

If AI is to reach its potential and increase human trust in the systems, steps must be taken to minimize bias, according to McKinsey.

The DataRobot study found that to combat instances of AI bias, 83% of all AI professionals say they have established AI guidelines to ensure AI systems are properly maintained and yielding accurate, trusted outputs.

One finding surprised Kwartler: "I am concerned that only about half of the executives have algorithms in place to detect hidden bias in training data."
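To make the idea concrete, the kind of check Kwartler alludes to can be as simple as comparing positive-outcome rates across demographic groups in the training data before a model ever sees it. The sketch below is purely illustrative (the data, field names, and 0.8 threshold are assumptions, not DataRobot's method); it computes a basic disparate-impact ratio.

```python
# Hypothetical sketch: flag possible hidden bias in training data by
# comparing positive-outcome rates across groups. The dataset, field
# names, and 0.8 threshold are illustrative assumptions only.
from collections import defaultdict

def outcome_rates_by_group(rows, group_key, label_key):
    """Return the fraction of positive labels for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(row[label_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Made-up training rows: group A is approved far more often than B.
training_data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

rates = outcome_rates_by_group(training_data, "group", "approved")
ratio = disparate_impact_ratio(rates)
# A common rule of thumb flags ratios below 0.8 for human review.
flagged = ratio < 0.8
```

A check like this catches only the crudest label skew; production bias audits also look at proxy features and model outputs, but running even this before training is the "earlier in the workflow" step Kwartler describes.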

There were also cultural differences discovered between US and UK respondents to the DataRobot study.

While US respondents are most concerned with emergent bias (bias resulting from a misalignment between the user and the system design), UK respondents are more concerned with technical bias, or bias arising from technical limitations, the study found.

To enhance AI bias prevention efforts, 59% of respondents say they plan to invest in more sophisticated white-box systems, 54% state they will hire internal personnel to manage AI trust, and 48% say they intend to enlist third-party vendors to oversee AI trust, according to the study.

The 48% figure should be higher, Kwartler believes. "Organizations need to own and internalize their AI strategy because that helps them ensure the AI models align with their values. For each business context and industry, models need to be evaluated before and after deployment to mitigate risks," he said.

Besides those AI bias prevention measures, 85% of all global respondents believe AI regulation would be helpful for defining what constitutes AI bias and how it should be prevented, according to the report.




Posted in Ai | Comments Off on Why you need to pay more attention to combatting AI bias – TechRepublic

Maximize The Promise And Minimize The Perils Of Artificial Intelligence (AI) – Forbes

Posted: at 12:48 am

How businesses can use artificial intelligence (AI) to their advantage, perhaps even in a transformative way, without turning the pursuit of AI advantage into a quixotic quest

Frankly, I was hoping an artificial intelligence (AI) algorithm would write this column for me, because who knows more about AI than the mysterious little gremlins that make machine learning possible? That, alas, didn't happen; so I'm on my own.

Like most people in business, I don't need any convincing that artificial intelligence (for most companies, in many areas of their operations) will become a game-changer.

Still, it remains a fluid, if not amorphous, concept in many respects. What, exactly, can we expect AI to do for us that we're not already doing, or how will it improve what we're doing by doing it better, faster, cheaper, with greater insight or fewer errors?

As an important article ("Winning With AI") in the MIT Sloan Management Review put it back in October, AI "can be revolutionary, but executives must act strategically. [And] acting strategically means deciding what not to do." That's not as easy as it sounds.

The problem I have with most discussions of artificial intelligence is that they assume the reader or listener already understands the promises and perils of AI. But based on my conversations with a lot of very intelligent people, I don't think that's always the case.

As Amir Husain, founder and CEO of the Austin, Texas-based machine learning company SparkCognition, explained to Business News Daily last spring, "Artificial intelligence is kind of the second coming of software. It's a form of software that makes decisions on its own, that's able to act even in situations not foreseen by the programmers."

AI is so ubiquitous we hardly even think about it. As the Business News Daily article pointed out: "Most of us interact with artificial intelligence in some form or another on a daily basis. From the mundane to the breathtaking, artificial intelligence is already disrupting virtually every business process in every industry."

Examples abound: online searches, spam filters, smart personal assistants such as Alexa, Echo, Google Home and Siri, the programs that protect our information when we buy (or sell) something online, voice-to-text programs, smart-auto technologies, programs that automatically sound alarms or shut down operating systems when problems are identified, security alarm systems, even those annoying pop-up ads that follow us throughout the day. To one degree or another, they're all based on or impacted by AI.

In other words, most of us are far more familiar with AI (intimately so) than we give ourselves credit for.

The business question is (as the Sloan article correctly put it): How can executives exploit the opportunities, manage the risks, and minimize the difficulties associated with AI? Put another way, how can they use it to their advantage, perhaps even in a transformative way, without turning the pursuit of AI advantage into a quixotic quest, keeping in mind that acting strategically involves deciding what not to do as well as pushing ahead and taking chances in some areas?

Some suggestions from the MIT Sloan Management Review article:

First: Don't treat AI initiatives as everyday technology gambits. They're more important than that. Run them from the C-suite and closely coordinate them with other digital transformation efforts.

Second: Be sure to coordinate AI with the company's overall business strategy. One of the surest ways to come up short (as most AI initiatives do; 40% to 70% of them, according to the Sloan article) is to focus AI narrowly on one set of priorities while the company is equally or more concerned with others. While AI can help companies reduce costs, for example, by identifying waste and inefficiencies, growing the business may be a higher priority.

The Hartford, Conn.-based insurance company Aetna (now a subsidiary of CVS), for example, has been using AI to prevent fraud and uncover overpayments, typical insurance company concerns. It's also been using AI to design products and increase customers and customer engagement. In one Medicare-related Aetna product, the article notes, designers used AI to customize benefits, leading to 180% growth in new member acquisition. More long term, Aetna's head of analytics, VP Ali Keshavarz, told the authors, Aetna's goal is to use AI to become "the first place customers go when they are thinking about their health."

Third: This may be obvious to the geeks among us, but perhaps less so to the more technology-challenged: Be sure to align the production of AI with the consumption of AI.

Fourth: Invest in AI talent, data and process change in addition to (and often more so than) AI technology. Recognize that every successful AI undertaking is the product of a great group of people. While some of this talent should be homegrown, you'll also have to hire from the outside: bring people in to develop and enhance your internal capabilities. That's a fact of modern business life.

As with everything else in business, all companies are different. Their needs are different. Their available resources (financial, talent, patience) are different. And their goals and expectations should be different.

It's important to take the time to understand how to maximize the promise and minimize the pitfalls of AI. If you do, you're more likely to succeed.


Posted in Ai | Comments Off on Maximize The Promise And Minimize The Perils Of Artificial Intelligence (AI) – Forbes

Nvidia will dominate this crucial part of the AI market for at least the next two years – MarketWatch

Posted: at 12:48 am

The principal tasks of artificial intelligence (AI) are training and inferencing. The former is a data-intensive process to prepare AI models for production applications. Training an AI model ensures that it can perform its designated inferencing task, such as recognizing faces or understanding human speech, accurately and in an automated fashion.
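The training/inference split described above can be sketched with a toy nearest-centroid classifier. This is an illustrative assumption, not how production AI models work: real training is the data- and compute-intensive step done once (or periodically), while inference is the cheap, repeated step run in production.

```python
# Toy illustration of the training/inference split using a
# nearest-centroid classifier. Data and labels are made up.

def train(samples):
    """Training: learn one centroid per class from labeled data."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def infer(model, features):
    """Inference: assign the class whose centroid is closest."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: dist2(model[label]))

labeled = [([0.0, 0.1], "cat"), ([0.2, 0.0], "cat"),
           ([1.0, 0.9], "dog"), ([0.8, 1.1], "dog")]

model = train(labeled)                 # data-intensive, done up front
prediction = infer(model, [0.9, 1.0])  # fast, repeated in production
```

The asymmetry is the point of the hardware-market argument that follows: training happens once per model, but inference runs on every request, which is why the inferencing hardware opportunity is projected to outgrow training.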

Inferencing is big business and is set to become the biggest driver of growth in AI. McKinsey has predicted that the opportunity for AI inferencing hardware in the data center will be twice that of AI training hardware by 2025 ($9 billion to $10 billion, vs. $4 billion to $5 billion today). In edge device deployments, the market for inferencing will be three times as large as for training by that same year.

For the overall AI market, the market for deep-learning chipsets will increase from $1.6 billion in 2017 to $66.3 billion by 2025, according to Tractica forecasts.

I believe Nvidia (NVDA) will realize better-than-expected growth due to its early lead in AI inferencing hardware accelerator chips. That lead should last for at least the next two years, given industry growth and the company's current product mix and positioning.

In most server- and cloud-based applications of machine learning, deep learning and natural language processing, the graphics processing unit, or GPU, is the predominant chip architecture used for both training and inferencing. A GPU is a programmable processor designed to quickly render high-resolution images and video, originally used for gaming.

Nvidia's biggest strength, and arguably its largest competitive vulnerability, lies in its core chipset technology. Its GPUs have been optimized primarily for high-volume, high-speed training of AI models, though they also are used for inferencing in most server-based machine learning applications. Today, that GPU technology is a significant competitive differentiator in the AI inferencing market.

Liftr Cloud Insights has estimated that the top four clouds in May 2019 deployed Nvidia GPUs in 97.4% of their infrastructure-as-a-service compute instance types with dedicated accelerators.

While GPUs have a stronghold on training and much of server-based inference, for edge-based inferencing, CPUs rule.

What's the difference between GPUs and CPUs? In simple terms, a CPU is the brains of the computer and a GPU acts as a specialized microprocessor. A CPU can handle multiple tasks, and a GPU can handle a few tasks very quickly. CPUs currently dominate in adoption. In fact, McKinsey projects that CPUs will account for 50% of AI inferencing demand in 2025, with ASICs, which are custom chips designed for specific activities, at 40%, and GPUs and other architectures picking up the rest.

The challenge: While Nvidia's GPUs are extremely capable of handling AI's most resource-intensive inferencing tasks in cloud and server platforms, GPUs are not as cost-effective for automating inferencing within mobile, IoT, and other edge computing uses.

Various non-GPU technologies, including CPUs, ASICs, FPGAs, and various neural network processing units, have performance, cost, and power-efficiency advantages over GPUs in many edge-based inferencing scenarios, such as autonomous vehicles and robotics.

The opportunity: The company no doubt recognizes that the much larger opportunity resides in inferencing chips and other components optimized for deployment in edge devices. But it has its work cut out for it to enhance or augment its current offerings with lower-cost, specialty AI chips to address that important part of the market.

Nvidia continues to enhance its GPU technology to close the performance gap vis-à-vis other chip architectures. One notable recent milestone was the release of AI industry benchmarks that show Nvidia technology setting new records in both training and inferencing performance. The company's forthcoming AI-optimized Jetson Xavier NX hardware module will offer server-class performance in a small footprint, at low cost and low power, with flexible deployment for edge applications.

With an annual revenue run rate nearing $12 billion, Nvidia retains a formidable lead over other AI-accelerator chip manufacturers, especially AMD and Intel.

Intel, however, has upped its game in AI inference with the recent release of multiple specialty AI chips and the announcement that Ponte Vecchio, the company's first discrete GPU, should hit the market in 2021. There is also a range of cloud, analytics and development tool vendors that have flocked into the AI space over the past several years.

Nvidia's early lead can be attributed to the company's focus, as well as the deep software integration that enables developers to rapidly develop and scale models on its hardware. This is why many of the hyperscalers (Alphabet's Google Cloud, Microsoft's Azure, Amazon's AWS) also deliver AI inference capabilities on their infrastructure based upon Nvidia technology.

In edge-based inferencing, where AI executes directly on mobile and embedded devices, no one hardware/software vendor is expected to dominate, though Nvidia stands a very good chance of pacing the field. However, competition is intensifying from many directions. In edge-based AI inferencing hardware alone, Nvidia faces competition from dozens of vendors that either now provide or are developing AI inferencing hardware accelerators. Nvidia's direct rivals, who are backing diverse AI inferencing chipset technologies, include hyperscale cloud providers AWS, Microsoft, Google, Alibaba and IBM; consumer cloud providers Apple, Facebook and Baidu; semiconductor manufacturers Intel, AMD, Arm, Samsung, Qualcomm, Xilinx and LG; and a staggering number of China-based startups and technology companies such as Huawei.

The significant opportunities tied to the growth of AI inferencing will drive innovation and competition to develop more powerful and affordable solutions to leverage AI. With the deep resources and capabilities of most of the aforementioned competitors, there is certainly a possibility of a breakthrough that could rapidly shift the power positions in AI inferencing. However, at the moment, Nvidia is the company to beat, and I believe this strong market position will continue for at least the next 24 months.

Nvidia is placing an increased focus on low-cost edge-based inferencing accelerators as well as high-performance hardware for all AI workloads, and the company provides widely adopted algorithm libraries, APIs and ancillary software products designed for the full range of AI challenges. Any competitor would need to do all of this better than Nvidia. That would be a tall task, but certainly not insurmountable.

Daniel Newman is the principal analyst at Futurum Research. Follow him on Twitter @danielnewmanUV. Futurum Research, like all research and analyst firms, provides or has provided research, analysis, advising, and/or consulting to many high-tech companies in the tech and digital industries. Neither he nor his firm holds any equity positions with any companies cited.


Posted in Ai | Comments Off on Nvidia will dominate this crucial part of the AI market for at least the next two years – MarketWatch

How Women Can Win in the Age of AI – Yahoo News

Posted: at 12:48 am

Artificial intelligence is being adopted by companies in numerous industries and will displace jobs. It will also likely worsen gender and racial gaps. Nearly 11 percent of jobs held by women may be eliminated because of AI, warns the International Monetary Fund. McKinsey, meanwhile, estimates that 20 percent of women employed today could experience AI-driven job erosion in the coming decade. Preparing women to succeed in the age of AI must be a global priority as we work towards one key United Nations Sustainable Development Goal: to achieve gender equality and empower women and girls.

Despite the gloomy statistics, there's reason to be hopeful, if business, governments and individuals strategically work together. It helps to know that nearly 20 percent more women could be employed by 2030 than today, if they're able to maintain their current representation level in each sector of the economy. The good news is that women tend to be concentrated in fields that are growing, like health care.

But then there are industries like financial services, where women make up nearly half of the U.S. sector's workforce. Women hold only 25 percent of the financial services senior management jobs that experts say are less vulnerable to AI's impact. Frontline workers are particularly vulnerable to AI disruption, and in the U.S. 85 percent of bank tellers are women.

Some companies are proactively developing solutions. Synchrony, a Connecticut-based financial services firm, has created a deliberate AI workforce strategy and invested aggressively in training programs for its 16,500 employees. "In today's technology revolution, leaders must upskill their workforce and find, or train, the right talent," wrote CEO Margaret Keane. Nearly half of Synchrony's employees are in frontline roles such as customer service. In one program, the company pays up to $20,000 for an employee to earn a higher education degree. This potentially helps employees transition from, say, customer service into a role that's more technical, possibly in a field that's growing.


More companies should follow Synchrony's lead. Globally, business and government must boost the number of women in high school, college and graduate school, especially those who focus on science, technology, engineering and math-related fields, all of which are growing. McKinsey estimates that between 40 and 60 million women around the world may need to change occupations to remain employed, often stepping into higher-skilled roles.

The unfortunate truth is that women tend to have smaller professional networks than men. So we must develop mechanisms for women to expand their base of contacts, which could ease their transition between fields. We must also train more women who can join, and lead, the teams that actually write the algorithms for AI. A critical mass of women in these roles should reduce the likelihood such systems will have built-in bias against women, a known problem. There's no shortage of need: Only 22 percent of AI professionals are women, according to an analysis by the World Economic Forum and LinkedIn. By a variety of measures, the number of AI-related jobs is growing.


This isn't an issue just for women; all of business and society must participate. That's why companies are moving quickly to create an infrastructure for job transformation. Bank of America is another financial services company taking concrete steps to help women adapt to an increasingly AI-driven workforce. Says Cathy Bessant, the bank's chief operations and technology officer: "It's an area where we still have a lot of work to do."

This article originally appeared in Techonomy's Winter 2020 magazine.

The post How Women Can Win in the Age of AI appeared first on Worth.


Posted in Ai | Comments Off on How Women Can Win in the Age of AI – Yahoo News

Will AI liberate the IoT’s potential? – Smart Industry

Posted: at 12:48 am

By Bob Sperber

When deployed in tandem, artificial intelligence (AI) and the Internet of Things (IoT) can bring powerful new capabilities and competitive advantages, a net effect that is greater than the sum of its constituent parts.

This is the central finding of a new study conducted by SAS, Deloitte and Intel, with research from IDC based on a survey of 450 global business leaders. Entitled "AIoT: How Leaders Are Breaking Away," the survey report indicates that this combination of technologies, dubbed the Artificial Intelligence of Things, represents a key competitive advantage that already has passed from pilot-scale tests to early rollouts.

As companies grow into a fuller implementation of IoT, they begin to realize that the tremendous volumes of data generated are difficult to tame. In this context, the combination of AI with IoT is a natural fit for gaining insights that can help advance not just operational goals but business strategy.

Consider some of the findings the research brought to light:

99% of respondents said, in aggregate, the benefits of using AI together with their IoT solutions met or exceeded expectations.

90% of respondents who reported heavy use of AI for IoT operations said it exceeded their expectations for value.

35% of senior leaders cited increased revenue as the single most important area of improvement they expected to achieve from their IoT efforts.

Overall, projects that combine IoT with AI are having a greater-than-expected impact in operations, the enterprise and ultimately, the bottom line.

The expectations game

It came as a bit of a surprise to IDC's Maureen Fleming, program vice president for intelligent process automation, that leaders value the addition of AI to IoT projects as highly as they do. She confessed to Smart Industry that in her travels and client encounters, she's been barraged with negative feedback, to the point where "it seems like everywhere I go people are talking about the high failure rate of digital transformation efforts."

Naturally, she expected lower engagement among respondents. "But what we found true is the exact opposite."

One possible explanation for the surprisingly healthy attitude toward this thing called AIoT is that it's the IT and operations leaders who fret over the details more than the CEO's office. According to the research, 56% of senior leaders believe their AIoT projects significantly exceeded expectations, a margin 18% greater than operations-related teams and 31% greater than data scientists and IT leaders. Interestingly, operational leaders were the greatest proponents of IoT alone (Figure 1).

Figure 1

"In my experience, senior executives tend to be a lot more optimistic than those at other levels in the organization; it's kind of a requirement for the job," says Shak Parran, partner at Deloitte Canada and analytics leader for its Omnia AI practice. Below the top floor, he says, the practical reality of putting these capabilities to work can make data scientists a little more pessimistic. "They know that their data has to be cleaned up, they have to teach machines to do the right things, their processes have to be optimized, and so on. They see the obstacles, because that's what they're responsible for navigating."

The good news is that this attitudinal gap may close over time, if an observation by Melvin Greer, Intel's chief data scientist, comes to fruition: "Over the past 24-36 months, we've seen ample evidence of chief data officers moving into the CEO suite."

A competitive AI-vantage

As implementation teams have matured, so has the likelihood of success with digital transformation initiatives. For successful projects, focus shifts from connecting devices and collecting new and different data to the next step of the journey: analytics. Moving from analytics to the use of AI is another step forward in the ability to filter, correlate and uncover complex relationships. The researchers confirm that industrial firms are indeed moving from proofs of concept and pilot tests to production systems with analytical approaches that incorporate AI.

The key to driving long-term, sustainable value with AIoT lies in building experience with large-scale rollouts, with higher levels of automation, throughout the organization. And the only way to reach that scale with AIoT is to increase the level of automation, according to Oliver Schabenberger, COO and CTO at SAS. "So many CIOs I talk with say automation is a primary focus, to make IoT-related analytics insights consumable by business analysts and others, not just the data scientists."

In turn, the reason to scale up and automate is to gain a competitive advantage. And, the report shows, companies that use AI and IoT together are more competitive than those using only IoT.

When asked about their success across six major initiatives, from speeding up operations to improving productivity and reducing costs, those respondents who used AI in conjunction with IoT said they were significantly more successful than counterparts who used only IoT. For instance, 53% of leaders reported significant value in using AIoT for speeding up operations, as compared to 32% using IoT without AI. Roughly similar numbers hold for initiatives to improve employee productivity, streamline operations and provide new digital services and innovations. In all cases, there was a double-digit gap between users of AIoT and IoT alone (Figure 2).

Figure 2

Of the six initiatives examined, AI was seen to be least important in the area of reducing costs/expenses.

This is not unexpected, according to Jason Mann, vice president of IoT at SAS. "Companies are primarily focused on three core business objectives: achieving higher levels of operational efficiency, improving top-line growth and enhancing customer engagement," he says. While cost-cutting is important, it's typically not a strategic business driver.

Companies that refresh their data at least once a day were asked about the role AI plays in rapid, tactical planning. When AI isn't in play, IoT data is overwhelmingly applied to operational decisions (68%); only 12% use IoT for day-to-day planning-oriented decisions. But with the introduction of AI, the number of respondents using this data for day-to-day planning nearly triples, increasing to 31%.

IDC's Fleming, calling this the most interesting finding of the study, explained that improving the speed of sensor data refresh, combined with AI, expands an organization's ability to focus on immediate planning, while also quickly identifying and resolving operational problems. The combination produces greater agility and more efficiency.

More generally, those who use AI this way broaden their toolset to address issues of supply and demand, product quality, merchandising and more, says Intel's Gadgil: "They're focusing on issues like productivity, but they're also looking for the next opportunities for transformation in their business; they're pushing their organizations to connect the dots and see how some of these new technologies can contribute."

Increased revenue is job one

Leading companies that use AI to leverage data beyond their own operations and into the supply chain are better able to "drive value back to your customer, and build a portfolio of data-driven services," says Bill Roberts, senior director in SAS's IoT division. Further, after 12-24 months of using AI + IoT, users reported decreased costs or expenses (85%), improved employee productivity (87%) and streamlined operations (86%).

"Logic and intelligence are now going to be distributed across the architecture, right back into the service center, onto the device or truck or piece of equipment," Roberts says.

Among the benefits sought from their IoT efforts, increased revenue topped the list for senior leaders across geographies, industries, and companies of all sizes. And if the improved results reported by those who have begun to add AI to their IoT connectivity projects are any indication, the AIoT has a bright future indeed.

More:

Will AI liberate the IoT's potential? - Smart Industry


AI and the future of design – Information Age


Artificial intelligence is disrupting industries across the board. In healthcare, AI technologies are outperforming humans in diagnosing disease, particularly when it comes to spotting malignant tumors. In marketing, artificial intelligence analyzes users' behavioral patterns, enabling businesses to target customers with highly personalized content.

AI could disrupt the design industry in a variety of ways.

The design industry is yet another area where AI is making huge strides. Just recently, an AI tool pulled from a database of dozens of patterns and colors to create 7 million unique packaging designs for Nutella.

Let's take a closer look at how artificial intelligence is shaping the future of design.

In addition to generating its own designs, as it did for Nutella, AI is the driving force behind modern web design. Artificial design intelligence (ADI) systems are democratizing website development, generating functional, attractive websites from the bottom up.

Wix and Bookmark both offer AI platforms that allow websites to intelligently design themselves; the customer is responsible for choosing the site's name and answering a few quick questions, but AI will do the rest. Designers and developers no longer need to build websites completely from scratch, and they can create attractive sites regardless of their level of experience with web building or design.


In addition to designing the face of web pages, AI builds the image of a brand. Artificial intelligence tools like Tailor Brands can gather data from around the web to design customized logos in seconds, requiring from the user nothing but their business's name and a quick description of their company or industry. In this sense, AI has made brand design more accessible for emerging entrepreneurs with small budgets and little to no design experience.

Designers working with AI can create designs faster and more cheaply. Artificial intelligence tools can analyze vast amounts of data within minutes and suggest designs accordingly. Airbnb is already taking advantage of this function, feeding wireframe sketches to AI machines which, in turn, can generate complete images. This capability can be used to create several different prototype designs that can then be A/B tested with users.
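Comparing prototype designs with users typically reduces to a test of conversion rates. The short Python sketch below is our own illustration of that step, with made-up numbers (it is not Airbnb's tooling): a standard two-proportion z-test deciding whether design B genuinely outperforms design A.

```python
import math

def ab_test(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test comparing conversion rates of two prototypes.
    Returns each rate and the z statistic (|z| > 1.96 is significant at 5%)."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    # Pooled rate under the null hypothesis that both designs convert equally.
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical numbers: design B converts 150/1000 users vs. 120/1000 for A.
p_a, p_b, z = ab_test(120, 1000, 150, 1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")  # z = 1.96, borderline significant
```

With more prototypes in play, the same idea extends to multiple comparisons, which is exactly the kind of bookkeeping an AI-assisted design pipeline can automate.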


AI saves designers time by automating mundane tasks, allowing designers to instead focus on higher-level work. Rather than designing banners and product labels in multiple languages, for instance, designers can outsource the work to AI. The designers, in turn, can approve or reject the suggested graphics, saving them enormous amounts of time.

Artificial intelligence not only designs two-dimensional graphics like web pages and branding materials, but it also builds three-dimensional graphics. AI is being used to build three-dimensional architectural models, facilitating the work of architects by enabling them to create a detailed, lifelike blueprint of their building plan. AI is also being used to build worlds for virtual reality, augmented reality, and mixed reality. Artificial intelligence tools not only design the graphics for these worlds, but they also react intelligently to user interaction and behavior.

Some people worry that, with the advanced design capabilities of artificial intelligence, AI will eventually replace human designers altogether. The reality, however, is that AI will play a more complex and nuanced role in design. Artificial intelligence tools will facilitate the work of human designers while remaining a tool, and not a replacement, for human designers.

Think about it: standard machines don't replace workers altogether but, instead, automate basic tasks and help workers operate more efficiently. Artificial intelligence programs will operate the same way, automating mundane tasks and even making original suggestions while still requiring a human designer to oversee the process and make the most important decisions.


So while AI isn't going to replace designers, it is going to replace something else: traditional methods of doing business. AI is changing the way people lead, manage, and make decisions.

In the world of design, this means using market data to make informed hypotheses about which designs work best for business. AI platforms can analyze which colors, shapes, typefaces, and other visuals perform best across different industries. That way, rather than coming up with a design based on limited research and intuition, human designers can use artificial intelligence tools to make design choices based on real, hard data.

In the next few years, designers will increasingly harness the power of artificial intelligence to inform their design decisions. AI tools won't replace designers but, on the contrary, will facilitate designers' work so they can focus on bigger-picture tasks. Artificial intelligence is already active across the design industry, generating graphics for websites, brands, virtual reality and more. As AI becomes stronger, it won't just show us what we already know; it'll also present us with novel ideas and open new avenues for thought.

Written by Tailor Brands, an automated, AI-powered logo design and branding platform.

Read more:

AI and the future of design - Information Age


AI’s Impact in 2020: 3 Trends to Watch – TDWI



The popularity of AI and ML has wide-reaching effects on your enterprise. Here are three important trends driven by AI to look out for next year.

[Editor's note: Upside asked executives from around the country to tell us what top three trends they believe data professionals should pay attention to in 2020. Ryohei Fujimaki, Ph.D., founder and CEO of dotData, focused on AI and ML.]

The Rise of AutoML 2.0 Platforms

As the need for additional AI applications grows, businesses will need to invest in technologies that help them accelerate the data science process. However, implementing and optimizing machine learning models is only part of the data science challenge. In fact, the vast majority of the work that data scientists must perform is often associated with the tasks that precede the selection and optimization of ML models, such as feature engineering -- the heart of data science.

This means that organizations will need to look for new, more sophisticated automated machine learning platforms. These "AutoML 2.0" tools will need to provide end-to-end automation, from automatically creating and evaluating thousands of features (AI-based feature engineering) to the operationalization of ML and AI models -- and all the steps in between.
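The "automatically creating and evaluating thousands of features" step can be illustrated in miniature. The Python sketch below is a toy of our own (not dotData's product): it generates pairwise interaction features from raw columns, then ranks them by correlation with the prediction target, recovering a hidden interaction from synthetic data.

```python
import itertools
import math
import random

def generate_features(rows):
    """Toy automated feature engineering: derive candidate interaction
    features (pairwise products of raw columns) before evaluating them."""
    n_cols = len(rows[0])
    candidates = {}
    for i, j in itertools.combinations(range(n_cols), 2):
        candidates[f"x{i}*x{j}"] = [r[i] * r[j] for r in rows]
    return candidates

def correlation(xs, ys):
    """Pearson correlation, used here to score candidates against the target."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_features(candidates, target):
    """Order generated features by |correlation| with the prediction target."""
    return sorted(candidates,
                  key=lambda name: -abs(correlation(candidates[name], target)))

# Synthetic data with a hidden x0*x1 interaction driving the target.
random.seed(1)
rows = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
target = [r[0] * r[1] + 0.1 * random.gauss(0, 1) for r in rows]
best = rank_features(generate_features(rows), target)[0]
print(best)  # the hidden interaction ranks first: x0*x1
```

Real AutoML 2.0 platforms explore far larger candidate spaces (aggregations, time windows, joins across tables) and score features against the actual model rather than a simple correlation, but the generate-then-evaluate loop is the same.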

The Shift to Automation Will Intensify Focus on Privacy and Regulations

As AI and ML models become easier to create using advanced "AutoML 2.0" platforms, data scientists and citizen data scientists will begin to scale ML and AI model production in record numbers. This means organizations will need to pay special attention to data collection, maintenance, and privacy oversight to ensure that the creation of new, sophisticated models does not violate privacy laws or cause privacy concerns for consumers.

As a result, in 2020 we will see an emergence of new tools that will enable data scientists to have greater transparency without sacrificing accuracy. This shift to a more "white box" approach to data science will deliver more transparent and accurate models, thereby empowering businesses to make data-centric decisions and accelerating their digital transformations.

More Citizen Data Scientists Doing Data Science

Big data will continue to be on the upsurge in 2020 with a growing demand for skilled data scientists and a continued shortage of data science talent -- creating ongoing challenges for businesses implementing AI and ML initiatives. Although AutoML platforms have alleviated some of the pressure on data science teams, they have not resulted in the productivity gains organizations are seeking from their AI and ML initiatives. As such, companies need better solutions to help them leverage their data for business insights.

In 2020, we will see a swift adoption of new, broader, "full-cycle" data science platforms that will significantly simplify tasks that formerly could only be completed by data scientists and boost the productivity of citizen data scientists -- business analysts and other data experts who have domain expertise but are not necessarily skilled data scientists. This continued democratization will lead to new use cases that are closer to the needs of business users and will enable faster time-to-market for AI applications in the enterprise.

About the Author

Dr. Ryohei Fujimaki is the founder and CEO of dotData. Dr. Fujimaki was the youngest research fellow ever in NEC Corporation's 119-year history, a title held by only six individuals among more than 1,000 researchers. During his tenure at NEC, Ryohei was heavily involved in developing many cutting-edge data science solutions with NEC's global business clients, and was instrumental in the successful delivery of several high-profile analytical solutions that are now widely used in industry. You can reach the author via email or LinkedIn.

Go here to read the rest:

AI's Impact in 2020: 3 Trends to Watch - TDWI


Andrew Yang Is Right The US Is Losing The AI Arms Race – Forbes



On November 20, 2019, Andrew Yang, during a Democratic candidates debate, stated that the US is losing the AI arms race to China. A little over a year ago, I argued the same thing. Yang is right (after Yang's comment, Pete Buttigieg agreed). A couple of other candidates on the stage also wanted to chime in about AI, but were cut off by a commercial break.

Let me ask again: what took so long? Artificial intelligence and machine learning are transformative technologies that level many playing fields, so many, in fact, that a tiny nation can militarily compete with a great military power like the US. How? The smartest weapon is not the largest weapon. It's code. It's smart code. It's smart bots. It's cyberwarfare. Software grounded the Boeing 737 Max. The future of war is as much about AI and ML as payloads. The same goes for manufacturing, supply chain optimization and process automation: it's all about digital technology, especially AI/ML.

The war for global leadership in artificial intelligence and machine learning is well underway, and the US is losing.

Is the AI battlefield well understood? Not even close, at least by the leaders who develop national strategies or by the citizens of the United States, who all need to spend some time on the subject.

Since they're mostly unaware of the war, US leaders have no clear strategy to prevent a historic loss. Imagine the implications of electing politicians who have no idea a deadly war is underway, or who think the arms race is all about aircraft carriers and fighter squadrons.

The Threat


The Chinese have a very public, very deep, extremely well-funded commitment to AI. Air Force General VeraLinn Jamieson says it plainly: "We estimate the total spending on artificial intelligence systems in China in 2017 was $12 billion. We also estimate that it will grow to at least $70 billion by 2020." According to the Obama White House report in 2016, China publishes more journal articles on deep learning than the US and has increased its number of AI patents by 200%. China is determined to be the world leader in AI by 2030.

Listen to what Tristan Greene, writing in TNW, concludes about the US's commitment to AI: "Unfortunately, despite congressional efforts to get the conversation started at the national level in the US, the White House's current leadership doesn't appear interested in coming up with a strategy to keep up with China." It gets worse: "China has allocated billions of dollars towards infrastructure to house hundreds of AI businesses in dedicated industrial parks. It has specific companies, the Chinese counterparts to US operations like Google and Amazon, working on different problems in the field of AI. And it's regulating education so that the nation produces more STEM workers. But perhaps most importantly, China makes it compulsory for businesses and private citizens to share their data with the government, something far more valuable than money in the world of AI."

Greene's scary bottom line? Meanwhile, in the US, the Trump administration has shown little interest in discussing its own country's AI, yet may soon have to talk to China's.

According to Iris Deng, China "ranks first in the quantity and citation of research papers, and holds the most AI patents, edging out the US and Japan," and "has not been shy about its ambitions for AI dominance, with the State Council releasing a road map in July 2017 with a goal of creating a domestic industry worth 1 trillion yuan and becoming a global AI powerhouse by 2030."

It's obvious: "Without more leadership from Congress and the President, the U.S. is in serious danger of losing the economic and military rewards of artificial intelligence (AI) to China." That's the somber conclusion of a report published September 25 by the House Oversight and Reform IT subcommittee.

Jerry Bowles says it clearly: "The U.S. has traditionally led the world in the development and application of AI-driven technologies, due in part to the government's commitment to investing heavily in research and development. That has, in turn, helped support AI's growth and development. In 2015, the United States led the world in total gross domestic R&D expenditures, spending $497 billion. But, since then, neither Congress nor the Trump administration has paid much attention to AI, and government R&D investment has been essentially flat. Meanwhile, China has made AI a key part of its formal economic plans for the future."

The Response


The Trump administration finally said something about the most important technology in a generation. The Executive Order on Maintaining American Leadership in Artificial Intelligence, issued on February 11, 2019, is hopefully just the opening shot in the AI war, an implementation war the US is arguably already losing, especially in areas like robotics.

So what does the Trump administration want to do?

1. Invest in AI Research and Development (R&D)

The initiative focuses on maintaining our Nation's strong, long-term emphasis on high-reward, fundamental R&D in AI by directing Federal agencies to prioritize AI investments in their R&D missions.

2. Unleash AI Resources

The initiative directs agencies to make Federal data, models, and computing resources more available to America's AI R&D experts, researchers, and industries to foster public trust and increase the value of these resources to AI R&D experts, while maintaining the safety, security, civil liberties, privacy, and confidentiality protections we all expect.

3. Set AI Governance Standards

As part of the American AI Initiative, Federal agencies will foster public trust in AI systems by establishing guidance for AI development and use across different types of technology and industrial sectors.

4. Build the AI Workforce

To prepare our workforce with the skills needed to adapt and thrive in this new age of AI, the American AI Initiative calls for agencies to prioritize fellowship and training programs in computer science and other growing Science, Technology, Engineering, and Math (STEM) fields.

5. International Engagement and Protecting our AI Advantage

Federal agencies will also develop and implement an action plan to protect the advantage of the United States in AI and technology critical to United States national and economic security interests against strategic competitors and foreign adversaries.

While all of these initiatives are welcome, there's no new money targeted directly at the above five initiatives or for AI R&D generally. Instead, the American AI Initiative directs agencies to prioritize, open, and share investments within existing research programs.

As described by Will Knight in the MIT Technology Review, the initiative "is designed to boost America's AI industry by reallocating funding, creating new resources, and devising ways for the country to shape the technology even as it becomes increasingly global"; however, while the goals are lofty, the details are vague, and it will not include a big lump sum of funding for AI research. As Knight points out, other nations, including China, Canada, and France, have made bigger moves to back and benefit from the technology in recent years.

As reported by Cade Metz in the New York Times, General James Mattis, before he resigned his position as the US Secretary of Defense, implored Trump to create a national strategy for artificial intelligence; Mattis argued that the United States was not keeping pace with the ambitious plans of China and other countries.

Ben Brody reports in Bloomberg that Senator Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, praised some aspects of the order but criticized its tone. "The tone of this executive order reflects a laissez-faire approach to AI development that I worry will have the U.S. repeating the mistakes it has made in treating digital technologies as inherently positive forces, with insufficient consideration paid to their misapplication," he said.

But where's the funding? Why the vagueness about investments? Why is funding the responsibility of Congress? Why not White House-directed funding for the American AI Initiative? It would be hard to improve upon the observation made by William Carter from the Center for Strategic and International Studies, as reported by Kaveh Waddell in Axios: "If they can find $5 billion for a border wall, they should be able to find a few billion for the foundation of our future economic growth."

Waddell further reports that so far, U.S. funding for AI has been anemic: an analysis from Bloomberg Government found that the Pentagon's R&D spending on AI increased from $1.4 billion to about $1.9 billion between 2017 and 2019. DARPA, the Pentagon's research arm, has separately pledged $2 billion in AI funding over the next five years. It's hard to put a number on the entire federal government's AI spend, says Chris Cornillie, a Bloomberg Government analyst, because most civilian agencies don't mention AI in their 2019 budget requests, but these numbers pale in comparison to estimates of Chinese spending on AI. Exact numbers are hard to come by, but just two Chinese cities, Shanghai and Tianjin, have committed to spending about $15 billion each.

More recently, the proposed 2020 budget sees more increases in AI R&D. According to Cornillie, "the 2020 budget has allocated almost $5B for AI R&D (for the Pentagon and all other US government agencies). From FY 2018 to 2020, the Pentagon's budget request for AI R&D rose from $2.7 billion to $4.0 billion ... (but) when you look at what Google or Apple alone are investing in AI, $5 billion doesn't seem that large of a figure, especially if you put that in the context of the federal government's $1.37 trillion discretionary budget request. But if you look at year-over-year spending, it appears that agencies are getting the message that investments in AI will be critical in terms of economic competitiveness and national security." Finally! While Cornillie is hopefully right and the US is waking up, $5B is still dwarfed by Chinese investments in AI.

It's also unclear how this initiative builds significantly upon the Obama Administration's National Artificial Intelligence Research and Development Strategic Plan, issued in October 2016.

So now what? General Mattis is right: we need a national AI strategy that answers the Chinese plan, and we need significant, long-term dedicated funding.

Immediate Steps

A coordinated, heavily funded American response is way overdue. But there's more:

These steps represent a good start toward turning the tide of the AI war, a war the US cannot afford to lose.

I restate all this now because AI is finally getting the attention of lawmakers in Washington and across the country, even in presidential debates! Hopefully, calls for AI strategies and additional funding are more than just campaign promises.

Go here to see the original:

Andrew Yang Is Right The US Is Losing The AI Arms Race - Forbes


Nvidia Moves Clara Healthcare AI To The Edge – The Next Platform


Nvidia has for years made artificial intelligence (AI) and its various subsets, such as machine learning and deep learning, a foundation of future growth, and sees it as a competitive advantage against rival Intel and a growing crop of smaller chip makers and newcomers looking to gain traction in a rapidly evolving IT environment. And for any company looking to make its mark in the fast-moving AI space, the healthcare industry is an important vertical to focus on.

We at The Next Platform have talked about the various benefits AI and deep learning can bring to an industry that is steeped in data and in need of anything that can help it gain real-time information from all that data. AI will touch all segments of healthcare, from predictive diagnostics that will drive prevention programs and precision surgery for better outcomes to increasingly precise diagnoses that will help lead to more personalized treatments and systems that will create more efficient operations and drive down costs. It will impact not only hospitals and other healthcare facilities but also other segments of the industry, from pharmaceuticals and drug development to clinical research.

Nvidia has put a focus on healthcare. For years the company has worked with GE to support its medical devices, and a year ago announced it was working with the Scripps Research Translational Institute to develop tools and infrastructure that leverage deep learning to fuel the development of AI-based software to expand the use of AI beyond medical imaging, which in large part has been the primary focus of AI in medicine.

However, in a hyper-regulated environment like healthcare, where the privacy of patient data is paramount, any AI-based technology needs to ensure that the data is protected from cyber-criminals who want to steal or leverage it, as illustrated by the increasing targeting of healthcare facilities by ransomware campaigns, in which bad actors take a hospital's data hostage by using malware to encrypt it, handing over the decryption key only after the facility pays the ransom.

At the Radiological Society of North America (RSNA) conference this weekend, Nvidia laid out a plan aimed at allowing hospitals to train machine learning models on the mountains of confidential information they hold without the risk of exposing the data to outside parties. The company unveiled its Clara Federated Learning (FL) technique, which will enable organizations to leverage training while keeping the data housed within the healthcare facility's systems.

Nvidia describes Clara FL as a reference edge application that creates a distributed and collaborative AI training model, a key capability given that healthcare facilities house their data in disparate parts of their IT environments. It's based on Nvidia's EGX intelligent edge computing platform, a software-defined offering introduced in May that includes the EGX stack, which comes with support for Kubernetes and contains GPU monitoring tools and Nvidia drivers, managed by Nvidia's GPU Operator tool. The distributed NGC-Ready for Edge servers are built by OEMs leveraging the Nvidia technologies, and can not only perform training locally but also collaborate to create more complete models of the data.

The Clara FL application is housed in a Helm chart to make it easier to deploy within a Kubernetes-based infrastructure, and the EGX platform provisions the federated server and the clients that it will collaborate with, according to Kimberly Powell, vice president of Nvidia's healthcare business.

The application containers, initial AI model and other tools needed to begin the federated learning are delivered by the platform. Radiologists use the Nvidia Clara AI-Assisted Annotation software development kit (SDK), which is integrated into medical viewers like 3D Slicer, Fovia and Philips IntelliSpace Discovery, to label their hospital's patient data. Hospitals participating in the training effort use the EGX servers in their facilities to train the global model on their local data, and the results are sent to the federated training server via a secure link. The data is kept private because only partial model weights are shared rather than patient records, with the global model built through federated averaging.

The process is repeated until the model is as accurate as it needs to be.
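The federated averaging at the heart of this scheme is simple to sketch. The toy Python simulation below is our own illustration of the general FedAvg idea (not Nvidia's Clara code): three simulated "hospitals" each run a few epochs of local logistic-regression training on private records, and a central server averages only the returned weight vectors, weighted by dataset size.

```python
import math
import random

def local_train(weights, data, lr=0.1, epochs=5):
    """One client's local update: logistic-regression SGD on its private
    records. Only the updated weights ever leave the site."""
    w = list(weights)
    for _ in range(epochs):
        for features, label in data:
            z = sum(wi * xi for wi, xi in zip(w, features))
            pred = 1.0 / (1.0 + math.exp(-z))
            w = [wi - lr * (pred - label) * xi for wi, xi in zip(w, features)]
    return w

def federated_average(client_weights, client_sizes):
    """Server-side FedAvg: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
            for k in range(dim)]

# Simulated rounds: three "hospitals" train locally; only weights are shared.
random.seed(0)
make_record = lambda: ([random.gauss(0, 1) for _ in range(4)], random.randint(0, 1))
clients = [[make_record() for _ in range(50)] for _ in range(3)]
global_w = [0.0] * 4
for _ in range(10):
    updates = [local_train(global_w, data) for data in clients]
    global_w = federated_average(updates, [len(d) for d in clients])
print(len(global_w))  # 4
```

In a real deployment the local records would never appear in the same process; only the weight vectors would traverse the secure link, which is what keeps patient data inside each facility.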

As part of all this, Nvidia also announced Clara AGX, an embedded AI developer kit that is powered by the company's Xavier chips, which are also used in self-driving cars and can use as little as 10 watts of power. The idea is to offer lightweight and efficient AI computers that can quickly ingest huge amounts of data from large numbers of sensors to process images and video at high rates, bringing inference and 3D visualization capabilities to the picture.

The systems can be small enough to be embedded into a medical device, be small computers adjacent to the device or full-size servers, as pictured below:

"AI in edge computing isn't only for AI development," Powell said during a press conference before the show started. "AI wants to live with the device, and AI wants to be deployed very close, in some cases, to medical devices for several reasons. One: AI can deliver real-time applications. If you think about an ultrasound or an endoscopy, where the data is streaming from a sensor, you want to be able to apply algorithms, and many of them at once, in a livestream. They want to embed the AI. You may also want to put an AI computer right next to a device. This is a device that is already living inside the hospital. You can augment that with an AI-embedded computer. The third way that we can imagine edge computing, and how we're demonstrating that at RSNA today, is by connecting many devices and streaming devices to an EGX platform server that's living in the datacenter. We really see three models for edge and devices coming together. One is embedded into the instrument. The next is right next to the instrument for real-time, and the third is to augment many, many devices around the hospital, fleets of devices in the hospital, to have augmented AI in the datacenter."

At the show, Nvidia also showed off a point-of-care MRI system built by Hyperfine Research that leverages Clara AGX. The device is about three feet wide and five feet tall:

The Hyperfine system uses ordinary magnets that do not need power or cooling to produce an MRI image; it can run in an open environment and does not need to be shielded from electromagnetic fields around it. The company is still testing it, but early tests indicate that it can image brain disease and injury using scans that are about five to ten minutes long, according to Hyperfine.


Original post:

Nvidia Moves Clara Healthcare AI To The Edge - The Next Platform


TEACHER VOICE: AI in school, better than the ballpoint pen? – The Hechinger Report


The Hechinger Report is a national nonprofit newsroom that reports on one topic: education.

As the artificial intelligence revolution comes to education, teachers are rightly concerned that AI in schools will replace the human assessment of human learning.

However, if developers work in tandem with teachers on the ground, fields like assisted writing feedback can evolve to make instruction more effective, more personalized and more human.

I used to grade papers by hand, a black Bic my tool of choice, etching feedback in the margins, scrawling questions and circling errors. It took innumerable hours, and I accepted the drudgery as inevitable in the life of an English teacher.

It had its rewards, of course, as I saw students gaining skills, developing their voices and flashing brilliance. Less rewarding was grinding out the feedback on standard mechanics, figuring out who needed what remediation and wondering why my efforts weren't more effective.

Over the years, I learned to prioritize positive comments in every student's paper, in the margins and final notes, mindful of student affect and motivation. I stopped circling every flaw, and shifted to finding patterns to aid instruction. I focused comments and grades on the deeper content of student work, with less attention to surface-level issues in the writing.

I began teaching the skills of self-regulated learning and self-regulated writing. Occasionally, I had to catch, counsel and retrain the plagiarist, who sometimes changed every other word from an internet source. I could spend two hours tracking down the original text of one dishonest paper.

That was my life when I started teaching high school 20 years ago.

Today, with AI in schools, assisted writing feedback makes my job easier and more rewarding. Feedback platforms can help teachers do what they love doing: developing writers, engaging learners, and helping students meet their goals for college and careers.

Years ago, I took notes on which students made which errors, and photocopied remedial exercises to target instruction as best I could. Today, I open my Clever portal and click on Quill. My students are already loaded into the system, and I simply assign a diagnostic.


Quill figures out what each student needs, and then individualizes lessons. I can assign a diagnostic by level, targeting English language learners or advanced native speakers. Quill improves student writing and frees me up for more advanced instruction and personalized learning.

I first began using Turnitin.com for plagiarism prevention and detection, and that service has saved me hundreds of hours in tracking down borrowed material. Today, I use it more for its feedback tools. I can attach custom rubrics to various assignments or choose pre-built ones aligned with Common Core standards.

I author my own feedback codes, and when students mouse over them, they'll get my explanation of how to repair an error or replicate success. I use the platform's automated spelling and grammar checkers, but I massage the feedback to avoid spilling red ink all over students' papers, dismissing or deleting some categories completely, according to each student's needs.

I embed positive feedback in the margins, and type customized comments and questions more quickly than I could write by hand. For English language learners who need targeted feedback, I check the metrics and focus my corrections.

Today, my feedback is clearer, more accessible, and completely paperless. I can't lose a graded paper anymore, and neither can my students. Feedback is more effective when it's more accessible and immediate.

At the moment, assisted writing platforms are getting better at seeing what's wrong with the surface of a student's writing, but not what a student is doing well on that surface, or anything below. Students need teachers to see what strengths they bring, new insights they offer and the deep understandings they forge of complex texts. Students also need time to write without judgment, to tell their own stories and express their thoughts and feelings, with a focus on the whole student, not the surface of their texts.

For their academic work, however, students need more reliable and immediate feedback with clear paths from remedial skills to advanced work. As more teachers use assisted feedback, and take part in its development, platforms will get better at helping students write well and grow in self-confidence. Right now, such platforms are often telling students they misspelled their own names.

Related: Ten jobs that are safe from robots

One day soon, they might be picking up repetitious sentence structures or skillful rhythms in syntax. We might be crowd-sourcing positive reinforcements among teachers and sharing rubrics that push for excellence. Assisted feedback might communicate across platforms and automate the boilerplate instruction on standard grammar and mechanics. It could also make more transparent the biases embedded in standard English, for students and for teachers, while recognizing vernaculars and promoting both code-switching and code-meshing.

Artificial intelligence cannot replace good teaching, nor can it provide high-quality feedback on its own. AI can assist our work, streamline processes and connect our efforts at improving the craft. It can help teachers and researchers develop more effective writing instruction, through qualitative evaluation, and in robust studies with big data sets, randomized controls and rapid analysis.

With more researchers and educators of color involved in its development, AI can help us better serve marginalized populations through more culturally competent writing instruction, to help close opportunity gaps. Platforms can help coach students in self-regulated writing strategies, and help teachers explicitly teach them. Assisted writing feedback can become a new tool in the teacher's hand, as we encourage students to express their genius and author their own futures.

This story about the use of artificial intelligence in schools was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up here for our newsletter.

Robert Comeau teaches senior English at Another Course to College, a college-preparatory high school in the Boston Public Schools network.

TEACHER VOICE: AI in school, better than the ballpoint pen? - The Hechinger Report
