AI is coming to schools, and if we're not careful, so will its biases – Brookings Institution

Posted: September 27, 2019 at 7:49 am

Artificial intelligence has transformed almost every aspect of our lives, from driverless cars to Siri, and soon, education will be no different. The automation of a school or university's administrative tasks and the customization of student curricula are not only possible, but imminent. The goal is for our computers to make humanlike judgments and perform tasks that make educators' lives easier, but if we're not careful, these machines will replicate our racism, too.

Kids from Black and Latino or Hispanic communities, who are often already on the wrong side of the digital divide, will face greater inequalities if we go too far toward digitizing education without considering how to check the inherent biases of the (mostly white) developers who create AI systems. AI is only as good as the information and values of the programmers who design it, and their biases can ultimately lead to both flaws in the technology and amplified biases in the real world.

This was the topic at the recent conference "Where Does Artificial Intelligence Fit in the Classroom?," put on by the United Nations General Assembly, the United Nations Educational, Scientific and Cultural Organization (UNESCO), the think tank WISE, and the Transformative Learning Technologies Lab at Teachers College, and hosted by Columbia University.

While many argue that the efficiencies of AI can level the playing field in classrooms, we need more due diligence and intellectual exploration before we deploy the technology to schools. Systemic racism and discrimination are already embedded in our educational systems. Developers must intentionally build AI systems through a lens of racial equity if the technology is going to disrupt the status quo. We've already seen the risks of using biased algorithms in the courtroom: Software used to forecast the risk of reoffending incorrectly marks Black defendants as future criminals at twice the rate of white defendants.

Previous attempts at making education more efficient and equitable demonstrate what can go wrong. Standardized testing promised an innovation that was irresistible to an earlier generation of education leaders hoping to democratize the system, and it allowed schools and teachers to be held accountable when students didn't measure up to expectations. But the designers of these assessment tools didn't consider how the racism and inequality rife in American society would be baked into the tests if care wasn't taken to make them fairer.

Overuse of standardized tests has helped concentrate wealthy people in select colleges and universities, stifling inclusion of and investment in talented people who happen to be lower-income. To fix this, the College Board, the nonprofit that prepares the SAT, announced a potential solution in May: the planned rollout of an "adversity score" assigned to each student who takes the college admissions exam. The score was to comprise 15 factors, including neighborhood and demographic characteristics such as crime rate and poverty, to be added to each student's result. However, bending to a wave of criticism, the College Board retreated from its plan in August.

Recent attempts to introduce AI in schools have led to improvements in assessing students' prior and ongoing learning, placing students in appropriate subject levels, scheduling classes, and individualizing instruction. Such advances enable differentiated lesson plans for a diverse set of learners. But that sorting can be fraught with pernicious consequences if the algorithms don't consider students' nuanced experiences, trapping low-income and minority students in low-achievement tracks, where they face worse instruction and reduced expectations.

The spread of AI technology can also tempt districts to replace human teachers with software, as is already happening in such places as the Mississippi Delta. Faced with a teacher shortage, districts there have turned to online platforms. But students have struggled without trained human teachers who not only know the subject matter but know and care about the students.

Overzealous tech salesmen haven't helped matters. The educational landscape is now littered with virtual schools because ed-tech companies promised that they would reach the hard-to-educate as well as Black and Latino or Hispanic students, and create efficiencies in low-funded districts. Instead, many of the startups have been hit by scandal: After nearly 2,000 students earned zero credits last year, two online charter schools in Indiana were forced to close.

Artificial intelligence could still provide real benefits. For example, it could free teachers from time-consuming chores like grading homework. But AI won't work if it's intended as a way to avoid the hard work of recruiting skilled teachers, especially those who look like the kids they're working with. For the rise of robots to equate to progress, it should improve work conditions and increase job satisfaction for teachers. AI should reduce attrition and increase the desirability of the job. But if technologists don't work with Black teachers, they won't know what conditions need to change to maximize higher-order thinking and tasks.

We must diversify the pool of technology creators to incorporate people of color in all aspects of AI development, while continuing to train teachers on its proper usage and building in regulations to punish discrimination in its application.

AI will continue to disrupt long-standing institutions; the education system will face this transformation all the same. But with diligent oversight, these new systems can be utilized to produce satisfied teachers, accomplished students, and, finally, equity in the classroom.

China is eroding the U.S. edge in AI and 5G – Axios

The U.S. has the upper hand in pivotal emerging technologies like artificial intelligence and quantum computing, in part because American universities and companies boast world-class talent. But experts say its dominance could soon slip.

Why it matters: The country that reigns in AI, 5G or quantum cryptography will likely have a huge military and economic advantage over its adversaries for years to come and will get to shape the technologies as they are implemented the world over.

A new report from the Council on Foreign Relations identifies the areas in which China is rapidly closing the gap with the U.S. "Slowing down China is not enough," says Adam Segal, an expert on emerging technologies and national security at CFR. "The U.S. needs to do significantly more at home."

What to watch: The U.S. needs to pull in scientists from around the globe to compete, Segal says. "China is producing 3 times as many STEM graduates at the undergrad level."

Finance jobs requiring A.I. skills increased 60% last yearhere’s what they look like – CNBC

The finance industry is banking on AI, and it's creating new jobs to bridge the gap.

Traditional financial institutions and fintech start-ups alike are looking for more candidates who specialize in artificial intelligence, machine learning, and data science. According to Bloomberg reporting and data from LinkedIn, job listings requiring these skills in the financial industry increased nearly 60% in the past year.

According to Glassdoor data, "some of the most common job openings in AI and finance are for machine learning engineers and data engineers, among other highly specialized software engineering roles," Glassdoor senior economist Daniel Zhao tells CNBC Make It. "We're also seeing job openings for workers who can help navigate the AI landscape, including consultants and researchers. As companies establish the foundations for their AI functions, we're seeing employers hire more senior candidates to lead these new teams."

Not all new job functions are rooted in computer science or engineering, however. For example, chatbot copywriters (those who write conversational answers to technical questions customers ask on websites' "chat" functions), product strategists and technical sales representatives are also in demand, Zhao says. Those who have a business or communications background may be better suited to these roles.

And workers who already work in finance but are willing to learn more about AI have a leg up, Zhao says. "Their domain expertise in business and finance is a great way to differentiate themselves in a hot technical field."

Here's a look at some of Glassdoor's current postings of AI jobs in financial services, along with the job site's estimated salary range for each.

Senior Experience Designer, Bank of America

Data Scientist, Morgan Stanley

Senior Product Manager of Commercial Credit, Capital One

AI Backend Engineer, J.P. Morgan

Professionals with a background in engineering will have a growing field of opportunities within the finance space. For those without a STEM education, however, the ability to adapt and learn such skills will be crucial across a wide set of job functions. "With numerous online courses and boot camps available, it's never been easier to learn AI and machine learning skills that can enhance your career," Zhao says.

LinkedIn provides online courses to learn skills like cloud computing, artificial intelligence and analytical reasoning. Hundreds of universities around the world offer online courses for free or partially free with many falling in the categories of computer science, mathematics, programming and data science. Furthermore, training academies and boot camps have cropped up in order to bridge the gap of working professionals who want to pick up technical skills that can translate to a new role or enhance their current work.

The question of whether workers will have to seek out these opportunities, or if they'll be encouraged and provided by employers, hangs in the balance.

"It's important for companies to continue to invest in their people so that they are up-skilling and re-skilling their people to keep up with the roles that are in demand," said Feon Ang, vice president for talent and learning solutions in Asia Pacific at LinkedIn, to CNBC's "Capital Connection." "At the same time, people need to continue to invest in themselves and have a growth mindset."

A recent report from IBM suggests employers recognize the increasing need to retrain workers, an estimated 120 million worldwide, within the next three years as a result of AI and automation. However, executives in the report point to soft skills, such as flexibility, time management, and the ability to work on teams, as more important than technical STEM knowledge or basic computer and software/application skills.

Hiring adaptable professionals and investing in training programs in data science, engineering and AI can help businesses drive technological innovations from within, the IBM report suggests.

Even successful cities aren’t ready for AI’s disruptions – Quartz

Which major city is well prepared for the challenges that will be brought about by artificial intelligence? According to one recent report, the answer is simple: none.

AI, which refers to programming that can mimic human behaviors such as speaking, learning and carrying out tasks, is flourishing fast across the world, and being used in applications ranging from facial recognition to autonomous driving. However, along with the many possibilities of AI, risks from its capacity to replace human workers, or from unethical uses of the technology, have also become more obvious.

The Global Cities AI Disruption Index, published by research outfit Oliver Wyman Forum today (Sept. 26), aims to look at how 105 major cities are preparing for the AI era. The report was based on interviews with stakeholders such as government officials and academics, a survey of 9,000 residents in 21 of those cities, and an analysis of public social and economic data on the cities examined. Overall, the report measures readiness using four broad parameters: a city's understanding of AI-related risks and its corresponding plans, its ability to carry out those plans, the asset base it can rely on, and the direction the city is taking.
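A composite index like the one described is typically just a (weighted) average of normalized pillar scores. The sketch below uses invented pillar names and made-up numbers, not the report's actual data, to show the mechanics, and why a city can top one pillar without leading overall:

```python
# Hypothetical scores, for illustration only -- the pillar names and
# numbers here are invented, not taken from the OW Forum report.
PILLARS = ("vision", "activation", "asset_base", "trajectory")

CITIES = {
    "Singapore": (95, 70, 80, 75),
    "Stockholm": (72, 93, 78, 70),
    "London":    (75, 80, 94, 72),
    "Shenzhen":  (60, 65, 70, 96),
}

def composite(scores, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted mean of a city's pillar scores (equal weights by default)."""
    return sum(s * w for s, w in zip(scores, weights))

# Each made-up city leads exactly one pillar, yet the overall ranking can
# differ from any single-pillar ranking.
ranked = sorted(CITIES, key=lambda c: composite(CITIES[c]), reverse=True)
print(ranked)
```

With these invented numbers, the city that tops no more than one pillar can still finish first or last overall, which is the pattern the report describes.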

Singapore, Stockholm, London, and Shenzhen rank first in each of those four categories, respectively. But not a single city ranks in the top 20 in all four categories, and none appears in the top 10 across more than two categories. "This means no city is close to being ready for the challenges ahead," said the report. "Sure, some are better prepared than others, but all cities will need to continue to make substantial improvements to fully prepare for the impacts of next-generation technology."

Gauging AI readiness is far from an exact science at the moment, though various efforts in recent years have been trying to do it. For now, it looks like many of the qualities that might make a government or a city prepared to deal with AI are likely to be similar to those that put them high up on rankings of good places to do business. A three-year-old index from Oxford Insights that gauges government ability to capitalize on AI looks at parameters like the skill level of the workforce, for example, as well as a more technical measure like the quality of the data the government has to work with.

But given controversies surrounding the use of AI, from how it can propagate existing biases to privacy and misinformation risks, governments' ability to deal with those issues, and their transparency in doing so, need to be a vital part of AI readiness.

China, for example, has been accused of using facial recognition in profiling ethnic minorities including Muslims in its Xinjiang region. Meanwhile FaceApp, an AI-powered face-ageing app developed by a Russian company, has also stirred privacy concerns among users.

The OW Forum research urged governments worldwide to get real about the risks posed by AI, saying they tend to downplay or ignore such disruptions while focusing only on opportunities such as smart city projects.

Unlocking the power of AI through data and trust – WTOP

Federal agencies are moving fast toward adopting artificial intelligence. Michael Kratsios, chief technology officer of the United States, just revealed that federal spending on AI research and development has doubled in the last three years, with the Defense Department accounting for about $1 billion alone.

This content is sponsored by Arrow NetApp

But AI is not a plug-and-play technology; barriers still exist to federal adoption. Two of the main ones, according to Rob Stein, vice president of U.S. Public Sector at NetApp, are dealing with the massive amount of data involved and building trust in the AI systems themselves.

"So in my view, unlocking the power of AI is really completely dependent on the data. So what does it mean, building the data pipeline, continuously acquiring, organizing and using that data in an optimal way, as more and more data gets accumulated? This is a big challenge, because not only is the volume of data massive, it's everywhere," Stein said during a Sept. 4 Meritalk webinar. "Just for example, there's so much data just coming from sensors that more and more compute is going to have to be deployed at the edge just to analyze that data, reduce it, and get it to where the end user can actually make use of it."

But agencies are often stymied on how to accomplish that. How do they transport the data in an efficient manner? How can they get the infrastructure and compute out to where they need it? Data silos and complexity of the technology are difficult hurdles in the road to a data pipeline.

That's why Stein recommends a data fabric to overcome these challenges.

"What does the data fabric do? It creates this integrated data pipeline, from the edge to the core to the cloud, so the data can be ingested, collected, stored and protected, no matter where it resides," Stein said. "And in my view, only then can really the data be optimally applied to train AI, drive machine learning and power the deep learning algorithms that are needed to bring AI to life."

Stein also said connecting data scientists to IT organizations will be a critical step to managing this data with an eye toward AI adoption.

"I've gone to many universities and talked to their IT shops, and they said more and more researchers are reaching out to IT to provide the tools and the infrastructure and the capability around gathering and managing the data that researchers and data scientists need to really perform the critical functions of AI that help the organization either further their mission, or whatever their goals and objectives are," he said.

And agencies are coming to realize that as well. Mason McDaniel, chief technology officer at the Bureau of Alcohol, Tobacco, Firearms and Explosives, said during the webinar that it's important that agencies get a handle on their data as quickly as possible.

"I can't emphasize enough how important it is to build a data team," he said. "We've had the need for a long time to generate test data for applications. [We] haven't necessarily always done a good job of that. So that's been sort of a pain point for a lot of organizations. But that's going to be magnified so much as we start moving more towards AI. Because the results from AI are only as good as the training data you put into it, and how you actually trained the models. So you've got to have people that focus on how you actually collect that and then use it."

And that need gets more urgent every day, as AI technologies become more commonplace. In fact, AI is already a component of many popular technologies incorporated into day-to-day life. For example, AI underpins most call center technologies, which are used by most federal agencies at this point, especially those with primarily public-facing missions like the IRS. Some federal agencies also use chatbots and virtual assistants as part of their customer service options.

"Machine learning uses processed data to learn and take actions. So how do we ensure that we can trust the data and ensure that, for example, a critical target on the battlefield is truly a tank and not a civilian vehicle of some sort?" Stein asked. "So these initial applications will help us learn and gain this trust. And in fact, organizations such as the Defense Advanced Research Projects Agency have already begun developing technologies that would allow AI to better explain its reasoning, to not just give us an answer, but explain it. DARPA calls it explainable AI. So that's one step towards what I think is very important. And that's building trust in AI."

And building trust early on is important, Stein said, because in the next five or six years, the technological complexity of AI won't even be visible anymore. It will be embedded in applications, functioning in the background. "That's already beginning to happen," he said. Think of a traffic app, which automatically adjusts your route in response to real-time data. You don't have to ask it to do that, or manually activate an AI. It just happens. And that's going to get more common in the near future.

"Agencies are very optimistic about using AI as a tool to deliver on their mission. And those missions are varied, whatever they might be, whether they're in the front lines, in a highly secure location with sensitive data, or they're out there educating the next generation of leaders," Stein said. "There's so much optimism around what AI can bring. I think once organizations get ideas on quick win projects, and start to use them and do them, we'll start to see a lot more momentum pick up."

But those quick wins won't be possible until agencies get a handle on their data. That's why establishing a data fabric is the first step agencies can take toward unlocking the potential of AI.

Amazon is gradually giving Alexa more AI – Fast Company

Amazon announced a large batch of new products on Wednesday, making it clear once again that it wants to spread its Alexa digital assistant into as many consumer tech categories as possible: not just smart speakers, but everything from earbuds to eyeglasses to rings. But there was another storyline woven into the announcements in Seattle. More artificial intelligence, specifically natural language AI, is finding its way into Alexa, and in more ways.

For starters, Amazon says it's been using neural networks to make Alexa's voice sound more human when it translates text (like your text messages) into speech. Rohit Prasad, who heads up Alexa machine learning and artificial intelligence, told me that this technology has allowed Amazon to take a totally different approach to generating speech.

In the past, Alexa's algorithms broke down language into word parts or vocal sounds, then tried to string them together as smoothly as possible. But it always sounded somewhat choppy and robotic. Now, Amazon is using neural networks that can generate whole sentences of text in real time, says Prasad. This creates a vocal sound that's more fluid and more human-sounding. (Apple's Siri and Google's Assistant have also achieved more natural voices recently through similar means.)

It's this same natural language modeling that will very soon give Alexa completely different voices. Amazon says it will start with celebrities, with Samuel L. Jackson being the first. Amazon will sell Jackson-as-Alexa as an add-on service starting later this year.

Amazon's Jackson voice is at least partially driven by a natural language model. The model learns from Jackson's voice (he recorded a bunch of samples in a studio) to generate a voice that mimics his distinctive tone while providing the answers and information the assistant would normally provide. But Amazon also curated a set of complete Jackson utterances for the assistant to use when the time is right.

Jackson will likely be just the first of many celebrity voices that Amazon will offer as alternatives to the standard Alexa voice. (Google, meanwhile, let the Google Assistant talk like John Legend earlier this year, also due to advances in using AI to synthesize voices.)

Amazon also added some machine learning tricks to its Ring doorbell cams. In a new service Amazon is calling Doorbell Concierge, the devices will soon be able to detect various kinds of people who show up at the front door unannounced. The demo I saw featured three kinds of visitors: a guy delivering a package, a Girl Scout selling cookies, and an unidentified man. The Ring engaged them all in a short dialogue to find out what they wanted, and a neural network in the background used what they said to determine what kind of a caller they were. It did this based only on what they said, not on camera imagery. The categorization then informed the Ring device what to say to each one. For instance, it told the delivery guy where to put the package, after asking if he needed a signature. And it asked the unidentified man if he would like to leave his contact information.
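The shape of that task, mapping a visitor's utterance to a caller type using only what was said, can be sketched in a few lines. Amazon's actual system is a neural network whose categories and training data aren't public; the toy keyword scorer below, with category names and word lists invented here, just makes the input-to-label flow concrete:

```python
import re

# Toy illustration only: real systems learn these associations from data
# rather than using hand-written keyword lists.
VISITOR_KEYWORDS = {
    "delivery": {"package", "deliver", "delivery", "signature", "parcel"},
    "sales": {"selling", "sell", "buy", "cookies"},
}

def classify_visitor(utterance: str) -> str:
    # Normalize to lowercase words, dropping punctuation.
    words = set(re.sub(r"[^a-z\s]", " ", utterance.lower()).split())
    # Score each category by how many of its keywords the utterance hits.
    scores = {label: len(words & kw) for label, kw in VISITOR_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Nothing matched: fall back to "unidentified", like the demo's third visitor.
    return best if scores[best] > 0 else "unidentified"

print(classify_visitor("Hi, I have a package that needs a signature"))   # delivery
print(classify_visitor("Would you like to buy some Girl Scout cookies?"))  # sales
print(classify_visitor("Uh, is anyone home?"))                            # unidentified
```

The fallback label is what lets the device ask an unrecognized caller for contact information instead of guessing.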

The Ring video doorbell. [Photo: courtesy of Ring]

The new Concierge feature isn't quite ready for market yet. When it's released, it will likely be able to recognize a small set of types of callers. But that set will probably grow.

Last year, Amazon expanded Alexa's hearing to detect more than just human commands. As part of its Guard home security mode, the sensitive microphone array used in Echo speakers began listening for the sounds of glass breaking and smoke alarms going off when nobody was in a home. Now Amazon has added the ability to listen for human-related sounds in the home while Guard is set to its away mode. These include the sounds of footsteps, coughing, and doors closing when there's supposed to be no one home. Alexa can send an alert to a user if it detects one of these sounds.

In all these cases, a deep learning model is taking the audio input from the microphones and flagging potentially dangerous sounds. Amazon could train the assistant to listen for many other types of sounds. For example, Alexa devices could begin listening for the sounds of falls or labored breathing in places where elderly people live. Whether Amazon moves in this direction is anybody's guess, but the fact that the company is steadily adding things that Alexa can listen for is telling.
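The pipeline described above, audio in, features out, flags raised, can be sketched minimally. Amazon's system feeds learned features to a trained deep network; in this sketch a simple per-frame energy threshold stands in for the model, so only the input-feature-flag flow is real:

```python
# Minimal sketch, not Amazon's implementation: split a signal into short
# frames, compute one feature per frame, and flag frames worth a closer
# look. A real system would replace the threshold with a trained classifier.

def frame_energies(samples, frame_size=4):
    """Mean squared amplitude of each non-overlapping frame."""
    return [
        sum(s * s for s in samples[i:i + frame_size]) / frame_size
        for i in range(0, len(samples) - frame_size + 1, frame_size)
    ]

def flag_events(samples, threshold=0.25, frame_size=4):
    """Return indices of frames loud enough to be flagged."""
    return [
        idx for idx, e in enumerate(frame_energies(samples, frame_size))
        if e > threshold
    ]

# Quiet room with one loud burst (say, glass breaking) in the middle.
signal = [0.01] * 8 + [0.9, -0.8, 0.85, -0.9] + [0.01] * 8
print(flag_events(signal))  # [2]: only the burst frame crosses the threshold
```

Swapping the threshold for a model trained on labeled recordings of glass, alarms, or footsteps is what turns this skeleton into the Guard-style detector the article describes.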

A relatively new area in natural language research is using neural networks to detect emotion through words and intonations. Amazon has been focusing on the sound of frustration in the voices of people talking to Alexa. When it detects frustration, Alexa may conclude that it's given an answer the user didn't like and then search for another way to answer. Prasad said Amazon has its own set of labeled recordings of people sounding frustrated, which it uses to train the neural networks.

But it's a hard problem. The assistant has to know how to react after detecting a frustrated person. And if it takes another stab at providing an answer, the assistant better be fairly certain that the second answer is useful. And there are times when the assistant has to say, "Sorry, I don't have the answer."

"We are starting to experiment with these different ways of responding, and once this is launched, you will see many different flavors," Prasad said.

This kind of emotional awareness will likely start showing up in many kinds of assistants. Any assistant should be capable of knowing when it's done something wrong and be able to open up a feedback loop in order to get better.

The frustration detection feature will likely show up in Alexa next year.

The AI-Art Boom – Barron’s

Michael Tyka wanted to get something out of the way. "Is it art?" Tyka, an artist and software engineer at Google, asked the audience at Christie's Art + Tech Summit in New York in June. The event's theme was "The AI Revolution," and Tyka was referring to artwork created using artificial intelligence.

The question, of course, was rhetorical. He flashed an image of a urinal on two large screens at either side of the stage: Marcel Duchamp's famous and controversial sculpture "Fountain." The audience laughed. "Obviously, it can be," he said.

There was otherwise little debate about the artistic merit of AI art at the summit, which attracted players from across the tech, art, and collecting worlds. The bigger questions instead focused on just how much this new form was poised to disrupt the industry.

The location for the discussion was fitting: In October 2018, Christie's New York sold an algorithm-generated print that resembled 19th century European portraits, called "Edmond de Belamy, from La Famille de Belamy," for the staggering sum of $432,500, nearly 45 times its high estimate. The print, by the French collective Obvious, had never been exhibited or sold before coming to auction, and its sale stunned the art world.

But despite the buzz, many in the art community are wrestling with several unanswered questions. For example: When artwork is created by an algorithm, who is the artist, the programmer or the computer? Because many works of AI art are digital, how do you value a creation that's designed to live natively on the internet and be widely shared? And where, exactly, is the market for this new kind of work headed? There are few clear answers.

The de Belamy sale may have been the splashiest AI art-related event of the past year, but it wasn't the only one. In March, Sotheby's sold an AI video installation by the German artist Mario Klingemann, "Memories of Passersby I," for $51,012. Last spring, HG Contemporary gallery in New York's Chelsea neighborhood hosted what it described as the first solo gallery exhibit devoted to an AI artist, with the show "Faceless Portraits Transcending Time," a collaboration between an AI and its creator, Ahmed Elgammal, a computer science professor at Rutgers University.

Prominent art institutions and collections around the world are paying attention. "If we look at the larger landscape of what's happening in the art world, and not just in sales, there's a ton of momentum as well as institutional support for what's happening," says Marisa Kayyem, program director of continuing education at Christie's Education. "Collectors are growing more accustomed to it."

Many people working in the field recoil from the term "AI art," finding it both misleading and too specific. Like other programmer-cum-artists, Klingemann, whose work was sold by Sotheby's, prefers the term "generative art," which includes all works created using algorithms. Generative art's origins date back to the late 1950s.

"AI art is really a term the press came up with in the last three to five years," says Jason Bailey, founder of the digital-art blog Artnome, who believes the term conjures up the false impression of robots creating art. "Most of the artists I talk to don't like to be called AI artists. But it's become shorthand, whether people like it or not, for the work that's being done."

Although the de Belamy portrait is the best-known work of AI art, it's a bit of a red herring for those looking to understand the medium. The portrait was created using generative adversarial networks, or GANs. GANs use a sample set of images of art to deduce patterns, and then use that knowledge to replicate what they've seen, cross-referenced against the originals, creating a stream of new images.
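The adversarial mechanics behind that description can be seen in miniature. The sketch below is emphatically not the image model behind de Belamy; it is a deliberately tiny one-dimensional GAN (every "network" shrunk to one or two parameters, all values invented for illustration) so that the whole recipe fits in a page: the discriminator learns to separate real samples from generated ones, and the generator learns to fool it:

```python
import math
import random

# Toy GAN, for illustration only. "Real" data is a cloud of numbers
# around 5.0; the generator is a single shift parameter theta applied to
# noise; the discriminator is a tiny logistic model D(x) = sigmoid(a*x + b).

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def train_gan(steps=2000, batch=32, lr=0.05):
    a = b = 0.0   # discriminator weights
    theta = 0.0   # generator parameter: fake sample = theta + noise

    for _ in range(steps):
        real = [random.gauss(5.0, 0.5) for _ in range(batch)]
        fake = [theta + random.gauss(0.0, 0.5) for _ in range(batch)]

        # Discriminator step: gradient of -log D(real) - log(1 - D(fake)),
        # pushing D to score real data near 1 and fakes near 0.
        ga = gb = 0.0
        for x in real:
            d = sigmoid(a * x + b)
            ga -= (1.0 - d) * x
            gb -= (1.0 - d)
        for x in fake:
            d = sigmoid(a * x + b)
            ga += d * x
            gb += d
        a -= lr * ga / (2 * batch)
        b -= lr * gb / (2 * batch)

        # Generator step: gradient of -log D(fake) w.r.t. theta, dragging
        # the fakes toward wherever D currently thinks "real" lives.
        gt = 0.0
        for x in fake:
            d = sigmoid(a * x + b)
            gt -= (1.0 - d) * a
        theta -= lr * gt / batch

    return theta

random.seed(0)
print(round(train_gan(), 2))  # theta ends up near 5.0, the real data's mean
```

An image GAN like the one behind de Belamy replaces these scalar parameters with deep convolutional networks, but the alternating loop is the same, which is why the trained generator can emit an endless stream of new samples.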

The de Belamy sale came with a dose of controversy: Obvious didn't write the algorithm for it; the collective borrowed it from a young American programmer/artist named Robbie Barrat, who received nothing from the sale of the work. Obvious simply chose the image, printed it, put it in a frame, and signed it with Barrat's algorithm.

In other words, de Belamy was sold as a single piece of art, even though the number of images the AI could produce was infinite. But many, if not most, works of AI art aren't produced as a single, physical object. They are videos, animations, and everything digital and algorithmic in between: works designed to live online and to be shared.

This presents a tricky problem: In an industry that has always created value through scarcity, how do you value a work of art that is inherently nonscarce?

"There's a big change coming, and it's one of these tectonic shifts," says Kelani Nichole, founder of Transfer gallery in Los Angeles, which focuses on artists who make computer-based artworks. "I think that it's about value, and I think we're going to be moving away from a scarcity market that's purely a financial instrument."

One answer to the ownership quandary may be blockchain, which can be used to create a token that denotes a digital work's authenticity. But Nichole says that might be beside the point to a new generation of younger investors who think differently about art and collecting. People who came of age, and became wealthy, in the digital age have different ideas of material scarcity, transparency, and ownership, she says. The experience of a work of art may be more important than a physical object. "The way they live is as digital nomads. They don't possess objects in the same way. It's a whole new generation of values, which is not about material scarcity," Nichole says.

Claire Marmion, founder and CEO of Haven Art Group, a fine-art-collection management company based in New York, says collectors are still trying to figure out where the market for AI art is heading, and that it may not be the disruptive force some think it will be. Or, at least, the industry will adapt to it.

"The art world has a long tradition of artists bringing in new things and changing the status quo," Marmion says. "In terms of valuation, there's a small data set. I don't know about the accuracy of valuations put on it at the moment. It's very speculative. Collectors are interested in it, but I'm not sure a lot of collectors have embraced it."

Klingemann believes the current buzz will eventually die down, but that AI art isn't going anywhere. Instead, he thinks it will one day be viewed as simply another tool of the artist.

"Just like photography never went away, or making movies doesn't, I'm pretty sure it will establish itself as a new media format," he says. "Right now, of course, it's all this mystery about AI, but I expect this to become really just a normal thing, where people will focus on what artists are actually saying with their art."

See original here:

The AI-Art Boom - Barron's

Avoid These Pitfalls When Investing In AI – Forbes

Posted: at 7:49 am

Artificial intelligence (AI) and machine learning (ML) are the latest buzzwords in technology, and companies of all sizes and industries are eager to jump on the opportunities they offer.

The powerhouse duo of AI and ML is now well within reach, thanks to the big data explosion and highly scalable cloud computing capacity. Together, AI and ML hold the promise of delivering insights that result in cost savings, increased efficiencies and productivity gains on a scale not possible with traditional processes.

At my company, Applause, we are using data science and machine learning to enhance various aspects of our software platform. Applause is a crowdsourced community of software testers and digital experts, and we have had strong success leveraging these new technologies to transform our tester selection process.

Our journey to integrate machine learning into our product portfolio has taught us a thing or two. We've learned important lessons about avoiding bias, incorporating feedback loops, leveraging data and using progressive rollouts and popular tools.

Using machine learning models, we can now pinpoint the best candidates to deploy on any given testing project, for anything from an innovative user interface design to ensuring voice capabilities stand up in an omnichannel experience. By automating the process of matching optimal testers to specific projects, we deliver better value to our customers while removing some of the natural human bias that will inevitably sneak into the tester selection process.

Based on what we learned, here is a hands-on practitioner's take on what to consider when starting to deploy AI and machine learning, along with some of the more common pitfalls to avoid, in the spirit of keeping nascent projects on track.

Beware of human bias. Yes, AI and machine learning can help mitigate or eliminate human bias, but they're not foolproof, especially if the software team doesn't keep the bias issue front and center when designing and training models. There have been more than a few high-profile AI fails that have caused companies to take a step back and think about how bias is introduced into the data. These types of bias could cause consumers to distrust AI systems permanently, potentially impacting the technology's future uses.

To avoid these scenarios, developers should carefully consider the kinds of data used to train algorithms and where that data comes from. It is important to create specific rules about data selection to provide guidelines for developers.

In Applause's case, we refrained from having engineers pick the criteria and instead consulted with subject matter experts and solution leaders in areas like manual testing, test automation, usability and accessibility (among others) to ensure we were prioritizing the right skills that would deliver the greatest value to customers.

Create a feedback loop. Communication is key for AI and machine learning development, yet too many teams get stuck in silos and fail to integrate a consistent feedback loop into their processes. With a proper feedback loop, the team stays up to speed on all issues, including what's going right and what can be improved. Initially, you might start with a communications sequence limited to subject matter experts, but as machine learning models evolve, it is important to broaden the feedback loop to include other stakeholders in the process. Ultimately, you want the model to be able to collect feedback on its own in an automated fashion.

Avoid analysis paralysis. It happens to the best of us: getting stuck on a machine learning model for too long in a quest for perfection that causes delays in the delivery cycle. AI and machine learning are all about iteration and continuous improvement, so the work is never really done. The more data fed to the hungry beast, the better the model will be.

At some point, however, you need to draw the line and get the product out the door. We've scaled that hurdle at Applause by doing progressive rollouts. We get the model to an agreed-upon state and get it out the door, but we continue to iterate and improve the model as the process evolves so things are never at a standstill.

Fully leverage data. Don't let all the valuable data you're collecting go to waste. Our data science team at Applause is solely focused on taking the data we collect and turning it into action. We've cultivated a strong partnership between that team and our business intelligence group, and the two work together to manage the mass amounts of data collected and turn it into something that will improve the machine models while delivering essential insights to the business.

Avoid an 'all or nothing' mindset. You don't just flip a switch on AI and machine learning and immediately take humans out of the picture. It takes time to build capable machine models that can become a 100% replacement for manual processes. Again, I go back to the idea of progressive rollouts: Create a minimally viable machine learning model that can be released and trained, and improve upon that model to make it more efficient over time.

Don't reinvent the wheel. Don't feel like you have to start from scratch to dive into machine learning and AI. There are plenty of off-the-shelf tools (scikit-learn and TensorFlow, to name just two) to fast-track deployment, and AWS, Azure and Google Cloud Platform (GCP) all have their own service-based offerings that can facilitate the process. These tools are successful because they work. Don't hesitate to take the help. Relying on a roll-your-own approach can dramatically slow down your timeline and increase your costs.
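As a taste of how little code an off-the-shelf library demands, here is a minimal scikit-learn sketch that trains and scores a classifier on synthetic data. The dataset and model choice are arbitrary illustrations, not Applause's actual pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real project data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit and evaluate in a few lines -- no from-scratch math required.
clf = LogisticRegression().fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Swapping in a different estimator, or pointing the same few lines at a managed cloud ML service, is largely a configuration change rather than a rewrite, which is exactly the time saving the off-the-shelf route buys you.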

Just like any development effort, AI and machine learning will not be a fit for every application. In many cases, humans are still the more efficient and cost-effective option. While there is no one-size-fits-all development model, these basic tips should help companies steer clear of the pitfalls and navigate a successful path to AI-driven transformation.

Continue reading here:

Avoid These Pitfalls When Investing In AI - Forbes

Alibaba joins the push to provide AI-ready infrastructures – Diginomica

Posted: at 7:49 am

Over the past couple of years, Chinas online retail/cloud hosting/entertainment streaming powerhouse Alibaba has really got its teeth into AI, not least because its business now demands that it does its level best to exploit such technology as much as it can. So it comes as not too much of a surprise to note that it has stepped up alongside its Chinese compatriot Huawei to introduce a new processor designed specifically for the job of running AI tasks.

Known as the Hanguang 800, it is a neural processing unit that the company has designed to boost performance and capability in search functions, recommendation engines and customer service for e-commerce. Not surprisingly, these are three areas that are high priorities for Alibaba's online retailing operations.

The processor was designed by the T-Head research unit, which operates under the aegis of the Alibaba DAMO Academy and is said to have a Chinese name that translates as "Honey Badger."

As well as making use of the processor itself, the company is also going to make it available to its customer base of commercial users, many of which will already be users of Alibaba Cloud services. Unlike Huawei, however, it will not be making the chips themselves available for those commercial customers to engineer their own AI solutions and services from the ground up.

Instead, the company will be making access to the processors, or more specifically the servers that will be running them, available as cloud services. This is an interesting approach that is well suited to both the capabilities of the AI processor and delivery by cloud. Customers gain access to just those capabilities, the compute resources, on a classic time-and-materials basis, and avoid the potentially huge investment associated with moving to a new systems architecture and developing new applications from the ground up. To be fair to Huawei, it too will be offering its customers access to servers running its new AI processors and resources as cloud-delivered SaaS.

This delivery model should also prove to be of interest to channel partners, and may indeed help to grow the market for them to add AI and machine learning services to their portfolios. It would give them direct access to supported AI and machine learning resources, without having to make significant investments building their own specialist facilities and resources.

Highly optimised algorithms can be run, primarily tailored for applications such as retail and logistics in the Alibaba ecosystem. For example, around one billion product images are uploaded every day by merchants to Taobao, Alibaba's e-commerce site. It used to take one hour to categorise them and tailor search and personalised recommendations for hundreds of millions of consumers.

With Hanguang 800, it now takes five minutes to complete the same task. The chip has recorded a single-chip performance of 78,563 IPS at peak, with a computation efficiency of 500 IPS/W in the ResNet-50 inference test. According to the company, these figures demonstrate that it outperforms industry averages.

The Hanguang 800 was announced at Alibaba Cloud's annual Apsara Computing Conference in Hangzhou. The company also used the event to launch a raft of new products, including the third generation of its X-Dragon architecture, claimed by the company as a driving factor in the growth of its operations across e-commerce, logistics, finance and what it calls New Retail.

X-Dragon provides seamless integration of different computing platforms, including the company's Elastic Container Service (ECS), bare-metal servers and virtual machines, within a single overall architecture. Its main goal is to improve the performance of cloud-native applications: it has achieved a 30% increase in queries per second handled and a 60% decrease in latency. Combined with power savings in running cloud-native apps, the company claims a 50% reduction in unit computing cost.

Performance has become a focus for the company, as it has recently announced a self-developed data traffic control mechanism named High Precision Congestion Control (HPCC). The goal is to provide data transmission with ultra-low latency, high bandwidth and high stability.

Research has shown HPCC reacts faster to available bandwidth and congestion compared with the alternatives, while maintaining close-to-zero queues. In simulations under 50% traffic load, HPCC shortens flow completion times by up to 95%, causing little congestion even under large-scale incasts. By addressing challenges such as delayed INT information during congestion, or overreaction to INT information, HPCC can quickly utilize free bandwidth and maintain near-zero in-network queues for ultra-low latency.
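The core idea, reacting to a measured utilization signal rather than waiting for packet loss, can be illustrated with a toy window-update rule. This is a deliberately simplified sketch of telemetry-driven congestion control in general, not Alibaba's published HPCC algorithm, and the constants are invented for illustration:

```python
def update_window(window, utilization, target=0.95, gain=0.5, additive=1.0):
    """Toy congestion-window update driven by link utilization.

    When measured utilization exceeds the target, shrink the window in
    proportion to the overshoot; otherwise probe upward gently. Real
    schemes like HPCC derive the signal from per-hop INT telemetry.
    """
    if utilization > target:
        # Multiplicative back-off proportional to how far over target we are.
        return max(1.0, window * (1.0 - gain * (utilization - target)))
    # Headroom available: additive increase to claim free bandwidth quickly.
    return window + additive

w = 32.0
w = update_window(w, utilization=1.2)   # congested link: window shrinks
w = update_window(w, utilization=0.5)   # idle link: window grows again
```

The appeal of a utilization-driven rule is that queues never need to build up before the sender reacts, which is how near-zero in-network queueing becomes possible.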

My take

Here is another pitch, again from China, at moving the goalposts of what technologies, both hardware and software, will be needed to make AI systems effective. It is certainly true that, if we think we have powerful and data-rich systems environments now, then when AI really takes off the only observation that can be made is, "You ain't seen nothing yet." Follow the logic of that and it is not unreasonable to speculate that the current technologies that hold sway in big and fast systems are unlikely to be key components of the future. Whether any of these Alibaba offerings will play a part is far too early to tell, but they will be in the mix.

Go here to see the original:

Alibaba joins the push to provide AI-ready infrastructures - Diginomica

How Battlefield Medicine Will Change With Big Data, Augmented Reality, and AI – Defense One

Posted: at 7:49 am

Machine learning, sensors, and next-generation vision equipment will tell medics where to spend their resources before they get off the evac chopper.

SAN ANTONIO - In the warzones of the future, medics touching down amid heavy battlefield casualties will know who to treat first, how to approach every injury, and even who is most likely to live or die, all before looking at a single wounded soldier.

That's the vision of Col. Dr. Jerome Buller, who leads the U.S. Army Institute of Surgical Research.

Buller says biometric data gleaned from soldier-borne sensors, combined with in-depth medical and training data and augmented reality lenses, will help medics in combat evaluate the battlefield and everyone in it from a safe distance. They will make their most important decisions before even seeing their patients.

"Imagine that [the hypothetical future] medic is able to scan the battlefield and instead of seeing rubble, he's seeing red or green dots, or amber dots, and he knows where to apply resources or not," Buller said during the Defense One and NextGov Genius Machines event here on Wednesday.

"Let's say you and your fellow soldier have the same injury. Looks the same, pools of blood are the same. You may compensate [as in, survive injury] far better than she can, or vice versa. And if I only have two packets of blood, who do I give it to? So this technology will help us to far better use these really scarce resources," he said.

That's a big change from the way battlefield medicine is performed now, relying heavily on medics' intuition. "You have to literally determine which ones are going to live and die, so having some type of automated capability from a cognitive perspective to say, 'Yep, you know they are red, I'm going to go to the next one,' from a psychological perspective I think would have a far more positive impact than just the medic making that call."
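To make the red/amber/green idea concrete, a crude rule-based scorer might look like the sketch below. The thresholds are invented purely for illustration and are in no way clinical guidance; the system Buller describes would learn such mappings from real trauma data rather than hand-written rules:

```python
def triage_color(heart_rate, systolic_bp, spo2):
    """Map a casualty's vital signs to a triage color.

    The thresholds here are hypothetical placeholders for illustration,
    not medical guidance; a fielded system would learn them from data.
    """
    if systolic_bp < 80 or spo2 < 85:
        return "red"     # critically unstable: allocate resources first
    if heart_rate > 120 or spo2 < 92:
        return "amber"   # compensating but at risk: monitor closely
    return "green"       # stable for now

# A medic's augmented-reality display could paint each soldier-worn
# sensor reading as a colored dot using a function like this.
color = triage_color(heart_rate=130, systolic_bp=110, spo2=95)
```

The hard part is not the lookup, it is producing trustworthy inputs and thresholds, which is why the data and sensor pieces described next matter so much.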

There are three components Buller sees as essential to making that vision a reality. The first is using the vast amount of data that the military collects on soldier injuries, data that currently goes to the DOD trauma registry, and using it in a way that paints a predictive picture of future battlefield events via machine learning.

"We have a very rich DOD trauma registry," he said. "In some cases, where they [the patient] didn't make it, we have imagery and autopsy data. What's happening now is it's being looked at on an individual basis. Let's say you can mine this data. Now you have very specific combat injuries in very specific locations against specific enemies that we can potentially use machine learning applications on."

Still, he noted, the data would have to be cleaned and structured to be useful in training machines.

The second piece is getting better data from soldiers, both before they're on the battlefield and during, by shrinking the size of soldier-worn sensors and enabling those sensors to collect more data.

"Right now, we have technology; it's in the form of a pulse oximeter-type interface," he said, referring to a finger-worn sensor that checks oxygen levels in the blood. It lets a medic know how well a person is compensating and whether or not they are likely to go into shock.

For the past year, his institute has been working with the Mayo Clinic on an improved version of the same technology: a wristwatch-like comprehensive medical sensor dubbed a Compensatory Reserve Measure, or CRM.

The last piece is augmented reality, via new heads-up displays like the IVAS system that the Army is looking to push out to the field in the mid-2020s. The display's cameras and other lenses will let medics take notes on the injuries they see without writing them down, hardly an attractive prospect in the middle of a gunfight. The IVAS would record and transmit what the medic was seeing to the medical record to inform decisions then and later. "I think that's an area we can really exploit," he said.

He described the vision as a concept being explored for future development, not yet a program. But in future wars, especially against more technologically capable adversaries, it will be essential.

"The changing nature of war and what we are projecting our next conflict to be, which is far more lethal and far more complex than [current combat engagements are today], when we're facing a near or near-peer competitor, when we are challenged in every single domain, land, air and sea, cyber, space, and across the electromagnetic spectrum, that's going to require us to have our medics, not only our medics, but our warfighters more capable. There's only so much training we can do, so leveraging technology is absolutely critical."

More here:

How Battlefield Medicine Will Change With Big Data, Augmented Reality, and AI - Defense One
