AI bias detection (aka: the fate of our data-driven world) – ZDNet

Here's an astounding statistic: Between 2015 and 2019, global use of artificial intelligence grew by 270%. It's estimated that 85% of Americans are already using AI products daily, whether they know it or not.

It's easy to conflate artificial intelligence with superior intelligence, as though machine learning based on massive data sets leads to inherently better decision-making. The problem, of course, is that human choices undergird every aspect of AI, from the curation of data sets to the weighting of variables. Usually there's little or no transparency for the end user, meaning resulting biases are next to impossible to account for. Given that AI is now involved in everything from jurisprudence to lending, it's massively important for the future of our increasingly data-driven society that the issue of bias in AI be taken seriously.

This cuts both ways -- development in the technology class itself, which represents massive new possibilities for our species, will only suffer from diminished trust if bias persists without transparency and accountability. In one recent conversation, Booz Allen's Kathleen Featheringham, Director of AI Strategy & Training, told me that adoption of the technology is being slowed by what she identifies as historical fears:

Because AI is still evolving from its nascency, different end users may have wildly different understandings of its current abilities, best uses and even how it works. This contributes to a black box around AI decision-making. To gain transparency into how an AI model reaches end results, it is necessary to build measures that document the AI's decision-making process. In AI's early stage, transparency is crucial to establishing trust and adoption.

While AI's promise is exciting, its adoption is slowed by historical fear of new technologies. As a result, organizations become overwhelmed and don't know where to start. When pressured by senior leadership, and driven by guesswork rather than priorities, organizations rush to enterprise AI implementation that creates more problems.

One solution that's becoming more visible in the market is validation software. Samasource, a prominent supplier of solutions to a quarter of the Fortune 50, is launching AI Bias Detection, a solution that helps detect and combat systemic bias in artificial intelligence across a number of industries. The system, which keeps a human in the loop, offers advanced analytics and reporting capabilities that help AI teams spot and correct bias before a model is deployed across a variety of use cases, from identification technology to self-driving vehicles.

"Our AI Bias Detection solution proves the need for a symbiotic relationship between technology and a human-in-the-loop team when it comes to AI projects," says Wendy Gonzalez, President and Interim CEO of Samasource. "Companies have a responsibility to actively and continuously improve their products to avoid the dangers of bias and humans are at the center of the solution."

That responsibility is reinforced by alarmingly high error rates in current AI deployments. One MIT study found that "gender classification systems sold by IBM, Microsoft, and Face++" had "an error rate as much as 34.4 percentage points higher for darker-skinned females than lighter-skinned males." Samasource also references a Broward County, Florida, law enforcement program used to predict the likelihood of crime, which was found to "falsely flag black defendants as future criminals (...) at almost twice the rate as white defendants."

The company's AI Bias Detection looks specifically at labeled data by class and discriminates between ethically sourced, properly diverse data and sets that may lack diversity. It pairs that detection capability with a reporting architecture that provides details on dataset distribution and diversity, so AI teams can pinpoint problem areas in datasets, training, or algorithms in order to root out biases.
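Samasource has not published implementation details, but the kind of per-class distribution report described above can be sketched in a few lines. Everything below (the function name, the flag-if-under-half-of-an-even-split rule, the toy data) is a hypothetical illustration, not Samasource's actual method:

```python
from collections import Counter

def distribution_report(labels, groups, threshold=0.5):
    """Report how each demographic group is represented within each label class.

    labels: the class label per example; groups: a demographic attribute per
    example. Flags any (label, group) cell whose share of that label's examples
    falls below `threshold` times an even split across groups.
    """
    per_label = {}
    for label, group in zip(labels, groups):
        per_label.setdefault(label, Counter())[group] += 1

    flags = []
    for label, counts in per_label.items():
        total = sum(counts.values())
        fair_share = 1 / len(counts)          # even split across observed groups
        for group, n in counts.items():
            share = n / total
            if share < fair_share * threshold:
                flags.append((label, group, round(share, 3)))
    return per_label, flags

# Toy dataset: 10 "face" examples, 9 from group A and only 1 from group B.
labels = ["face"] * 10
groups = ["A"] * 9 + ["B"]
report, flags = distribution_report(labels, groups)
# Group B holds 10% of the "face" examples versus a 50% even split, so it is flagged.
```

A real pipeline would slice by many attributes at once and feed the flagged cells back into data collection, but the report-then-flag shape is the same.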

Pairing powerful detection tools with a broader understanding of how insidious AI bias can be will be an important step in the early days of AI/ML adoption. Part of the onus, certainly, will have to be on consumers of AI applications, particularly in spheres like governance and law enforcement, where the stakes couldn't possibly be higher.

Learn about Artificial Intelligence (AI) | Code.org

NEW AI and Machine Learning Module

Our new curriculum module focuses on AI ethics, examines issues of bias, and explores and explains fundamental concepts through a number of online and unplugged activities and full-group discussions.

AI and Machine Learning impact our entire world, changing how we live and how we work. That's why it's critical for all of us to understand this increasingly important technology, including not just how it's designed and applied, but also its societal and ethical implications.

Join us to explore AI in a new video series, train AI for Oceans in 25+ languages, discuss ethics, and more!

Learn about how AI works and why it matters with this series of short videos. Featuring Microsoft CEO Satya Nadella and a diverse cast of experts.

Students reflect on the ethical implications of AI, then work together to create an AI Code of Ethics resource for AI creators and legislators everywhere.

We thank Microsoft for supporting our vision and mission to ensure every child has the opportunity to learn computer science and the skills to succeed in the 21st century.

The AI and Machine Learning Module is roughly a five-week curriculum module that can be taught standalone or as an optional unit in CS Discoveries. It focuses on AI ethics, examines issues of bias, and explores and explains fundamental concepts.

Because machine learning depends on large sets of data, the new unit includes real-life datasets on healthcare, demographics, and more to engage students while exploring questions like: "What is a problem machine learning can help solve? How can AI help society? Who is benefiting from AI? Who is being harmed? Who is involved? Who is missing?"

Ethical considerations will be at the forefront of these discussions, with frequent discussion points and lessons around the impacts of these technologies. This will help students develop a holistic, thoughtful understanding of these technologies while they learn the technical underpinnings of how the technologies work.

With an introduction by Microsoft CEO Satya Nadella, this series of short videos will introduce you to how artificial intelligence works and why it matters. Learn about neural networks, or how AI learns, and delve into issues like algorithmic bias and the ethics of AI decision-making.

Go deeper with some of our favorite AI experts! This panel discussion touches on important issues like algorithmic bias and the future of work. Pair it with our AI & Ethics lesson plan for a great introduction to the ethics of artificial intelligence!

Resources to inspire students to think deeply about the role computer science can play in creating a more equitable and sustainable world.

This global AI for Good challenge introduces students to Microsoft's AI for Good initiatives, empowering them to solve a problem in the world with the power of AI.

Levels 2-4 use a pretrained model provided by the TensorFlow MobileNet project. A MobileNet model is a convolutional neural network that has been trained on ImageNet, a dataset of over 14 million images hand-annotated with words such as "balloon" or "strawberry". In order to customize this model with the labeled training data the student generates in this activity, we use a technique called Transfer Learning. Each image in the training dataset is fed to MobileNet, as pixels, to obtain a list of annotations that are most likely to apply to it. Then, for a new image, we feed it to MobileNet and compare its resulting list of annotations to those from the training dataset. We classify the new image with the same label (such as "fish" or "not fish") as the images from the training set with the most similar results.
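The comparison step described above amounts to a nearest-neighbour search over the annotation scores MobileNet assigns to each image. The tiny vectors and the cosine-similarity measure below are illustrative assumptions (Code.org has not published its exact metric), but the shape of the transfer-learning trick is the same:

```python
import math

def cosine(u, v):
    # Cosine similarity between two annotation-score vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(new_vec, training):
    """training: list of (annotation_vector, label) pairs produced by running
    the student's labeled images through the pretrained network. Returns the
    label of the training example whose annotation scores are most similar
    to the new image's."""
    best_vec, best_label = max(training, key=lambda item: cosine(new_vec, item[0]))
    return best_label

# Hypothetical annotation scores over the words ["fish", "coral", "boot"]:
training = [
    ([0.9, 0.3, 0.0], "fish"),
    ([0.8, 0.5, 0.1], "fish"),
    ([0.1, 0.2, 0.9], "not fish"),
]
print(classify([0.7, 0.4, 0.1], training))  # nearest neighbour is a "fish" example
```

The pretrained network never changes; only the comparison against the student's labeled examples is "learned," which is why this works with so little training data.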

Levels 6-8 use a Support-Vector Machine (SVM). We look at each component of the fish (such as eyes, mouth, body) and assemble all of the metadata for the components (such as number of teeth, body shape) into a vector of numbers for each fish. We use these vectors to train the SVM. Based on the training data, the SVM separates the "space" of all possible fish into two parts, which correspond to the classes we are trying to learn (such as "blue" or "not blue").
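The metadata-to-vector step might look like the sketch below. A full SVM implementation is too long for a short example, so a simple perceptron stands in for the linear separator (in practice one would reach for a library implementation such as scikit-learn's `SVC`); every field name and data point here is invented:

```python
def fish_to_vector(fish):
    # Flatten each component's metadata into one numeric feature vector.
    return [fish["num_teeth"], fish["body_roundness"], fish["eye_size"]]

def train_linear_separator(vectors, labels, epochs=20, lr=0.1):
    """Perceptron stand-in for the SVM: learns weights w and bias b so that
    sign(w.x + b) separates the two classes, encoded as +1 / -1."""
    w = [0.0] * len(vectors[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(vectors, labels):
            # Misclassified (or on the boundary): nudge the separator toward x.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

fishes = [
    {"num_teeth": 0, "body_roundness": 0.9, "eye_size": 0.2},  # "blue"
    {"num_teeth": 1, "body_roundness": 0.8, "eye_size": 0.3},  # "blue"
    {"num_teeth": 8, "body_roundness": 0.2, "eye_size": 0.7},  # "not blue"
    {"num_teeth": 9, "body_roundness": 0.1, "eye_size": 0.8},  # "not blue"
]
labels = [1, 1, -1, -1]  # +1 = "blue", -1 = "not blue"
w, b = train_linear_separator([fish_to_vector(f) for f in fishes], labels)

def predict(fish):
    x = fish_to_vector(fish)
    return "blue" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "not blue"
```

An SVM additionally maximizes the margin between the two classes, but the core idea is identical: a learned hyperplane splits the "space" of fish vectors into two halves.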

Ai Weiwei Is Creeping on New York with an Army of Drones, and Instagram Is Loving It – W Magazine

Last year, the artist Ai Weiwei celebrated the Chinese government returning his passport by putting on no fewer than four exhibitions in New York, including a thrift shop in Soho that was in fact stocked with the abandoned belongings of thousands of refugees forced to relocate to a camp on the border of Greece and Macedonia.

Ai's work continues to spotlight sociopolitical crises, of which there is no shortage these days. This week, he unveiled an expansive installation in collaboration with the architects Jacques Herzog and Pierre de Meuron inside the Park Avenue Armory on Manhattan's Upper East Side (this exhibition comes on the heels of the 13 Cate Blanchetts that were projected throughout the cavernous Drill Hall). It's not the first time Ai has collaborated with Herzog and de Meuron: They have worked together for the past 15 years on projects like the 2008 Beijing Olympic Stadium, which Ai later said he regretted taking part in because the games were "merely a stage for a political party to advertise its glory to the world."

"Hansel & Gretel," as the installation is eerily called, also happens to be interactive, whether visitors like it or not. From the moment they step into the Drill Hall, each of their movements is tracked and monitored via drones. Unlike the artist Jordan Wolfson's equally chilling yet slightly more menacing robot, which employed similar technology to lunge at viewers, each visitor is simply projected back onto the installation, as a white light follows them to make sure they won't get lost in the darkness (and so they can't avoid the cameras' glare). Still, many visitors have taken to throwing up peace signs (or, in the case of the artist himself, a middle finger) at the drones. And of course, they're posting about the chilling experience on Instagram. Witness their encounters, here.

How AI is being used to socially distance audiences at ‘Tenet’ and why Netflix is no threat, according to this movie theater chain boss – MarketWatch

Elizabeth Debicki, left, and John David Washington in a scene from director Christopher Nolan's "Tenet." Melinda Sue Gordon/Associated Press

Sophisticated algorithms are being used by one of Europe's biggest movie theater chains to help with social distancing.

Vue International, which has around 230 cinemas in the U.K., Germany, Taiwan, Italy, Poland and other European countries, has been using artificial intelligence to optimize screening times and is making adjustments to control the flow of audiences into auditoriums.

Tim Richards, who founded privately owned Vue cinemas around 20 years ago, said 10 years' worth of data had been fed into computers pre-COVID to decide on the timing and frequency of movie screenings.

This has now been adapted to control the flow of customers into the cinemas by staggering screening times. It is being linked with seating software that cocoons customers within their family bubbles, or on their own, a safe distance away from other customers.
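Vue has not described its seating algorithm, but the cocooning idea reduces to a simple constraint: place each booked party together, with a buffer of empty seats around it. A deliberately toy single-row version, with all numbers invented, might look like:

```python
def allocate_row(row_length, bookings, buffer=2):
    """Place each booking (a party size) left-to-right along one row, leaving
    `buffer` empty seats between parties. Returns a list of (start_seat, size)
    allocations, or None if the row cannot hold all the parties."""
    allocations, cursor = [], 0
    for size in bookings:
        if cursor + size > row_length:   # party would run off the end of the row
            return None
        allocations.append((cursor, size))
        cursor += size + buffer          # skip the distancing buffer
    return allocations

print(allocate_row(10, [2, 3, 1]))  # [(0, 2), (4, 3), (9, 1)]
```

The real system presumably also handles multiple rows, aisle seats and front-to-back spacing, but each of those is the same pattern: a hard distancing constraint applied while filling capacity.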

Richards, speaking at a press briefing on Monday evening, said: "It took me 17 years to build the group up to 230 cinemas. What happened just a few months ago was apocalyptic.

"We have planned for crises such as a cinema being shut and blockbusters tanking, but not all the cinemas being down. Our big [cost] exposures are studios, people, and rent; we were quickly focused on our burn rate and liquidity."

Last month it was reported that Vue was lining up £100 million ($133 million) in additional debt financing. The firm is owned by Alberta Investment Management Corporation and pension fund Omers. Richards and other managers hold a 27% stake.

Vue has been slowly reopening its cinemas around Europe over the past few weeks.

"We have been using AI to help determine what is played, at what screen, and at which cinemas [to optimize revenues]," he said. "Our operating systems have been tweaked to social distance customers. It recognizes if you are with family and it will cocoon you. At the moment we are probably able to use 50% of cinemas' capacities.

"We can control the number of people in the foyer at any one time. Crowds would not be conducive to helping customers feel comfortable coming back. Every member of staff went through two days of safety training."

Richards said when he did reopen his movie theaters there was pent-up demand from customers but no new movies to screen.

"We still managed at a 50% run rate with classic movies that were not only already available on streaming services but on terrestrial television as well. People just wanted to get out of their homes and have some kind of normalcy."

Christopher Nolan's complex thriller "Tenet" is the first major new film to be released, and Richards said: "We are seeing 'Tenet' performing at the same levels as 'Inception' and 'Interstellar' did, which has been amazing.

"It will be a bumpy road in some areas but we expect a return to normalcy in six months; it will take a couple of months to get people comfortable again with their positions."

He said entertainment giant Disney DIS, -1.58% has a strong lineup of movie theater releases, despite sending "Mulan" straight to its streaming channel.

Fears that streaming service Netflix NFLX, -4.90% is a threat to the industry, as movie lovers become used to watching films at home, are unfounded, he said.

"Netflix has been disruptive for everything in the home," he said. "We are out of the home, so Netflix is complementary to us because most people who like film like film on all formats.

"I've seen the demise of the industry predicted definitely five or six times. We have been counter-cyclical: during downturns we are reasonably priced, so people come out and enjoy what we have to offer."

3 Predictions For The Role Of Artificial Intelligence In Art And Design – Forbes

Christie's made headlines in 2018 when it became the first auction house to sell a painting created by AI. The painting, named "Portrait of Edmond de Belamy," ended up selling for a cool $432,500, but more importantly, it demonstrated that intelligent machines are now perfectly capable of creating artwork.

It was only a matter of time, I suppose. Thanks to AI, machines have been able to learn more and more human functions, including the ability to see (think facial recognition technology), speak and write (chatbots being a prime example). Learning to create is a logical step on from mastering the basic human abilities. But will intelligent machines really rival humans' remarkable capacity for creativity and design? To answer that question, here are my top three predictions for the role of AI in art and design.

1. Machines will be used to enhance human creativity (enhance being the key word)

Until we can fully understand the brain's creative thought processes, it's unlikely machines will learn to replicate them. As yet, there's still much we don't understand about human creativity: those inspired ideas that pop into our heads seemingly out of nowhere, the "eureka!" moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines.

Typically, then, machines have to be told what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th century portraits, and was programmed to compare its own work with those paintings.

The takeaway from this is that AI will largely be used to enhance human creativity, not replicate or replace it, a process known as "co-creativity." As an example of AI improving the creative process, IBM's Watson AI platform was used to create the first-ever AI-generated movie trailer, for the horror film "Morgan." Watson analyzed visuals, sound, and composition from hundreds of other horror movie trailers before selecting appropriate scenes from "Morgan" for human editors to compile into a trailer. This reduced a process that usually takes weeks to just one day.

2. AI could help to overcome the limits of human creativity

Humans may excel at making sophisticated decisions and pulling ideas seemingly out of thin air, but human creativity does have its limitations. Most notably, we're not great at producing a vast number of possible options and ideas to choose from. In fact, as a species, we tend to get overwhelmed and less decisive the more options we're faced with! This is a problem for creativity because, as the American chemist Linus Pauling (the only person to have won two unshared Nobel Prizes) put it, "You can't have good ideas unless you have lots of ideas." This is where AI can be of huge benefit.

Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options, the ones that best fit the human creative's vision. In this way, machines could help us come up with new creative solutions that we couldn't possibly have come up with on our own.

For example, the award-winning choreographer Wayne McGregor has collaborated with the Google Arts & Culture Lab to come up with new, AI-driven choreography. An AI algorithm was trained on thousands of hours of McGregor's videos, spanning 25 years of his career, and as a result, the program came up with 400,000 McGregor-like sequences. In McGregor's words, the tool "gives you all of these new possibilities you couldn't have imagined."

3. Generative design is one area to watch

Much like in the creative arts, the world of design will likely shift towards greater collaboration between humans and AI. This brings us to generative design, a cutting-edge field that uses intelligent software to enhance the work of human designers and engineers.

Very simply, the human designer inputs their design goals, specifications, and other requirements, and the software takes over to explore all possible designs that meet those criteria. Generative design could be utterly transformative for many industries, including architecture, construction, engineering, manufacturing, and consumer product design.
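At its core, generative design is a search loop: sample candidate designs, discard those that violate the requirements, and keep the best of the rest. A deliberately toy version, sketched as random search over beam cross-sections with an invented stiffness proxy (real tools use far more sophisticated solvers and physics), might look like:

```python
import random

def generative_search(goal_strength, trials=5000, seed=42):
    """Toy generative-design loop: sample candidate beam cross-sections
    (width, height) and keep the lightest one whose stiffness proxy
    (width * height**2) still meets the design goal."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        w, h = rng.uniform(1, 20), rng.uniform(1, 20)
        if w * h ** 2 >= goal_strength:   # constraint: strong enough
            area = w * h                  # objective: least material
            if best is None or area < best[0]:
                best = (area, w, h)
    return best

area, w, h = generative_search(goal_strength=500)
# Every candidate kept satisfies w * h**2 >= 500; `area` is the smallest found.
```

The human contribution is exactly the part the code cannot invent: choosing the goal, the constraints and which of the surviving designs to pursue.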

In one exciting example of generative design, renowned designer Philippe Starck collaborated with software company Autodesk to create a new chair design. Starck and his team set out the overarching vision for the chair and fed the AI system questions like, "Do you know how we can rest our bodies using the least amount of material?" From there, the software came up with multiple suitable designs to choose from. The final design, an award-winning chair named "AI," debuted at Milan Design Week in 2019.

Machine co-creativity is just one of 25 technology trends that I believe will transform our society. Read more about these key trends, including plenty of real-world examples, in my new books, "Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution" and "The Intelligence Revolution: Transforming Your Business With AI."

The man behind Android says AI is the next major operating system – CNBC

The heart of Home is Essential's operating system, Ambient OS. Rubin didn't share much about the new software, but he did share his thoughts about how AI will become the next big operating system.

"I think it's AI. It's a slightly different AI than we see today. Today we see pattern matching and vision tricks and automation for self-driving cars and assistants like Siri or Google Assistant, but I think there's a thing after that that will coalesce into something that's more of an operating platform."

Rubin knows his own hardware company can't create the master AI platform alone, which is why his incubator Playground is so important.

"We're investing in hardware companies because we think they're essential in training AI," Rubin said. "One of our invested companies is called Light House. They make a camera for your home like a Dropcam except it uses AI to analyze everything that's happening in your house. You can ask if the kids went to school on time and it can answer."

Essential Home will allow you to play music through popular services, check the weather and more, all through a circular touchscreen.

But unlike other systems, like the Amazon Echo or Google Home, his plan is to create an OS that works with everything else. It's an ambitious goal with serious technical challenges, but Rubin knows enough about operating systems that he shouldn't be ignored.

Worlds first AI-generated arts festival program opens this Friday – The Next Web

The Edinburgh Fringe is the world's largest performing arts festival, but this year's event has sadly been canceled due to COVID-19. Fortunately, art junkies can still get their fix of the Fringe at a virtual alternative curated by an AI called the ImprovBot.

The system analyzed the 100-word text descriptions of every show staged at the festival from 2011 to 2019, a total of more than two million words. ImprovBot uses this data to generate ideas for new comedies, plays, musicals, and cabaret.

The blurbs will then be handed to the Improverts, the Fringe's longest-running improvised comedy troupe, who will stage their own takes on the shows over Twitter.

"The aim of ImprovBot is to explore the junction of human creativity and comedy, and to see how this is affected when an artificial intelligence enters into the mix," said Melissa Terras, Professor of Digital Cultural Heritage at the University of Edinburgh. "It is [a] reminder of the playfulness of the Fringe and we invite online audiences to rise to the provocation, and interact, remix, mashup, and play with the content."

In total, ImprovBot aims to create 350 show descriptions, which will be posted every hour on Twitter from August 7 to 31. It has already provided a sample of its oeuvre, which ranges from a terrifying tale of isolation titled "Collection to Politics" to a hilarious comedy called "The Man Behind the Real Song Lovers."

Truth be told, most of the blurbs are pretty nonsensical, so the Improverts will have a tough job adapting the AI's words for the stage. You can judge their efforts for yourself from Friday on Twitter.

Published August 5, 2020 17:21 UTC

In a world where machines and AI rule, re-skilling is the only way out – YourStory.com

Gartner says more than 3 million workers across the world will have a robo boss by 2018. High time businesses reorient skill development programs to help mid-level managers stay relevant.

In July, the Vodafone-Idea merger was approved by the Competition Commission of India (CCI). The mega deal will make the shareholders of both companies part of the largest telecom company in India, and reward them in the future. It will also create a situation that can quickly escalate into a nightmare.

As many as 6,000 senior-level leaders will have overlapping roles in the new entity. Industry sources say at least 50 percent of these will have to be let go and will not be employable. These individuals, who have put in at least 20 years of work in various roles within the organisation, have not been trained to keep pace with the digital era. And turning unemployable at the age of 45 is scary.

If the scene in Mumbai is bleak, in Chennai, the offices of ZohoCorp seem to have a Zen vibe.

Co-founder Sridhar Vembu is analysing technologies that can impact his organisation and his employees. He leaves no stone unturned when it comes to upskilling employees and is betting on technical languages that work for Zoho's applications. Sridhar spends a lot of time with his engineers, and almost 400-odd engineers have moved from server-side coding to building applications on Android. At Zoho, senior engineers are constantly relearning static languages (Scala and Java) and are even playing with dynamic languages (JavaScript, ActionScript, Ruby on Rails). Sridhar says:

"We can build a global organisation the Indian way. Unfortunately, all organisations use the western concept of hiring and firing, and focus on boosting shareholder returns. It is the problem of leaders who don't understand the impact they have on the employees fired; after all, they just followed what they were told."

He adds that it is the responsibility of leaders to ensure employees are up to date with new technologies.

"Even if people are talking about AI, you need human capital to train these machines in understanding data. I believe in contextual learning, and people in Zoho are learning from different teams at any given point of time," says Sridhar.

He continues to believe that human capital is a greater advantage in this era than ever before.

"Indians perform all rituals; unfortunately god has left the temple. Today, we have moved from being spiritual to being ritual," Sridhar says, implying that today everyone follows a leader or pursues a task, but neither leaders nor employees think about a holistic approach to learning and building systems.

Narayana Murthy, Chairman Emeritus of Infosys, at the founders' farewell dinner in 2015 urged his organisation to follow "compassionate capitalism." He had said, "It was our belief that it is only through excellence in people that we could achieve such growth."

With machine learning and AI skills requiring a deeper understanding of the industry, a manager has to retrain himself and ensure that the decline of legacy business functions, like code maintenance and quality testing jobs, does not affect new hires or people five years into the job.

Manoj Thelakkat, Founder of Reflex Training Partners, says, "Training modules have to change from time-based and certification programmes to contextual learning." He says his organisation is teaching senior managers through theatre and music to understand collaboration in the world of AI, preparing organisations to reskill staff rather than sack them based on an Excel sheet.

Corporates are reskilling and realising that AI does not mean losing jobs, but a realignment of jobs.

In a recent survey by PwC titled "Bot.Me: A Revolutionary Partnership," more than 50 percent of respondents believed AI could help better healthcare, financial management, security and education. Less than 40 percent believed it could advance income and gender equality. In the next five years, jobs such as tutors, travel agents, tax preparers, office and home assistants, health coaches, chauffeurs and general physicians will get replaced.

The survey was limited to developing markets where growth has stagnated and the population is aging. This plays right into India's hands, as these AI programs will be built by engineers.

Vijay Ratnaparkhe, Managing Director of Robert Bosch Engineering and Business Solutions Limited (RBEI), believes that one must not forget that today India is building software for the world and we are the brains powering future solutions.

Robert Bosch and its 15,000-plus engineers are building software for cars, which are learning about living objects by using monochrome cameras, ultrasound, radar and LIDAR technology. These are the kinds of roles that engineers must prepare for in the coming days.

As new technologies emerge, Infosys is investing in upskilling and has designed programs to help mid-management levels keep pace with change.

Richard Lobo, Executive VP and HR Head at Infosys, says: "Automation and related technology are the way forward and must be embraced by employees, irrespective of their role or job level."

He adds that new avenues have evolved rapidly, which need the company to reskill people on newer technologies and hire from outside to meet gaps in the skill mix. These include areas like user experience, cloud-native development, AI and industrial IoT, Big Data, Analytics and Automation.

"In this environment, it is important that employees showcase high learnability and the ability to re-skill themselves rapidly," Richard says.

Infosys has created game-based learning methodologies where the program focuses on taking disagreements and turning them into positive solutions. This program enables managers to consciously embrace differences of opinion and create an environment that cuts through the competitive nature of conflicts, promoting collaboration among teams and partners.

The company has invested in agile and feels that it is the only process devoted purely to the middle layer. Ever since Vishal Sikka took over as CEO, he has been preparing employees in Design Thinking, through which Infosys understands the entire technology requirement from the business perspective. The company has trained 142,218 employees so far and wants everyone in the company to go through that change.

Infosys also works with Stanford to train senior leaders. The Stanford Global Leadership Program had 36 graduates in the first quarter of this financial year. Seventy senior people have completed the program so far and another 40-plus are in the current batch.

"This is a one-of-a-kind program to build our next generation of leaders," Richard says.

Last quarter, Infosys finished training 3,000 people in AI technology, 2,100 of them on the Nia platform. It has created a bank of 3,500 videos and has also partnered with Udacity and Coursera for different skills.

There is a reason for this rush to train employees in new skills. Clients now expect IT services to be tied more closely to their own success in winning business.

Daimler AG, for example, is working with Bosch to launch a fleet of autonomous cars in a five-year time frame. For this form of business, a new framework of data analytics services, network and infrastructure needs to be created. It is here that IT services firms want to place their bets. They will use the current set of resources to build these new IT requirements. The days of doling out CVs may dwindle, and the engineering community has begun living in fear. But it is an era of constant learning.

Rajesh Kumar R, Delivery Head, Retail, CPG and Manufacturing at Mindtree, the $900-million IT services company, says: "With the advent of any new technology there will be some impact to certain jobs. However, the concern or fear is due to this short-term impact. Focusing on reskilling and technology education can help employees stay relevant."

He adds that irrespective of automation, collaboration is imperative to be successful in the current environment. For example, a startup ecosystem produces amazing innovation that corporates and governments can adopt. Similarly businesses of various sizes will address different segments of the industry and all these will need to work together to address demands. The future of the industry is moving towards a highly collaborative environment.

"Automation is impacting mid-level managers because it is now touching the so-called knowledge-worker areas, once thought not automatable," Rajesh says. He adds that far more cognitive tasks will be automated in the coming years, and that automation will happen more rapidly as we progress.

At Mindtree, learning is driven by Yorbit, the company's online enterprise learning platform, which has yielded great results within a year of its launch. Multiple knowledge sources are brought into the platform to enable directed self-learning, complemented with project-driven assisted learning to put knowledge into practice.

Automation is likely to take over cognitive routine tasks and shift human intervention to cognitive non-routine areas. For example, with ATMs, banks worry less about the mechanics of collecting and distributing cash; the workforce instead focuses on investment advice and customer relations. In the automobile industry, the human focus is on innovation and design rather than core manufacturing, where quality is taken for granted thanks to heavy, robot-driven automation.

Digital is prompting organisations across industries to reinvent and reimagine their employee enablement and engagement strategies for better business success. The correlation between employee engagement and business performance is becoming increasingly relevant.

According to Gartner Research, by 2018, 50 percent of team collaboration and communication will occur through mobile group collaboration apps. Organisations will have unified observational, social and people analytics to discover, design and share better work practices. With the workplace transforming at this pace, the workforce must reimagine its future by adopting newer technologies, and the role of leadership now includes making that change easier for employees.

David Raj, EVP and HR Head at CSS Corp, an IT services company, says: "In this context, the mid-level management, the future leaders/CXOs, in organisations also need to evolve and reinvent as traditional roles and structures come under increasing strain."

India's IT workforce comprises roughly 1.4 million mid-level managers, and they find themselves at the centre of reskilling and restructuring conversations across organisations.

NASSCOM believes the IT industry's current reskilling focus is on emerging technologies like Big Data, Analytics, Cloud, IoT, Mobility, and Design Thinking, while also investing in emerging skills like Machine Learning, Natural Language Processing, Artificial Intelligence, DevOps, Robotic Process Automation, and Cybersecurity.

However, as Mohandas Pai, Chairman of Manipal Global Education and former member of the board of directors of Infosys, says: "If growth beats job losses, employment will continue to grow. But we need to be prepared for automation."

There needs to be a constant evolution of skills by embracing concepts like job rotation and fluid teams.

David believes that adopting a mix of traditional and new-age learning methodologies, digital skilling platforms, along with a thrust on building full-stack professionals and institutionalising continuous learning, will play a pivotal role in creating the right differentiation and staying ahead of the pack.

Technology is changing fast and it is imperative for mid-level managers to seek out continuous learning opportunities.

Anand Venkateswaran, Vice President, Finance and Member, Board of Directors, Target India, says: "We expose our managers to the latest technology trends, and provide opportunities where they can leverage these learnings to support personal development and drive business outcomes."

He adds that senior managers are given the opportunity to mentor and interact with startups to be in touch with the latest and most relevant industry developments.

Some of the technologies that managers have to reskill for are Machine Learning, Natural Language Processing (NLP), Python, Java, open source technologies and Computer Generated Imagery (CGI).

However, all this boils down to three things: leadership, an employee's ability to learn, and a company's ability to train people quickly.

"Employees should keep in mind that if they are working for a CEO or a corporate that does not believe in reskilling them, they must quit before they are sacked," Sridhar says.

According to Accenture, companies will have to adapt their training, performance and talent acquisition strategies to account for a newfound emphasis on work that hinges on human judgment and skills, including experimentation and collaboration. Their survey on the impact of AI on management revealed the following:

AI will put an end to administrative management work. Managers spend most of their time on tasks at which they know AI will excel in the future. Specifically, surveyed managers expect that AI's greatest impact will be on administrative coordination and control tasks, such as scheduling, resource allocation and reporting.

There is both readiness and resistance in the ranks. Unlike their counterparts in the C-suite, lower-level managers are much more skeptical about AI's promise and express greater concern over issues related to privacy. Younger managers are more receptive than older ones. And managers in emerging economies seem ready to leapfrog the competition by embracing AI.

The next-generation manager will thrive on judgment work. AI-driven upheaval will place a higher premium on what we call judgment work: the application of human experience and expertise to critical business decisions and practices when the available information is insufficient to suggest a successful course of action. This kind of work will require new skills and mindsets.

A people-first strategy is essential. Replacing people with machines is not a goal in itself. While artificial intelligence enables cost-cutting automation of routine work, it also empowers value-adding augmentation of human capabilities. The findings suggest that augmentation, putting people first and using AI to amplify what they can achieve, holds the biggest potential for value creation in management settings.

Executives must start experimenting with AI. It's high time executives and organisations started experimenting with AI and learning from these experiences. If the labour market's shortage of analytical talent is any guide, executives can ill afford to wait and see whether they and their managers are equipped to work with AI and capable of acquiring the essential skills and work approaches.

With smart automation, quick robots and intelligent software bots becoming an integral part of the workforce, it's critical that organisations and employees collaborate to forge the path ahead. It's the only way to deal with the charge of the light brigade.

Link:

In a world where machines and AI rule, re-skilling is the only way out - YourStory.com

NASA are figuring out how to use AI to build autonomous space … – ScienceAlert

Adding artificial intelligence to the machines we send out to explore space makes a lot of sense, as it means they can make decisions without waiting for instructions from Earth, and now NASA scientists are trying to figure out how it could be done.

As we send out more and more probes into space, some of them may have to operate completely autonomously, reacting to unknown and unexplained scenarios when they get to their destination, and that's where AI comes in.

Steve Chien and Kiri Wagstaff from NASA's Jet Propulsion Laboratory think that these machines will also have to learn as they go, adapting to what they find beyond the reaches of our most powerful telescopes.

"By making their own exploration decisions, robotic spacecraft can conduct traditional science investigations more efficiently and even achieve otherwise impossible observations, such as responding to a short-lived plume at a comet millions of miles from Earth," write the researchers.

One example they give is AI that can tell the difference between a storm and normal weather conditions on a distant planet, making the readings that are being taken much more useful to scientists back home.

Just like Google uses AI to recognise dogs and cats in photos, an explorer buggy could learn to tell the difference between snow and ice, or between running water and still water, adding extra value and meaning to the data it gathers.

The researchers suggest AI-enabled probes could reach as far as Alpha Centauri, some 4.24 light-years away from Earth. Communications across that distance would be received by the generation after the scientists who launched the mission in the first place, so giving the probe a mind of its own would certainly speed up the decision-making process.

The next generation of AI robots will have to be able to detect "features of interest", detect unforeseen features, process and analyse data, and adapt their original plans where necessary, say the researchers.

And when smart probes get the chance to work together, the effects of AI will be even more powerful, as these artificial minds will be able to put their heads together to overcome challenges.

We are already seeing some of this artificial intelligence and autonomy out in space today. The Mars Curiosity rover has software on board that helps it pick promising targets for its ChemCam, a device that studies rocks and other geological features on the Red Planet.

By making its own decisions rather than always waiting for instructions from Earth, Curiosity is now much better at finding significant targets and is able to gather a larger haul of data, according to researchers.

Meanwhile the next rover to be sent to Mars in 2020 will be able to adjust its data collection processes based on the resources available, report Chien and Wagstaff.

In time, AI is going to become more and more important to space travel, the researchers say, and as artificial intelligence makes big strides forward here on Earth it's also set to have a big role in how we explore the rest of the Universe.

The research has been published in Science Robotics.


Microsoft AI’s next leap forward: Helping you play video games – CNET

Could you be playing the next big video game with your voice?

Voice assistants can seem supersmart. Ask my Amazon Alexa why the sky is blue, and you'll get a lesson in light refraction through the atmosphere.

Ask it what CNET is and things start to break down.

"In addition CNET currently has region-specific and language-specific editions."

Well, sure. Then I asked Alexa when the Super Bowl was, right before Sunday night's game. It responded:

"Super Bowl 50's winner is Denver Broncos."

That's one of the biggest contradictions with voice assistants. They can control your lights, play music and even tell you silly jokes. But despite their growing presence in our lives, their capabilities are still very limited.

So far, the way many companies have made them better is to hand-code each response. For example, someone at Amazon could go into Alexa's code and teach it what CNET is and when the next Super Bowl will take place.

Microsoft thinks it's found a different way. It's inviting app developers and companies to use its technology, feeding questions, giving responses and learning what needs to be fixed along the way.

The software giant isn't the only one looking for new uses for artificial intelligence, which, in shorthand, is essentially software that can learn, adapt and act in more subtle, sophisticated ways. Facebook is training its AI with all sorts of software tools, including one in its Oregon data center that's trying to teach a computer to create an original piece of art after looking at a series of pictures. Google, meanwhile, is teaching its AI to play board games. And IBM is refining its AI, called Watson, by feeding it data from all manner of businesses.

Microsoft has had its share of public AI efforts too. It offers a voice assistant in its Windows PC and phone software called Cortana, which will happily jot down reminders and answer trivia questions.

It has also released experiments like Tay, a Twitter chatbot that learned from conversations with people. The experiment, however, was quickly taken offline after people taught it to hate feminists, praise Adolf Hitler and solicit sex.

This time around, Microsoft is taking a more measured approach by offering its AI tools to developers. So far, the results have been encouraging.

A security footage startup called Prism has started using Microsoft's tools to help organize playback video. Prism identifies when there's an object in the video that wasn't there before. Then it sends an image from that clip to Microsoft to identify what's in the picture and gets responses back like "dog" or "package."

This could take hours for a person to do, but combining Prism's technology with Microsoft's AI means a search to see how many packages came to the front desk that day takes mere moments. "It's unfathomable to think about how much data there is," said Adam Planas, a creative director at Prism.
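The change-detection step Prism describes can be approximated with simple background subtraction. Below is a minimal pure-Python sketch; the function name, thresholds, and synthetic frames are all illustrative assumptions, since neither Prism's pipeline nor Microsoft's vision API is detailed in the article:

```python
def has_new_object(background, frame, threshold=30, min_pixels=500):
    """Flag a frame whose absolute pixel difference from a reference
    background exceeds `threshold` on at least `min_pixels` pixels."""
    changed = sum(
        1
        for bg_row, fr_row in zip(background, frame)
        for bg_px, fr_px in zip(bg_row, fr_row)
        if abs(fr_px - bg_px) > threshold
    )
    return changed >= min_pixels

# Synthetic 100x100 grayscale frames: a flat background, then the same
# scene with a bright 30x30 "package" (900 changed pixels) added.
background = [[50] * 100 for _ in range(100)]
frame = [row[:] for row in background]
for y in range(10, 40):
    for x in range(10, 40):
        frame[y][x] = 200

print(has_new_object(background, frame))       # True
print(has_new_object(background, background))  # False
```

Only the frames flagged this way would then be sent off for labelling, which is what keeps the cloud-recognition bill and the search time small.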

Microsoft's doing the same with voice commands, offering apps not just transcriptions of what I say, but an estimation of what it means, too. That is, if a video game is expecting to hear me say "how old are you" and I say "you look really young," it'll know I basically mean the same thing.

That's a big improvement over the voice command software Alexander Mejia and his team at Human Interact were using before they turned to Microsoft. Their project, Starship Commander, is a new virtual reality game entirely controlled by the player's voice.

"When people put on the headset, they start role-playing, they get into character," he said. "They want to be the starship commander and go forth and have an adventure."

The goal, he said, is to make players feel completely natural talking to the game. Part of that is by creating a slick-looking game that immerses the player to the point that they feel as though they are on a starship. Then, the game has to coax the player into talking enough that after a while, it's just natural. The only downside is that the game will require an internet connection to send your voice commands to Microsoft for processing.

But the upside is that process is "crazy fast," said Sophie Wright, vice president of business development at Human Interact (who also doubles as a character in the game).

Microsoft believes that by inviting developers to use its technology, they can help train its AI. Aside from the 5,000 engineers Microsoft has working on artificial intelligence, more than 424,000 outside developers have signed up to try it out too.

"I think we're on the cusp of a breakthrough," said Andrew Shuman, a corporate vice president at Microsoft who leads the company's AI research group. Once AI is able to understand us better, it can start truly helping in our daily lives. Imagine being able to ask a security camera where you left your car keys.

"You can set up for real user delight," Shuman said.


Intel will speed up Mukesh Ambani's 5G run, power him with AI and possibly a Jio laptop all in return for ma – Business Insider India

"This move is more strategic rather than just an investment. It's more of a pull rather than a push strategy," Counterpoint analyst Tushar Pathak told Business Insider.

Experts believe that Intel's investment in Jio can help Reliance across three main verticals: artificial intelligence, the 5G push and a possible debut of a Jio laptop.


Using Intel's AI expertise to manage data from over 500 million subscribers

Bernstein's report from June estimated that Jio would capture nearly half the Indian market by 2025, calling the company the new king of Indian telecommunications. It forecasts that Jio's subscriber base will jump from its current 388 million subscribers to cross 500 million by 2023 and hit 609 million by 2025.


Most of Intel Capital's past investments have focussed on artificial intelligence (AI), including edge computing, cloud technology and network transformation from 4G to 5G.

"When 500 million subscribers interact and create a massive database, the role of AI is going to be huge for a company like Jio. A lot of their users will have a cross-platform approach," Pathak noted.


According to Pathak, Intel's impetus is to keep an eye on companies that have the ability to make the technological shift that happens only once every decade. In 2020, it's the shift to 5G.

A Jio executive wrote on LinkedIn that Jio and Intel's leadership in ORAN and OpenRAN, both open source 5G software, will benefit the growth and transformation of next-generation networks. "Intel has advanced edge computing offerings across processors, analytics and AI, and access to this technology can help Jio Platforms' engineering teams make significant pace with their 5G technology and IoT ecosystem rollouts," said a report by Greyhound Research.

Intel's expertise in consumer electronics could also come in handy for the telecom giant. Jio has already marked its foray into the consumer electronics segment with mobiles, and the partnership with Intel could help it debut in another segment, laptops, where Intel has a majority market share.

"Intel can help Jio launch computing devices (laptops and tablets) and accessories like cameras. It might help to note that Jio is already aggressively pursuing its ambitions to become a smart city vendor, in which cameras are a key ask. Also, the company has already launched IoT cameras for homes for both smart TV and surveillance purposes," said the Greyhound report.


Mcubed: More speakers join machine learning and AI extravaganza – The Register

The speaker lineup for Mcubed, our three-day dive into machine learning, AI and advanced analytics, is virtually complete, meaning now would be a really good time to snap up a cut-price early-bird ticket.

Latest additions include Expero Inc's Steve Purves, who'll be discussing graph representations in machine learning, while Ben Chamberlain of ASOS will be discussing how the mega fashion etailer combines ML and social media information.

Steve and Ben join a lineup of experts who aren't just looking to the future, but are actually applying ML and AI principles to real business problems right now, at companies like Ocado and OpenTable.

Our aim is to show you how you can apply tools and methodologies to allow your business or organisation to take advantage of ML, AI and advanced analytics to solve the problems you face today, as well as prepare for tomorrow.

At the same time, we'll be looking at the organisational, legal and ethical implications of AI and ML, as well as taking a look at some of the most interesting applications, including autonomous vehicles and robotics.

And our keynote speakers, professor Mark Bishop of Goldsmiths, University of London, and Google's Melanie Warrick, will be grappling with the big issues and setting the tone for the event as a whole.

This all takes place at 30 Euston Square. As well as being easy to get to, this is simply a really pleasant environment in which to absorb some mind-expanding ideas, and discuss them on the sidelines with your fellow attendees and the speakers.

Of course, we'll ensure there's plenty of top-notch food and drink to fuel you through the formal and less formal parts of the programme.

Tickets will be limited, so if you want to ensure your place, head over to our website and snap up your early-bird ticket now.


Facebook Killed an AI After It Came Up With Its Own Language – Nerdist

For decades, humanity has feared that the rise of artificial intelligence could cause unintended and even harmful side effects in the real world. While there are some who have predicted a robo-apocalypse, few would have suspected that the English language would be the first victim in the war between man and machine!

According to a report by Digital Journal, Facebook was experimenting with an artificial intelligence system that essentially gave up on using English in favor of creating its own more efficient language. The researchers on the project reportedly shut down the A.I. once they realized they could no longer understand its language. One of the reasons that the communication gap is significant is that it could theoretically mean that machines will be able to write their own languages and lock users out of their own systems. And you know where that leads

Well, we're reasonably sure it won't come down to Terminators and the end of the world (probably). But Elon Musk has recently been offering warnings about letting AI run amok. And we don't entirely disagree with him. It's something that should be handled delicately. And killer robots on a battlefield will always be a bad idea.

As for Facebook's linguistic AI, it turns out that the bot may have been on to something. The sentences "I can i i everything else" and "balls have zero to me to me to me" sound like nonsense to us, but they demonstrate how two of the AI bots negotiated with each other. The repeated words and letters apparently indicated a back-and-forth over the amounts that each bot should take in their negotiations. Essentially, it was shorthand.

Somehow, we doubt that use of language will catch on with humanity. But it is fascinating to see what the machines will come up with on their own, provided that they don't kill us all in the process.

What do you think about Facebook's language-altering AI? Download your thoughts to the comment section below!

Images: MGM/Skydance Productions


How AI is shaping the new life in life sciences and pharmaceutical industry – YourStory

The pharma and life sciences industry is faced with increasing regulatory oversight, decreasing R&D productivity, challenges to growth and profitability, and the impact of artificial intelligence (AI) in the value chain. The regulatory changes led by the far-reaching Patient Protection and Affordable Care Act (PPACA) in the US are forcing the pharma and life sciences industry to change its status quo.

Besides the increasing cost of regulatory compliance, the industry is facing rising R&D costs, even though the health outcomes are deteriorating and new epidemics are emerging. Led by the regulatory changes, the customer demographics are also changing. The growth is being driven by emerging geographies of APAC and Latin American region.

Pharmaceutical organisations can leverage AI in a big way to drive insightful decisions across their business, from product planning and design to manufacturing and clinical trials, in order to enhance collaboration in the ecosystem, information sharing, process efficiency and cost optimisation, and to drive competitive advantage.

AI enables data mining, engineering, and real-time, algorithm-driven decision-making solutions, which help in responding to the following key disruptions in the pharmaceutical business value chain:

Though genomics currently hogs the spotlight, there are plenty of other biotechnology fields wrestling with AI. In fact, when it comes to human microbes, the bacteria, fungi, and viruses that live on or inside us, we are talking about astronomical amounts of data. Scientists with the NIH's Human Microbiome Project have counted more than 100 trillion microbes in the human body.

To determine which microbes are most important to our well-being, researchers at the Harvard School of Public Health used unique computational methods to identify around 350 of the most important organisms in their microbial communities. With the help of DNA sequencing, they sorted through 3.5 terabytes of genomic data and pinpointed genetic name tags, sequences specific to those key bacteria. They could then identify where and how often these markers occurred throughout a healthy population. This gave them the opportunity to catalogue over 100 opportunistic pathogens and understand where in the microbiome these organisms normally occur. As with genomics, there are also plenty of startups, among them Libra Biosciences, Vedanta Biosciences, Seres Health and Onsel, looking to leverage the new discoveries.

Perhaps the biggest data challenge for biotechnologists is synthesis. How can scientists integrate large quantities and diverse sets of data, genomic, proteomic, phenotypic, clinical, semantic, social and so on, into a coherent whole?

Many AI researchers are working to provide plausible answers:

Cambridge Semantics has developed semantic web technologies that help pharmaceutical companies sort and select which businesses to acquire and which drug compounds to license.

Data scientists at the Broad Institute of MIT and Harvard have developed the Integrative Genomics Viewer (IGV), open source software that allows for the interactive exploration of large, integrated genomic datasets.

GNS Healthcare is using proprietary causal Bayesian network modeling and simulation software to analyse diverse sets of data and create predictive models and biomarker signatures.

Numbers-wise, each human genome is composed of 20,000-25,000 genes spread across three billion base pairs. That's around three gigabytes of data. Consider what genomics, and the role of AI in personalising the healthcare experience, means at scale:

Sequencing millions of human genomes would add up to hundreds of petabytes of data.

Analysis of gene interactions multiplies this data even further.

In addition to sequencing, massive amounts of information on structure/function annotations, disease correlations, population variations, the list goes on, are being entered into databanks. Software companies are furiously developing tools and products to analyse this treasure trove.

For example, using Google frameworks as a starting point, the AI team at NextBio have created a platform that allows biotechnologists to search life-science information, share data, and collaborate with other researchers. The computing resources needed to handle genome data will soon exceed those of Twitter and YouTube, says a team of biologists and computer scientists who are worried that their discipline is not geared to cope with the coming genomics flood.

By 2025, between 100 million and 2 billion human genomes could have been sequenced, according to a study published in the journal PLoS Biology. The data-storage demands for this alone could run to as much as 240 exabytes (1 exabyte is 10^18 bytes), because the data that must be stored for a single genome is about 30 times larger than the genome itself, to make up for errors incurred during sequencing and preliminary analysis.
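The storage arithmetic behind these figures is easy to sketch. Assuming roughly one byte per base pair and the 30x raw-data overhead described above (back-of-the-envelope constants, not figures from the study itself), the estimate lands in the same hundreds-of-exabytes range:

```python
BASE_PAIRS = 3_000_000_000        # ~3 billion base pairs per genome
BYTES_PER_GENOME = BASE_PAIRS     # ~1 byte per base pair => ~3 GB
OVERHEAD = 30                     # raw reads kept for error correction

def storage_exabytes(genomes, overhead=OVERHEAD):
    """Total storage in exabytes if each genome is kept with its
    ~30x raw sequencing overhead (1 exabyte = 10**18 bytes)."""
    return genomes * BYTES_PER_GENOME * overhead / 1e18

print(storage_exabytes(100_000_000))    # low estimate, 100M genomes: 9.0
print(storage_exabytes(2_000_000_000))  # high estimate, 2B genomes: 180.0
```

With these rough constants the high end comes to about 180 exabytes, the same order of magnitude as the 240-exabyte figure cited.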

The extensive data generation in pharma, genomics, and the microbiome serves as a clarion call that these fields are going to pose some severe challenges. Astronomers and high-energy physicists process much of their raw data soon after collection and then discard it, which simplifies later steps such as distribution and analysis. But fields like genomics do not yet have standards for converting raw sequence data into processed data.

The variety of analyses that biologists want to perform in genomics is also uniquely large, the authors write, and current methods for performing these analyses will not necessarily translate well as the volume of such data rises. For instance, comparing two genomes requires comparing two sets of genetic variants; if you have a million genomes, you're talking about on the order of a million-squared pairwise comparisons. Algorithms able to deliver this will require strong data engineering capabilities.
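The quadratic blow-up is simple to quantify: comparing every genome against every other genome once yields n(n-1)/2 pairs.

```python
def pairwise_comparisons(n):
    # Each unordered pair of genomes is compared exactly once.
    return n * (n - 1) // 2

print(pairwise_comparisons(1_000_000))  # 499999500000, ~half a million squared
```

At a million genomes that is roughly five hundred billion comparisons, which is why naive all-pairs analysis stops being an option at population scale.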

There's a massive opportunity for AI to transform the life sciences and pharmaceutical industry. The disruptions to business value chains mentioned above have already started making inroads, and CXOs in the life sciences industry have realised the virtues of an innovation and transformation regime led by AI. Brace for more AI-led interventions in the life sciences industry.

(Edited by Evelyn Ratnakumar)

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)


AI Engineers Need to Think Beyond Engineering – Harvard Business Review

Executive Summary

It is very, very easy for a well-intentioned AI practitioner to inadvertently do harm when they set out to do good: AI has the power to amplify unfair biases, making innate biases exponentially more harmful. Because AI often interacts with complex social systems, where correlation and causation might not be immediately clear or even easily discernible, AI practitioners need to build partnerships with community members, stakeholders, and experts to help them better understand the world they're interacting with and the implications of making mistakes. Community-based system dynamics (CBSD) is a promising participatory approach to understanding complex social systems that does just that.

Artificial intelligence (AI) has become one of the biggest drivers of technological change, impacting industries and creating entirely new opportunities. From an engineering standpoint, AI is just a more advanced form of data engineering. Most good AI projects function more like muddy pickup trucks than spotless race cars: they are a workhorse technology that humbly makes a production line 5% safer or movie recommendations a little more on point. However, more so than with many other technologies, it is very, very easy for a well-intentioned AI practitioner to inadvertently do harm when they set out to do good. AI has the power to amplify unfair biases, making innate biases exponentially more harmful.

As Google AI practitioners, we understand that how AI technology is developed and used will have a significant impact on society for many years to come. As such, it's crucial to formulate best practices. This starts with the responsible development of the technology and mitigating any potential unfair bias which may exist, both of which require technologists to look more than one step ahead: not "Will this delivery automation save 15% on the delivery cost?" but "How will this change affect the cities where we operate and the people, at-risk populations in particular, who live there?"

This has to be done the old-fashioned way: by human data scientists understanding the process that generates the variables that end up in datasets and models. What's more, that understanding can only be achieved in partnership with the people represented by and impacted by these variables: community members and stakeholders, such as experts who understand the complex systems that AI will ultimately interact with.

How do we actually implement this goal of building fairness into these new technologies, especially when they often work in ways we might not expect? As a first step, computer scientists need to do more to understand the contexts in which their technologies are being developed and deployed.

Despite our advances in measuring and detecting unfair bias, causation mistakes can still lead to harmful outcomes for marginalized communities. What's a causation mistake? Take, for example, the observation during the Middle Ages that sick people attracted fewer lice, which led to an assumption that lice were good for you. In actual fact, lice don't like living on people with fevers. Causation mistakes like this, where a correlation is wrongly thought to signal a cause and effect, can be extremely harmful in high-stakes domains such as health care and criminal justice. AI system developers, who usually do not have social science backgrounds, typically do not understand the underlying societal systems and structures that generate the problems their systems are intended to solve. This lack of understanding can lead to designs based on oversimplified, incorrect causal assumptions that exclude critical societal factors and can lead to unintended and harmful outcomes.

For instance, the researchers who discovered that a medical algorithm widely used in U.S. health care was racially biased against Black patients identified that the root cause was the mistaken causal assumption, made by the algorithm designers, that people with more complex health needs will have spent more money on health care. This assumption ignores critical factors, such as lack of trust in the health care system and lack of access to affordable health care, that tend to decrease spending on health care by Black patients regardless of the complexity of their health care needs.

Researchers make this kind of causation/correlation mistake all the time. But things are worse for a deep learning computer, which searches billions of possible correlations in order to find the most accurate way to predict data, and thus has billions of opportunities to make causal mistakes. Complicating the issue further, it is very hard, even with modern tools such as Shapley analysis, to understand why such a mistake was made: a human data scientist sitting in a lab with their supercomputer can never deduce from the data itself what the causation mistakes may be. This is why, among scientists, it is never acceptable to claim to have found a causal relationship in nature just by passively looking at data. You must formulate the hypothesis and then conduct an experiment in order to tease out the causation.
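The lice example above can be reproduced as a toy simulation. A purely data-driven learner, given only the observational data, finds a strong negative correlation between lice and sickness and has no way to see the hidden cause (fever). This is an illustrative sketch with invented numbers, not a real study:

```python
import random
import statistics

random.seed(0)

# Toy model of the medieval lice/fever confounder: fever is the hidden
# cause that both makes people sick and drives lice away.
n = 1000
fever = [random.random() < 0.3 for _ in range(n)]
sick = [1.0 if f else 0.0 for f in fever]  # for simplicity, sickness = fever
lice = [random.gauss(2 if f else 10, 1) for f in fever]  # lice avoid fevered hosts

def pearson(xs, ys):
    """Pearson correlation coefficient, computed with the stdlib only."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strong negative correlation: from the data alone, "more lice" predicts
# "less sickness," which a naive learner could wrongly read as protection.
r = pearson(lice, sick)
print(round(r, 2))
```

Nothing in the data distinguishes "lice prevent sickness" from "fever repels lice"; only an intervention (an experiment) could tell them apart.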

Addressing these causal mistakes requires taking a step back. Computer scientists need to do more to understand and account for the underlying societal contexts in which these technologies are developed and deployed.

Here at Google, we have started to lay the foundations for what this approach might look like. In a recent paper co-written by DeepMind, Google AI, and our Trust & Safety team, we argue that considering these societal contexts requires embracing the fact that they are dynamic, complex, non-linear, adaptive systems governed by hard-to-see feedback mechanisms. We all participate in these systems, but no individual person or algorithm can see them in their entirety or fully understand them. So, to account for these inevitable blindspots and innovate responsibly, technologists must collaborate with stakeholders (representatives from sociology, behavioral science, and the humanities, as well as from vulnerable communities) to form a shared hypothesis of how they work. This process should happen at the earliest stages of product development, even before product design starts, and be done in full partnership with communities most vulnerable to algorithmic bias.

This participatory approach to understanding complex social systems, called community-based system dynamics (CBSD), requires building new networks to bring these stakeholders into the process. CBSD is grounded in systems thinking and incorporates rigorous qualitative and quantitative methods for collaboratively describing and understanding complex problem domains, and we've identified it as a promising practice in our research. Building the capacity to partner with communities in fair and ethical ways that provide benefits to all participants needs to be a top priority. It won't be easy. But the societal insights gained from a deep understanding of the problems that matter most to the most vulnerable in society can lead to technological innovations that are safer and more beneficial for everyone.

When communities are underrepresented in the product development design process, they are underserved by the products that result. Right now, we're designing what the future of AI will look like. Will it be inclusive and equitable? Or will it reflect the most unfair and unjust elements of our society? The more just option isn't a foregone conclusion; we have to work towards it. Our vision for the technology is one where a full range of perspectives, experiences and structural inequities are accounted for. We work to seek out and include these perspectives in a range of ways, including human rights diligence processes, research sprints, direct input from vulnerable communities, and organizations focused on inclusion, diversity, and equity, such as WiML (Women in ML) and Latinx in AI; some of these organizations, such as Black in AI and Queer in AI, were also co-founded and are co-led by Google researchers.

If we, as a field, want this technology to live up to our ideals, then we need to change how we think about what we're building: to shift our mindset from building "because we can" to building "what we should." This means fundamentally shifting our focus to understanding deep problems and working to ethically partner and collaborate with marginalized communities. This will give us a more reliable view of both the data that fuels our algorithms and the problems we seek to solve. This deeper understanding could allow organizations in every sector to unlock new possibilities of what they have to offer while being inclusive, equitable and socially beneficial.

Go here to see the original:

AI Engineers Need to Think Beyond Engineering - Harvard Business Review

Shield AI Recognized As One of the Most Promising AI Companies – AiThority

Forbes includes emerging defense tech startup in its annual AI 50 list of companies using artificial intelligence in meaningful ways

Shield AI, the technology company focused on developing innovative AI technology to safeguard the lives of military service members and first responders, expressed its gratitude to Forbes for naming the company as one of the "AI 50: America's Most Promising Artificial Intelligence Companies" for 2020. The five-year-old company has developed AI technology that enables unmanned systems to interpret signals and react autonomously in dynamic environments, including on the battlefield. Shield AI's products are already being utilized by the US Department of Defense to augment and extend service members' ability to execute complex missions.

Shield AI co-founder Brandon Tseng, who served in the U.S. Navy for seven years, including as a SEAL, said: "Following my last deployment, I came home with the strong conviction that artificial intelligence could make a profound positive impact for our service members. This was the idea that Shield AI was founded upon, and a half-decade later, we are elated to have Forbes recognize our innovation of AI technology as both promising and meaningful."


Shield AI has grown from fewer than 30 employees at the end of 2017 to nearly 150 today, while producing revenue metrics on pace with the growth trajectory of the most promising venture-backed start-ups, including doubling its revenue between 2018 and 2019. In an adjoining profile, Forbes noted that Shield AI is "in prime position to capitalize on the nascent market consisting of autonomous technology linked to national security issues."


Shield AI has developed three cutting-edge products for its range of customers, spanning both software and systems. Its Nova quadcopter is an unmanned, artificially intelligent robotic system which can autonomously explore and map complex real-world environments without reliance on GPS or a human operator. Nova is powered by Hivemind Edge, the company's intelligent software stack that enables machines to execute complex, unscripted tasks in denied and dynamic environments without direct operator inputs. The application is edge-deployed, with all processing and computation occurring without relying on a central intelligence hub, a critical need in environments lacking stable communications. The second software product, Hivemind Core, integrates data management and analysis, scalable simulation, and self-directed learning in order to radically accelerate product development workflows.

In the coming months, Shield AI will unveil a second-generation Nova quadcopter aimed at bringing the power of resilient AI systems to an even wider array of mission sets, coupled with the ability to partner in real time with operators to navigate tunnels beneath the earth and multi-level structures.


Why AI Is The Perfect Drinking Buddy For The Alcoholic Beverage Industry – Analytics India Magazine

The use of AI-driven processes to increase efficiency in the F&B market is no longer an anomaly. A host of breweries and distilleries have incorporated the technology not only to develop flavour profiles faster, but also for other functions, including packaging and marketing, as well as to ensure they meet all food-safety regulations.

Although the intention is not to replace the brewmaster or distiller, the technology becomes a thrilling learning experiment that equips them with multiple data points that could help them come up with innovative ideas.

Here is a list of companies that have successfully blended technology into their beverages to make a heady cocktail:

The company claims to be the world's first to use AI algorithms and machine learning to create innovative beers that adapt to users' taste preferences. Based on customer feedback, the recipe for their brews goes through multiple iterations to generate various combinations. IntelligentX currently has four different varieties: Black AI, Golden AI, Pale AI, and Amber AI.

How does it work?

Codes are printed on the cans which direct customers to the Facebook Messenger app. They are then asked to give feedback on the beer they tried by answering a series of 10 questions. The data points gathered are then fed into an AI algorithm to spot trends and inform the overall brewing process. Furthermore, using the feedback, the AI also learns to ask better questions each time to get better outcomes.

Although the insights gathered give brewmasters a window into understanding customer preferences better, the final decision on whether to heed the AI's recommendations and create a fresh brew rests with them. But what is certain is that without technological intervention, such a large collection of data would be not only difficult but also extremely time-consuming to process.

Multi-award-winning Swedish whiskey distillery Mackmyra Whisky collaborated with Microsoft and Finnish tech company Fourkind to create the world's first AI-generated whiskey. Built using Microsoft Azure and Machine Learning Studio, Fourkind's AI solution was fed Mackmyra's existing recipes and customer feedback data to create thousands of different recipes.

Following this, the distillery's master blender, Angela D'Orazio, used her experience to review which ingredients would work well together, filtering the recipes down to more desirable combinations. Since this process was repeated multiple times over, the AI algorithm picked up on which combinations worked best and, using machine learning, began producing more desirable mixes. Eventually, D'Orazio filtered it down to five recipes, finally arriving at recipe number 36, which ultimately became the world's first AI-generated whiskey to go into production.

This AI-generated but human-curated whiskey has opened the doors to new and innovative combinations that might otherwise never have been discovered. Dubbed Intelligens, the first batch of this blend was launched in September 2019.

The Copenhagen-based brewery started a multimillion-dollar project in 2017 to analyse different flavours in its beer using AI. Unlike IntelligentX which uses customer feedback to improve its brew, Carlsberg has accomplished this by developing a taste-sensing platform that helps identify the differential elements of the flavours.

Under the ongoing Beer Fingerprinting Project, 1000 different beer samples are created each day. With the help of advanced sensors, the flavour fingerprint of each sample is determined. Following this, different yeasts are analysed to map the flavours and help make a distinction between them. Thus, the data collected by this AI-powered system could potentially be used to develop new varieties of brews.

Launched in collaboration with Microsoft, Aarhus University and the Technical University of Denmark, the project marked a shift from conventional practices that did not involve any technology.

AB InBev, the brewer of Budweiser and Corona, has also jumped on the AI bandwagon to shake up its business. The company has invested in a slew of initiatives to improve how it brews beer. The Beer Garage is one such initiative: sitting at the intersection of the startup ecosystem and the AB InBev business, it focuses on developing technology-driven solutions. ZX Ventures, another offshoot of its larger business, was launched in 2015 with the objective of creating new products that address consumer needs.

Anchored around these enterprises, AB InBev is using machine learning capabilities to stay ahead of the curve in three broad areas:

This maker of Belgian-inspired ales has begun integrating AI and IoT into its brewing process to improve both the quality of the beer, as well as its manufacturing process. It started when a significant problem came to light at the packaging stage.

When the beer was loaded into bottles, it was observed that the level at which it was filled was inconsistent. Another problem was the excessive foaming inside the bottles. This spiked the oxygen levels in the beer, which is known to ruin the flavour and reduce the beer's shelf life.

After SCB partnered with IBM, the tech giant installed a camera at SCB's warehouse that took pictures of the beer as it crossed the bottling line. The team of engineers at IBM uploaded these images, combined with other data collected during the packaging operations, to the cloud. Brewers at SCB also provided specific criteria which they found to be useful, and Watson algorithms were then left to interpret the large amount of data quickly and solve the problem. From losing more than $30,000 a month in beer spillage, SCB found a solution by building AI and IoT into its brewing processes.

See the original post:

Why AI Is The Perfect Drinking Buddy For The Alcoholic Beverage Industry - Analytics India Magazine

Don't leave it up to the EU to decide how we regulate AI – City A.M.

The war of words between Britain and the EU has begun ahead of next month's trade talks.

But as Britain sets its own course on everything from immigration to fishing, there is one area where the battle for influence is only just kicking off: the future regulation of artificial intelligence.

As AI becomes a part of our everyday lives, from facial recognition software to the use of black-box algorithms, the need for regulation has become more apparent. But around the world, there is rigorous disagreement about how to do it.

Last Wednesday, the EU set out its approach in a white paper, proposing regulations on AI in line with European values, ethics and rules. It outlined a tough legal regime, including pre-vetting and human oversight, for high-risk AI applications in sectors such as medicine, and a voluntary labelling scheme for the rest.

In contrast, across the Atlantic, Donald Trump's White House has so far taken a light-touch approach, publishing 10 principles for public bodies designed to ensure that regulation of AI doesn't needlessly get in the way of innovation.

Britain has yet to set out its own approach, and we must not be too late to the party. If we are, we may lose the opportunity to influence the shaping of rules that will impact our own industry for decades to come.

This matters, because AI firms the growth generators of the future can choose where to locate and which market to target, and will do so partly based on the regulations which apply there.

Put simply, the regulation of AI is too important for Britain's future prosperity to leave it up to the EU, or anyone else.

That doesn't mean a race to the bottom. Regulation is meaningless if it is so lax that it doesn't prevent harm. But if we get it right, Britain will be able to maintain its position as the technology capital of Europe, as well as setting thoughtful standards that guide the rest of the western world.

So what should a British approach to AI regulation look like?

It is tempting for our legislators to simply give legal force to some of the many vague ethical codes currently floating around the industry. But the lack of specificity of these codes means that they would result in heavy-handed blanket regulation, which could have a chilling effect on innovation.

Instead, the aim must be to ensure that AI works effectively and safely, while giving companies space to innovate. With that in mind, we have created four principles which we believe a British approach to AI regulation should be designed around.

The first is that regulations should be context-specific. AI is not one technology, and it cannot be governed as such. Medical algorithms and recommender algorithms, for example, are likely to both be regulated, but to differing extents because of the impact of the outcomes: the consequences of a diagnostic error are far greater than those of an algorithm pushing an irrelevant product advert into your social media feed.

Our second principle is that regulation must be precise; it should not be left up to tech companies themselves to interpret.

Fortunately, the latest developments in AI research, including some which we are pioneering at Faculty, allow for analysis of an algorithm's performance across a range of important dimensions: accuracy (how good is an AI tool at doing its job?); fairness (does it have implicit biases?); privacy (does it leak people's data?); robustness (does it fail unexpectedly?); and explainability (do we know how it is working?).

Regulators should set out precise thresholds for each of these according to the context in which the AI tool is deployed. For instance, an algorithm which hands out supermarket loyalty points might be measured only on whether it is fair and protects personal data, whereas one making clinical decisions in a hospital would be required to reach better-than-human-average standards in every area.
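As a rough sketch of what a context-specific threshold check might look like, here is a toy demographic-parity test for the hypothetical loyalty-points algorithm. The metric, the decision data, the group labels, and the 0.05 threshold are all invented for illustration, not drawn from any proposed standard:

```python
# Illustrative fairness check: gap in positive-decision rates between groups.
def parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates across the two groups."""
    rates = {}
    for g in set(groups):
        picks = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(picks) / len(picks)
    lo, hi = sorted(rates.values())
    return hi - lo

# Hypothetical data: 1 = loyalty points granted, grouped by a protected attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["x", "x", "x", "x", "x", "y", "y", "y", "y", "y"]

gap = parity_gap(decisions, groups)
THRESHOLD = 0.05  # regulator-set and context-specific; this value is made up
print(f"gap={gap:.2f}, compliant={gap <= THRESHOLD}")
```

A hospital-grade regime would layer several such checks (accuracy, robustness, privacy) with much stricter thresholds, while the supermarket case might stop here.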

The third principle is that regulators must balance transparency with trust. For example, they might publish one set of standards for supermarket loyalty programmes, and another for radiology algorithms. Each would be subject to different licensing regimes: a light-touch one for supermarkets, and a much tougher inspection regime for hospitals.

Finally, regulators will need to equip themselves with the skills and know-how needed to design and manage this regime. That means having data scientists and engineers who can look under the bonnet of an AI tool, as well as ethicists and economists. They will also need the powers to investigate any algorithms performance.

These four principles offer the basis for a regulatory regime precise enough to be meaningful, nuanced enough to permit innovation, and robust enough to retain public trust.

We believe they offer a pragmatic guide for the UK to chart its own path and lead the debate about the future of the AI industry.



How to Keep Your AI From Turning Into a Racist Monster – WIRED


Working on a new product launch? Debuting a new mobile site? Announcing a new feature? If you're not sure whether algorithmic bias could derail your plan, you should be.

Megan Garcia (@meganegarcia) is a senior fellow and director of New America California, where she studies cybersecurity, AI, and diversity in technology.

Algorithmic bias, when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed, causes everything from warped Google searches to barring qualified women from medical school. It doesn't take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for.

It took one little Twitter bot to make the point to Microsoft last year. Tay was designed to engage with people ages 18 to 24, and it burst onto social media with an upbeat "hellllooooo world!!" (the "o" in "world" was a planet earth emoji). But within 12 hours, Tay morphed into a foul-mouthed racist Holocaust denier that said feminists "should all die and burn in hell." Tay, which was quickly removed from Twitter, was programmed to learn from the behaviors of other Twitter users, and in that regard, the bot was a success. Tay's embrace of humanity's worst attributes is an example of algorithmic bias in action.

Tay represents just one example of algorithmic bias tarnishing tech companies and some of their marquee products. In 2015, Google Photos tagged several African-American users as gorillas, and the images lit up social media. Yonatan Zunger, Google's chief social architect and head of infrastructure for Google Assistant, quickly took to Twitter to announce that Google was scrambling a team to address the issue. And then there was the embarrassing revelation that Siri didn't know how to respond to a host of health questions that affect women, including, "I was raped. What do I do?" Apple took action to handle that as well after a nationwide petition from the American Civil Liberties Union and a host of cringe-worthy media attention.

One of the trickiest parts about algorithmic bias is that engineers don't have to be actively racist or sexist to create it. In an era when we increasingly trust technology to be more neutral than we are, this is a dangerous situation. As Laura Weidman Powers, founder of Code2040, which brings more African Americans and Latinos into tech, told me, "We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in, because of the often self-reinforcing nature of machine learning."

As the tech industry begins to create artificial intelligence, it risks inserting racism and other prejudices into code that will make decisions for years to come. And as deep learning means that code, not humans, will write code, there's an even greater need to root out algorithmic bias. There are four things that tech companies can do to keep their developers from unintentionally writing biased code or using biased data.

The first is lifted from gaming. League of Legends used to be besieged by claims of harassment until a few small changes caused complaints to drop sharply. The game's creator empowered players to vote on reported cases of harassment and decide whether a player should be suspended. Players who are banned for bad behavior are also now told why they were banned. Not only have incidents of bullying dramatically decreased, but players report that they previously had no idea how their online actions affected others. Now, instead of coming back and saying the same horrible things again and again, their behavior improves. The lesson is that tech companies can use these community policing models to attack discrimination: build creative ways to have users find it and root it out.

Second, hire the people who can spot the problem before launching a new product, site, or feature. Put women, people of color, and others who tend to be affected by bias and are generally underrepresented in tech on companies' development teams. They'll be more likely to feed algorithms a wider variety of data and spot code that is unintentionally biased. Plus, there is a trove of research showing that diverse teams create better products and generate more profit.

Third, allow algorithmic auditing. Recently, a Carnegie Mellon research team unearthed algorithmic bias in online ads. When they simulated people searching for jobs online, Google ads showed listings for high-income jobs to men nearly six times as often as to equivalent women. The Carnegie Mellon team has said it believes internal auditing that beefs up companies' ability to reduce bias would help.
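An external audit in the spirit of the Carnegie Mellon study can be sketched as a simulation: generate equivalent profiles that differ only in gender and count how often each is shown a high-income job ad. Everything here is invented for illustration; the `serve_high_income_ad` function is a stand-in for the opaque ad system, and its 0.9/0.15 rates are made up to mimic the roughly six-fold disparity the study reported:

```python
import random

random.seed(1)

def serve_high_income_ad(gender):
    # Hypothetical stand-in for an opaque ad-serving system under audit;
    # the per-gender rates are invented, not measured from any real system.
    return random.random() < (0.9 if gender == "male" else 0.15)

def audit(trials=10_000):
    """Simulate equivalent profiles and measure ad-impression rates by gender."""
    rates = {}
    for gender in ("male", "female"):
        shown = sum(serve_high_income_ad(gender) for _ in range(trials))
        rates[gender] = shown / trials
    return rates

rates = audit()
ratio = rates["male"] / rates["female"]
print(f"male={rates['male']:.2f}, female={rates['female']:.2f}, ratio={ratio:.1f}x")
```

The point of the black-box framing is that an auditor needs no access to the model internals, only the ability to query it repeatedly with controlled inputs.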

Fourth, support the development of tools and standards that could get all companies on the same page. In the next few years, there may be a certification for companies actively and thoughtfully working to reduce algorithmic discrimination. Now we know that water is safe to drink because the EPA monitors how well utilities keep it contaminant-free. One day we may know which tech companies are working to keep bias at bay. Tech companies should support the development of such a certification and work to get it when it exists. Having one standard will both ensure sectors sustain their attention to the issue and give credit to the companies using commonsense practices to reduce unintended algorithmic bias.

Companies shouldn't wait for algorithmic bias to derail their projects. Rather than clinging to the belief that technology is impartial, engineers and developers should take steps to ensure they don't accidentally create something that is just as racist, sexist, and xenophobic as humanity has shown itself to be.


Blue-Collar Revenge: The Rise Of AI Will Create A New Professional Class – Forbes


New, more-modern manufacturing processes, including the use of robots, have gutted the number of high-paying factory jobs in the U.S. and caused economic angst in large portions of the country. The movement of manufacturing plants overseas has ...
