A London AI Hub, a Facility Bigger than the Louvre, Are Among the Newest Footprint Expansions in the Life Sciences Industry – BioSpace

GlaxoSmithKline has opened a new $13 million research hub in London focused on artificial intelligence. The new hub is close to a similar research facility owned by Internet giant Google, which is using AI in its own life sciences research.

The GSK site will draw on the expertise of other AI-focused companies as it moves forward with drug discovery efforts, Pharmaphorum reported. The drug developer intends to rely on AI companies to investigate the gene-related causes of some diseases, as well as to screen for potential drugs, according to the report. GSK's new London facility will become the home of 30 scientists and engineers. The employees based at the facility are expected to begin collaborating with organizations such as Cerebras, the Crick Institute and the Alan Turing Institute.

As GSK moves forward with its new AI-focused research, Chief Executive Officer Emma Walmsley told the London Evening Standard that it was her hope the new site would become a beacon for jobs and attract the machine learning experts and programmers who might traditionally eye Silicon Valley for work.

"Using technologies like AI is a critical part of helping us to discover and develop medicines for serious diseases," Walmsley said, according to the report.

In addition to the AI employees in London, GSK also has other employees skilled in the discipline based in San Francisco and Boston.

GSK isn't the only company expanding its footprint. Korea's Samsung Biologics is spending $2 billion on a new manufacturing plant that is expected to become the largest of its kind in the world. The Wall Street Journal quipped that the Samsung site will be larger than the Louvre, the former royal residence and current museum in Paris that takes up 652,500 square feet.

The Samsung site, which will be approximately 230,000 square meters, will support the manufacturing of biologics used by some of the biggest drugmakers in the world, including Bristol Myers Squibb and GSK. In an interview with The Wall Street Journal, CEO Kim Tae-han said the demand for biologics used in the effort to combat COVID-19 highlighted the need for a larger-than-expected facility.

"Covid-19 is giving us more opportunity than crisis," Kim said, according to the report.

There is also growth taking place in the United States. The Boston Business Journal reported that four life science companies are leasing a 214,440-square-foot, four-story lab building in Lexington, Mass. The building will become home to Dicerna Pharmaceuticals, Frequency Therapeutics, Integral Health and Voyager Therapeutics, the Journal reported.

The four-story building was constructed by King Street Properties with life science companies in mind. Although it was not built with a specific client in mind, it drew interest from prospective tenants across the region, King Street Properties told the Journal.

According to the Journal, the breakdown of the amount of space used by each company at the Lexington site is as follows:


An AI hiring firm promising to be bias-free wants to predict job hopping – MIT Technology Review

The firm in question is Australia-based PredictiveHire, founded in October 2013. It offers a chatbot that asks candidates a series of open-ended questions. It then analyzes their responses to assess job-related personality traits like drive, initiative, and resilience. According to the firm's CEO, Barbara Hyman, its clients are employers that must manage large numbers of applications, such as those in retail, sales, call centers, and health care. As the Cornell study found, it also actively uses promises of fairer hiring in its marketing language. On its home page, it boldly advertises: "Meet Phai. Your co-pilot in hiring. Making interviews SUPER FAST. INCLUSIVE, AT LAST. FINALLY, WITHOUT BIAS."

As we've written before, the idea of bias-free algorithms is highly misleading. But PredictiveHire's latest research is troubling for a different reason. It is focused on building a new machine-learning model that seeks to predict a candidate's likelihood of job hopping, the practice of changing jobs more frequently than an employer desires. The work follows the company's recent peer-reviewed research that looked at how open-ended interview questions correlate with personality (in and of itself a highly contested practice). Because organizational psychologists have already shown a link between personality and job hopping, Hyman says, the company wanted to test whether it could use its existing data for the prediction. "Employee retention is a huge focus for many companies that we work with given the costs of high employee churn, estimated at 16% of the cost of each employee's salary," she adds.

The study used the free-text responses from 45,899 candidates who had used PredictiveHire's chatbot. Applicants had originally been asked five to seven open-ended questions and self-rating questions about their past experience and situational judgment. These included questions meant to tease out traits that studies have previously shown to correlate strongly with job-hopping tendencies, such as being more open to experience, less practical, and less down to earth. The company's researchers claim the model was able to predict job hopping with statistical significance. PredictiveHire's website is already advertising this work as a "flight risk" assessment that is coming soon.
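The approach described above, scoring free-text answers for personality traits and combining trait scores into a risk estimate, can be sketched roughly as follows. This is a minimal, hypothetical illustration of the general technique, not PredictiveHire's actual model; the trait lexicons, weights, and trait-to-risk directions are invented for the example.

```python
# Toy sketch of trait-based text scoring: score free-text answers for
# personality markers, then combine trait scores into a job-hopping
# risk estimate. Lexicons and weights are invented placeholders.

TRAIT_LEXICONS = {
    "openness":     {"new", "explore", "change", "variety", "travel"},
    "practicality": {"plan", "budget", "routine", "steady", "process"},
}

# Hypothetical direction of each trait's correlation with job hopping:
# more openness -> higher risk, more practicality -> lower risk.
TRAIT_WEIGHTS = {"openness": 1.0, "practicality": -1.0}

def trait_scores(text: str) -> dict:
    """Count lexicon hits per trait, normalized by answer length."""
    words = text.lower().split()
    n = max(len(words), 1)
    return {
        trait: sum(tok in lexicon for tok in words) / n
        for trait, lexicon in TRAIT_LEXICONS.items()
    }

def hop_risk(text: str) -> float:
    """Weighted sum of trait scores; higher means more hop-prone (toy scale)."""
    scores = trait_scores(text)
    return sum(TRAIT_WEIGHTS[t] * s for t, s in scores.items())

restless = "I love to explore new roles and change things up"
steady = "I plan carefully and keep a steady routine and process"
assert hop_risk(restless) > hop_risk(steady)
```

A real system would replace the hand-written lexicons with features learned from labeled outcomes, which is exactly where the fairness concerns discussed in this article arise.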

PredictiveHire's new work is a prime example of what Nathan Newman argues is one of the biggest adverse impacts of big data on labor. Newman, an adjunct associate professor at the John Jay College of Criminal Justice, wrote in a 2017 law paper that beyond the concerns about employment discrimination, big-data analysis had also been used in myriad ways to drive down workers' wages.

Machine-learning-based personality tests, for example, are increasingly being used in hiring to screen out potential employees who have a higher likelihood of agitating for increased wages or supporting unionization. Employers are increasingly monitoring employees' emails, chats, and other data to assess which might leave and calculate the minimum pay increase needed to make them stay. And algorithmic management systems like Uber's are decentralizing workers away from offices and digital convening spaces that allow them to coordinate with one another and collectively demand better treatment and pay.

None of these examples should be surprising, Newman argued. They are simply a modern manifestation of what employers have historically done to suppress wages by targeting and breaking up union activities. The use of personality assessments in hiring, which dates back to the 1930s in the US, in fact began as a mechanism to weed out people most likely to become labor organizers. The tests became particularly popular in the 1960s and 70s once organizational psychologists had refined them to assess workers for their union sympathies.

In this context, PredictiveHire's flight-risk assessment is just another example of this trend. "Job hopping, or the threat of job hopping," Barocas points out, "is one of the main ways that workers are able to increase their income." The company even built its assessment on personality screenings designed by organizational psychologists.

Barocas doesn't necessarily advocate tossing out the tools altogether. He believes the goal of making hiring work better for everyone is a noble one and could be achieved if regulators mandate greater transparency. "Currently none of them have received rigorous, peer-reviewed evaluation," he says. But if firms were more forthcoming about their practices and submitted their tools for such validation, it could help hold them accountable. It could also help scholars engage more readily with firms to study the tools' impacts on both labor and discrimination.

"Despite all my own work for the past couple of years expressing concerns about this stuff," he says, "I actually believe that a lot of these tools could significantly improve the current state of affairs."


Flying high with AI: Alaska Airlines uses artificial intelligence to save time, fuel and money – TechRepublic

How Alaska Airlines executed the perfect artificial intelligence use case. The company has saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions, all from using AI.

Image: Alaska Air

Given the nearly 85% failure rate of corporate artificial intelligence projects, it was a pleasure to visit with Alaska Airlines, which launched a highly successful AI system that is helping flight dispatchers. I visited with Alaska to see what the "secret sauce" was that made its AI project a success. Here are some tips to help your company execute AI as well as Alaska Airlines has.


Initially, the idea of overhauling flight operations control existed in concept only. "Since the idea was highly conceptual, we didn't want to oversell it to management," said Pasha Saleh, flight operations strategy and innovation director for Alaska Airlines. "Instead, we got Airspace Intelligence, our AI vendor, to visit our network centers so they could observe the problems and build that into their development process. This was well before the trial period, about 2.5 years ago."

Saleh said it was only after several trials of the AI system that his team felt ready to present a concrete business use case to management. "During that presentation, the opportunity immediately clicked," Saleh said. "They could tell this was an industry-changing platform."

Alaska cut its teeth on having to innovate flight plans and operations in harsh Arctic conditions, so it was almost a natural step for the airline to become an innovator in advancing flight operations with artificial intelligence.


"I could see a host of opportunities to improve the legacy system across the airline industry that could propel the industry into the future," Saleh said. "The first is dynamic mapping. Our Flyways system was built to offer a fully dynamic, real-time '4D' map with relevant information in one, easy-to-understand screen. The information presented includes FAA data feeds, turbulence reports and weather reports, which are all visible on a single, highly detailed map. This allows decision-makers to quickly assess the airspace. The fourth dimension is time, with the novel ability to scroll forward eight-plus hours into the future, helping to identify potential issues with weather or congestion."

"We saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions," said Pasha Saleh, flight operations strategy and innovation director for Alaska Airlines.

The Alaska Flyways system also has built-in monitoring and predictive abilities. The system looks at all scheduled and active flights across the U.S., scanning air traffic systemically rather than focusing on a single flight. It continuously and autonomously evaluates the operational safety, air-traffic-control compliance and efficiency of an airline's planned and active flights. The predictive modeling is what allows Flyways to "look into the future," helping inform how the U.S. airspace will evolve in terms of weather, traffic constraints, airspace closures and more.


"Finally the system presents recommendations," Saleh said. "When it finds a better route around an issue like weather or turbulence, or simply a more efficient route, Flyways provides actionable recommendations to flight dispatchers. These alerts pop up onto the computer screen, and the dispatcher decides whether to accept and implement the recommended solution. In sum: The operations personnel always make the final call. Flyways is constantly learning from this."
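The recommend-then-decide loop Saleh describes can be illustrated with a toy route recommender: score each candidate route on fuel plus forecast penalties, and surface a suggestion only when it beats the currently filed route, leaving the dispatcher the final call. This is a hedged sketch of the general pattern, not Flyways itself; every function name, cost term, and number below is invented for illustration.

```python
# Toy sketch of a route-recommendation loop in the spirit of the system
# described above: score candidate routes on fuel burn plus weighted
# forecast penalties for weather and congestion, and recommend the best
# one only if it beats the filed route. All numbers are placeholders.

def route_cost(route):
    """Lower is better: fuel burn plus forecast risk penalties."""
    return (route["fuel_gal"]
            + 500.0 * route["weather_risk"]      # forecast turbulence/storms
            + 300.0 * route["congestion_risk"])  # forecast traffic

def recommend(filed, candidates):
    """Return a candidate cheaper than the filed route, or None."""
    best = min(candidates, key=route_cost)
    return best if route_cost(best) < route_cost(filed) else None

filed = {"name": "direct", "fuel_gal": 5200, "weather_risk": 0.6, "congestion_risk": 0.2}
candidates = [
    {"name": "north offset", "fuel_gal": 5350, "weather_risk": 0.1, "congestion_risk": 0.1},
    {"name": "south offset", "fuel_gal": 5600, "weather_risk": 0.3, "congestion_risk": 0.4},
]

suggestion = recommend(filed, candidates)
if suggestion is not None:
    # The operations personnel always make the final call.
    print(f"Recommend {suggestion['name']}; dispatcher may accept or reject")
```

The design choice mirrored here is the one the article emphasizes: the system only proposes, and a human accepts or rejects each recommendation.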

Saleh recalled the early days when autopilot was first introduced. "There was fear it would replace pilots," he said. "Obviously, that wasn't the case, and autopilot has allowed pilots to focus on more things of value. It was our hope that Flyways would likewise empower our dispatchers to do the same."


One step Alaska took was to immediately engage its dispatchers in the design and operation of the Flyways system. Dispatchers tested the platform for a six-month trial period and provided feedback for enhancing it. This was followed by on-site, one-on-one training and learning sessions with the Airspace Intelligence team. "The platform also has a chat feature, so our dispatchers could share their suggestions with the Airspace Intelligence team in real time," Saleh said. "Dispatchers could have an idea, and within days, the feature would be live. And because Flyways uses AI, it also learned from our dispatchers, and got better because of it."

While Flyways can speed decisions on route planning and other flight operations issues, humans will always have a role in route planning and will always be the final decision-makers. "This is a tool that enhances, rather than replaces, our operations," Saleh said. Because flight dispatchers were so integrally involved with the project's development and testing, they understood its fit as a tool and how it could enhance their work.

"With the end result, I would say satisfaction is an understatement," Saleh said. "We're all blown away by the efficiency and predictability of the platform. But what's more, is that we're seeing an incredible look into the future of more sustainable air travel.

"One of the coolest features to us is that this tool embeds efficiency and sustainability into our operation, which will go a long way in helping us meet our goal of net zero carbon emissions by 2040. We saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions. This was at a time when travel was down because of the pandemic. We anticipate Flyways will soon become the de facto system for all airlines. But it sure has been cool being the first airline in the world to do this!"



Orange Logic Deploys a Second Generation of Machine Learning AI for Digital Asset Management – PR Newswire (press release)

"In the early days, we were all amused by the sheer novelty of A.I. Now with maturity, our users expect more concrete results with almost no errors. Our engineers have built arbitrage mechanisms that provide more confidence than any individual A.I. system could. Concretely, this means less work for our users but more work for the machines. That's ok, though, as the machines don't have to drive the kids back from school," said Karl Facredyn, CEO of Orange Logic.

How it works:

Pass One: Detect The Content

The first pass of A.I. uses two separate machine learning instances. Each instance undergoes its own training and will interpret and produce results for an asset independently of the other machine.

Pass Two: Arbitrage Results

A third A.I. arbitrates the results from the first pass and keeps only the more accurate results of the two previous A.I. systems.
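The two-pass idea above is a form of ensemble arbitration, and a minimal version can be sketched as follows. This is a hypothetical illustration of the general pattern, not Orange Logic's implementation: here the "arbiter" simply keeps, per tag, the higher of the two confidence scores above a threshold, whereas the release describes the arbiter as itself an A.I.

```python
# Sketch of the two-pass scheme: two independently trained taggers each
# label an asset with confidences (pass one), and an arbiter merges the
# results, keeping only the more confident tags (pass two). The tags,
# scores, and threshold are invented for illustration.

def arbitrate(results_a, results_b, threshold=0.8):
    """Merge two {tag: confidence} dicts, keeping the higher confidence
    per tag and dropping anything below the threshold."""
    merged = {}
    for tag in set(results_a) | set(results_b):
        conf = max(results_a.get(tag, 0.0), results_b.get(tag, 0.0))
        if conf >= threshold:
            merged[tag] = conf
    return merged

# The two instances disagree; only confidently detected tags survive.
pass_a = {"sunset": 0.95, "beach": 0.55}
pass_b = {"sunset": 0.90, "ocean": 0.85}
merged = arbitrate(pass_a, pass_b)
assert merged == {"sunset": 0.95, "ocean": 0.85}
```

The payoff of arbitration is the one the CEO describes: fewer low-confidence tags reach users, at the cost of running more machine passes per asset.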

About Orange Logic

Established in 2000, Orange Logic initially operated as a software research company on a mission to innovate approaches in multiple fields, including Digital Asset Management. Today, Orange Logic provides a premier Digital Asset Management solution, CORTEX | DAM, for any team, national or global, looking to efficiently manage and scale its digital media libraries.

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/orange-logic-deploys-a-second-generation-of-machine-learning-ai-for-digital-asset-management-300410174.html

SOURCE Orange Logic

http://www.orangelogic.com


Global Artificial Intelligence (AI) in Education Market Projected to Reach USD XX.XX billion by 2025- Google, IBM, Pearson, Microsoft, AWS, Nuance,…

The unprecedented onset of the COVID-19 pandemic has driven dominant alterations in the global growth trajectory of the Artificial Intelligence (AI) in Education market, affecting myriad facets of the market. Growth that had been steady in recent years has been disrupted in unparalleled ways, altering the market's normal prospects. This research report documents the impact of COVID-19 on that growth trajectory to encourage a planned rebound.

The report presents a thorough analytical review of the growth trends influencing the Artificial Intelligence (AI) in Education market, aiming to support unbiased and timely decisions by market participants striving for a grip on the competitive spectrum. It also details the micro- and macroeconomic factors that have a dominant, long-term impact on the course of trends in the global market.

The study encompasses profiles of major companies operating in the Artificial Intelligence (AI) in Education market. Key players profiled in the report include: Google, IBM, Pearson, Microsoft, AWS, Nuance, Cognizant, Metacog, Quantum Adaptive Learning, Querium, Third Space Learning, Aleks, Blackboard, BridgeU, Carnegie Learning, Century, Cognii, DreamBox Learning, Elemental Path, Fishtree, Jellynote, Jenzabar, Knewton, Luilishuo.

The report is rightly designed to present multidimensional information about the current and past market occurrences that tend to have a direct implication on the onward growth trajectory of the Artificial Intelligence (AI) in Education market.

The following sections of the report shed light on popular industry trends, encompassing both market drivers and the dominant trends that visibly affect the growth trajectory. The report is a holistic, ready-to-use compilation of the major events and developments shaping growth in the Artificial Intelligence (AI) in Education market; its subsequent sections also provide information on regional segmentation.

Access Complete Report @ https://www.orbismarketreports.com/global-artificial-intelligence-ai-in-education-market-growth-analysis-by-trends-and-forecast-2019-2025?utm_source=Pooja

By product type, the market is primarily split into: Machine Learning and Deep Learning; Natural Language Processing.

By end-users/application, this report covers the following segments: Virtual Facilitators and Learning Environments; Intelligent Tutoring Systems; Content Delivery Systems; Fraud and Risk Management.

In subsequent sections, readers are presented with an overview of the market's current geographical state, encompassing the regional hubs that keep witnessing growth-promoting developments directed by market veterans aiming for competitive advantage despite the cut-throat competition characterizing the Artificial Intelligence (AI) in Education market. Each of the market players profiled in the report has been analyzed on the basis of its company and product portfolios to make logical deductions.

Global Artificial Intelligence (AI) in Education Geographical Segmentation Includes: North America (U.S., Canada, Mexico); Europe (U.K., France, Germany, Spain, Italy, Central & Eastern Europe, CIS); Asia Pacific (China, Japan, South Korea, ASEAN, India, Rest of Asia Pacific); Latin America (Brazil, Rest of L.A.); Middle East and Africa (Turkey, GCC, Rest of Middle East)

Some Major TOC Points: Chapter 1. Report Overview; Chapter 2. Global Growth Trends; Chapter 3. Market Share by Key Players; Chapter 4. Breakdown Data by Type and Application; Chapter 5. Market by End Users/Application; Chapter 6. COVID-19 Outbreak: Artificial Intelligence (AI) in Education Industry Impact; Chapter 7. Opportunity Analysis in Covid-19 Crisis; Chapter 9. Market Driving Force; And Many More

Continued

Research Methodology Includes:

The report upholds the current dynamic segmentation of the Artificial Intelligence (AI) in Education market, highlighting the major and most revenue-efficient segments, comprising application, type, technology and the like, that together yield lucrative business returns.

Do You Have Any Query or Specific Requirement? Ask Our Industry Expert: https://www.orbismarketreports.com/enquiry-before-buying/81119?utm_source=Pooja

Target Audience:
* Artificial Intelligence (AI) in Education Manufacturers
* Traders, Importers, and Exporters
* Raw Material Suppliers and Distributors
* Research and Consulting Firms
* Government and Research Organizations
* Associations and Industry Bodies

Customization Service of the Report:

Orbis Market Reports offers customization of reports to satisfy all of your requirements. If you have any query, get in contact with our sales staff, who will ensure you get a report that fits your needs.

Looking forward to fruitful enterprise relationships with you!

About Us:

With unfailing market-gauging skills, Orbis Market Reports has been excelling in curating tailored business intelligence data across industry verticals. Constantly striving to expand our skill development, our strength lies in dedicated intellectuals with dynamic problem-solving intent, ever willing to mold boundaries to scale heights in market interpretation.

Contact Us:

Hector Costello
Senior Manager, Client Engagements
4144 N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A.
Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155


Sensei Ag Uses AI Platform and Hydroponic Technology to Grow Food – The Spoon

As the world's population inches toward an estimated 10 billion people by 2050, finding more, not to mention more sustainable, ways to feed people becomes more and more important. High-tech indoor agriculture is one solution getting a lot of attention lately, and recently a new company joined the fast-growing sector. Sensei Ag is the brainchild of Oracle's Larry Ellison and scientist Dr. David Agus, and the company's goal is to grow more greens using hydroponics and AI.

Based on the small Hawaiian island of Lanai, Sensei Ag has built a 100,000 sq. ft. hydroponic pilot greenhouse that is expected to grow 1 million pounds of food per year. I spoke with Sensei Ag CEO Sonia Lo by phone this week, and she described the company as an integrated solution to indoor farming that uses best practices in computer vision, germination, and seeding to optimize indoor growing.

I asked Lo how the company incorporates AI into its greenhouses. She said that its AI platform will act as a data engine that harnesses global grower knowledge and will create an algorithm for best practices in indoor growing. She did not go into the specifics of the platform, but did mention that it would be made available to other growers and would be embedded into each part of the company's agricultural system. Sensei Ag also uses advanced cameras within its greenhouses to identify pests, pathogens, plant health, and uneven growth in crops. The company's goal is to enable platforms within the greenhouse to make decisions on growing food autonomously, without human intervention.

The COVID-19 pandemic, climate change, and a growing population have forced us to consider the possibility of global food insecurity. In response, companies like Phytoponics, Element Farms, and Gotham Greens all operate indoor farms that use hydroponic techniques to grow leafy greens. Meanwhile, companies like Verdeat, Rise Gardens, and Seedo offer at-home vertical farming products that allow you to grow leafy greens in your living room.

Sensei Ag grows cherry tomatoes, basil, and butter lettuce, and Lo said the company will definitely be expanding the crops it grows. It is currently scouting for a location in California or Nevada for its flagship farm, which will be used as a template for rolling out future farms.



Alibaba launches low-cost voice assistant amid AI drive – Reuters

BEIJING China's Alibaba Group Holding Ltd launched on Wednesday a cut-price voice assistant speaker, similar to Amazon.com Inc's "Echo", its first foray into artificially intelligent home devices.

The "Tmall Genie", named after the company's e-commerce platform Tmall, costs 499 yuan ($73.42), significantly less than western counterparts by Amazon and Alphabet Inc's Google, which range from $120 to $180.

These devices are activated by voice commands to perform tasks such as checking calendars, searching for weather reports, changing music or controlling smart-home devices, using internet connectivity and artificial intelligence.

China's top tech firms have ambitions to become world leaders in artificial intelligence as companies, including Alibaba and Amazon, increasingly compete for the same markets.

Baidu, China's top search engine, which has invested in an artificial intelligence lab with the Chinese government, recently launched a device based on its own Siri-like "Duer OS" system.

The Tmall Genie is currently programmed to use Mandarin as its language and will only be available in China. It is activated when a recognised user says "Tmall Genie" in Chinese.

In a streamed demonstration on Wednesday, engineers ordered the device to buy and deliver some Coca Cola, play music, add credit to a phone and activate a smart humidifier and TV.

The device, which comes in black and white, can also be tasked with purchasing goods from the company's Tmall platform, a function similar to Amazon's Echo device.

Alibaba has invested heavily in offline stores and big data capabilities in an effort to capitalise on the entire supply chain as part of its retail strategy, increasingly drawing comparisons with similar strategies adopted by Amazon.

It recently began rolling out unstaffed brick-and-mortar grocery and coffee shops, using QR codes that users can scan to complete payment on its Alipay app, which has over 450 million users. Amazon launched a similar store concept in December. ($1 = 6.7962 yuan)

(Reporting by Cate Cadell; Editing by Neil Fullick)



How AI can help payers navigate a coming wave of delayed and deferred care – FierceHealthcare

Insurers have seen healthcare use plummet since the onset of the COVID-19 pandemic.

But experts are concerned about a wave of deferred care that could hit as patients start to return to physicians and hospitals, putting insurers on the hook for an unexpected surge of healthcare spending.

Artificial intelligence and machine learning could lend insurers a hand.


"We are using the AI approaches to try to protect against future cost bubbles," said Colt Courtright, chief data and analytics officer at Premera Blue Cross, during a session at Fierce AI Week on Wednesday.


He noted that people are not going in and getting even routine cancer screenings.

"If people have delay in diagnostics and delay in medical care, how is that going to play out in the future when we think about those individuals and the need for clinical programs and the cost, and how do we manage that?" he said.

Insurers have started in some ways to incorporate AI and machine learning in several different facets, such as claims management and customer service, but they are also starting to explore how AI can be used to predict healthcare costs and outcomes.

In some ways, the pandemic has accelerated the use of AI and digital technologies in general.

"If we can predict, forecast and personalize care virtually, then why not do that," said Rajeev Ronanki, senior vice president and chief digital officer for Anthem, during the session.

The pandemic has led to a boom in telemedicine, as the Trump administration has increased flexibility for getting Medicare payments for telehealth and patients have been scared to go to hospitals and physician offices.

But Ronanki said that AI can't just help with predicting healthcare costs; it can also help fix supply chains wracked by the pandemic.

He noted that the global manufacturing supply chain is extremely optimized, especially with just-in-time ordering that doesn't require businesses to hold a large amount of inventory.

But that method doesn't really work during a pandemic, when there is a vast imbalance between supply of and demand for personal protective equipment, said Ronanki.

"When you connect all those dots, AI can then be used to configure supply and demand better in anticipation of issues like this," he said.


This backflipping noodle has a lot to teach us about AI safety – The Verge

AI isn't going to be a threat to humanity because it's evil or cruel; AI will be a threat to humanity because we haven't properly explained what it is we want it to do. Consider the classic "paperclip maximizer" thought experiment, in which an all-powerful AI is told, simply, "make paperclips." The AI, not constrained by any human morality or reason, does so, eventually transforming all resources on Earth into paperclips and wiping out our species in the process. As with any relationship, when talking to our computers, communication is key.

That's why a new piece of research published yesterday by Google's DeepMind and the Elon Musk-funded OpenAI institute is so interesting. It offers a simple way for humans to give feedback to AI systems, and, crucially, one that doesn't require the instructor to know anything about programming or artificial intelligence.

The method is a variation of what's known as reinforcement learning, or RL. With RL systems, a computer learns by trial and error, repeating the same task over and over, while programmers direct its actions by setting certain reward criteria. For example, if you want a computer to learn how to play Atari games (something DeepMind has done in the past), you might make the game's point system the reward criteria. Over time, the algorithm will learn to play in a way that best accrues points, often leading to superhuman performance.
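The reward-driven trial-and-error loop described above can be sketched with a toy example. This is a minimal tabular Q-learning sketch, not DeepMind's Atari system: the five-cell corridor environment, the hyperparameters and the reward are all invented for illustration, with the end-of-corridor point playing the role of an Atari game's score.

```python
import random

# Environment: a 5-cell corridor; the agent scores a point only on
# reaching the rightmost cell, which also ends the episode.
N_STATES = 5          # cells 0..4; cell 4 is terminal
ACTIONS = [-1, +1]    # step left or step right

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            if rng.random() < epsilon:      # explore occasionally
                a = rng.choice(ACTIONS)
            else:                           # otherwise exploit (random tie-break)
                a = max(ACTIONS, key=lambda a: (q[(s, a)], rng.random()))
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # temporal-difference update toward reward + discounted future value
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the learned policy should step right from every cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

The point of the sketch is that nobody tells the agent "go right"; it discovers that policy purely because rightward moves eventually accrue more reward.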

What DeepMind and OpenAI's researchers have done is replace this predefined reward criteria with a much simpler feedback system. Humans are shown an AI performing two versions of the same task and simply tell it which is better. This happens again and again, and eventually the system learns what is expected of it. Think of it like getting an eye test, when you're looking through different lenses and being asked, over and over: better... or worse? That's how the researchers taught a computer to play the classic Atari game Q*bert.

This method of feedback is surprisingly effective, and the researchers were able to use it to train an AI to play a number of Atari video games, as well as to perform simulated robot tasks (like telling an arm to pick up a ball). This better/worse reward function could even be used to program trickier behavior, like teaching a very basic virtual robot how to backflip. That's how we get to the GIF at the top of the page. The behavior you see was created by watching the Hopper bot jump up and down and telling it "well done" when it got a bit closer to doing a backflip. Over time, it learns how.
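The better/worse comparison scheme can be illustrated with a stripped-down sketch. This is an assumption-laden toy, not the paper's code: each clip is reduced to a single hand-picked feature (say, how close the bot got to a backflip), a simulated human always prefers the clip with the larger feature value, and a Bradley-Terry-style reward model is fitted to those pairwise judgments.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_reward(pairs, lr=0.5, steps=200):
    """Fit w so that reward(clip) = w * feature matches the preferences."""
    w = 0.0
    for _ in range(steps):
        for fa, fb, a_preferred in pairs:
            # Bradley-Terry model: P(clip a preferred) = sigmoid(w*fa - w*fb)
            p = sigmoid(w * (fa - fb))
            target = 1.0 if a_preferred else 0.0
            w += lr * (target - p) * (fa - fb)  # log-likelihood gradient step
    return w

rng = random.Random(0)
pairs = []
for _ in range(50):
    fa, fb = rng.random(), rng.random()
    pairs.append((fa, fb, fa > fb))  # the "human" picks the better clip

w = fit_reward(pairs)
print(w > 0)  # True: the learned reward increases with the feature
```

Once a reward model like this is fitted, an ordinary RL algorithm can optimize against it instead of a hand-coded score, which is the essence of the paper's setup.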

Of course, no one is suggesting this method is a cure-all for teaching AI. There are a number of big downsides and limitations to using this sort of feedback. The first is that although it doesn't take much skill on the part of the human operator, it does take time. For example, in teaching the Hopper bot to backflip, a human was asked to judge its behavior some 900 times, a process that took about an hour. The bot itself had to work through 70 hours of simulated training time, which was sped up artificially.

For some simple tasks, says Oxford Robotics researcher Markus Wulfmeier (who was not involved in this research), it would be quicker for a programmer to simply define what it is they wanted. But, says Wulfmeier, it's increasingly important to render human supervision more effective for AI systems, and this paper represents a small step in the right direction.

DeepMind and OpenAI say pretty much the same: it's a small step, but a promising one, and in the future they're looking to apply it to ever more complex scenarios. Speaking to The Verge over email, DeepMind researcher Jan Leike said: "The setup described in [our paper] already scales from robotic simulations to more complex Atari games, which suggests that the system will scale further." Leike suggests the next step is to test it in more varied 3D environments. You can read the full paper describing the work here.

See more here:

This backflipping noodle has a lot to teach us about AI safety - The Verge

13 ways AI will change your life – TNW

From helping you take care of email to creating personalized online shopping experiences, AI promises to transform the way we live and work.

But with all the hype out there, how do we know which benefits we'll actually see? In order to learn more, I asked a few members of YEC the following question:


What is the top benefit you predict emerging from AI, and do you think the overall benefits will live up to the hype?

The greatest benefit of AI, which is already emerging, is the elimination of repetitive tasks. From chatbots that free up human staffers' time to work on more complex issues, to scheduling AIs like x.ai that eliminate the need to schedule meetings, AI will ultimately help humans spend more time focusing on creative and high-mental-effort activities. Brittany Hodak, ZinePak

I think the benefits of deeper personalization, in terms of the ability to understand what each customer really wants and is interested in, can be achieved through AI over time. It will live up to the hype because it's already being used to some degree to illustrate how personalization is possible and how AI saves considerable time in getting to a deeper level of understanding of each customer. Angela Ruth, Due

AI will save companies considerable time by doing tasks, collecting data and providing decisions based on that data much faster than human beings can. It seems quite possible that AI is capable of doing much more than we can on many levels. It's an exciting time to watch the changes that AI brings. Murray Newlands, Sighted

AI will enable us to interact with information as if we're interacting with a knowledgeable individual. We won't have to look at a screen to learn about anything; we can simply converse with AI. Siri is already a reliable personal assistant when it comes to setting reminders and alarm clocks, sending texts, etc. AI will make it possible for us to do virtually anything with a voice command. Andrew Namminga, Andesign

The biggest change that's coming is the move from humans using software as a tool to humans working with software as team members. Software will monitor things, alert humans and execute basic tasks without human intervention. This will free human time for the really creative or interesting tasks and greatly improve business. AI is going to have a much larger impact than the hype. Brennan White, Cortex

I think the greatest advantage of AI is the automation of tasks that will free up employees to focus on strategic initiatives. On the other hand, I don't think it will be as big as predicted. There are still too many tasks that need a human touch to be successful. We'll see great benefit from AI in the more mundane areas, but you'll always need the human brain for some tasks. Nicole Munoz, Start Ranking Now

One of the top benefits will be the emergence of personalized medicine. Rather than a one-size-fits-all approach, doctors will be able to tailor treatment on an individual basis and prescribe the right treatments and procedures based on your medical history. As far as living up to the hype: yes, definitely. Though as with many new technologies, it's more a question of "when" rather than "if." Kevin Yamazaki, Sidebench

No, tomorrow's AI won't live up to the hype. Freeing ordinary folks from repetitive tasks and giving them personal assistants only allows people to busy themselves with other, more complex tasks. The resulting productivity will mark incremental gains for business owners, but nothing on par with the digital revolution or the industrial one before it. For that, we'll have to wait for the robots. Manpreet Singh, TalkLocal

With each wave of technological advancement, the quality of life for the world overall has increased. With AI, we will have better personalized healthcare, more efficient energy use, enhanced food production capabilities, improved jobs with less mundane work, and more. People will lead longer, higher-quality lives. Adelyn Zhou, TOPBOTS

I believe it will be more like the science fiction movies, where we will maintain and work with the machines that do the work. However, these jobs will come with a level of prestige, as most people will probably live off a government-sponsored system of socialism. With AI and automation replacing so many jobs in the next 20 years, we will have to change social systems in order to adapt. Andy Karuza, FenSens

While AI is critical for self-driving cars, the military, commerce, AI-driven SEO and gaming, it's poised to make the most human impact in medicine and human behavior. Imagine the UN leveraging neural networks and deep learning to discover what helps some communities thrive and others fall behind. Those lessons can then be leveraged by community builders, city planners, grants and projects. Gideon Kimbrell, InList Inc

Artificial intelligence-based home automation is the future. If everyone in the United States installed Nest or a similar smart thermostat, they would collectively save hundreds of millions of dollars annually in wasted energy, since Nest is able to learn when people are or are not home. Nest and others automatically adjust the temperature, saving on energy use and costs. Kristopher Jones, LSEO.com

Artificial intelligence will do wonders to help automate processes that today take time and manual labor but don't contribute much to the bottom line or to moving forward as a company. Automation will allow additional time and resources to be dedicated to what companies need to focus their energy on: customer experience. Andrew Kucheriavy, Intechnic


See more here:

13 ways AI will change your life - TNW

Westworld creators want to make a show about AI without ‘going straight to Skynet’ – Polygon

At times, Westworld is a show about the advancements and dangers of artificial intelligence, but the series' creators never wanted it to be Skynet.

Jonathan Nolan and Lisa Joy spoke about the message they were trying to get across with the show's first season during a conference hosted by Wired. Nolan said that one of the tropes they wanted to avoid was turning AI into a terrifying enemy just because the rise of technology seemed scary; they never wanted their AI to be Skynet, the main antagonist and artificially intelligent death machine of the Terminator films.

"Until now, AI has tended to lean into a dystopian perspective," Nolan said. "It goes straight to Skynet, with the exception of Spike Jonze's Her, which is a beautiful movie. What's becoming increasingly clear is that's not how it's going to play out."

Nolan added that our relationship with the different forms of AI we interact with on a daily basis, from Siri, Alexa and Google to the AI that powers driverless cars, is constantly changing, and that's where he finds inspiration. The issue, he said, is that as technology rapidly changes, we expect more from the artificial intelligence in our lives. We come to rely on it without ever fully trusting it.

One of the goals of Westworld was to examine these relationships between machines and humans, co-creator Joy added. After talking to people who work in Silicon Valley and specialize in how artificial intelligence functions and grows, it became clear to both creators that Westworld was introducing an important discussion to those who weren't focused on it five days a week.

"As technology increases exponentially and AI certainly grows, we're kind of leaping into the unfathomable with machines that can learn and process better than we can," Joy said. "The thing that we've heard most is that it's almost good to have a prophylactic discussion: When is enough enough? What are the safeguards we need?"


One of the ways that Nolan and Joy kept their show from turning into Terminator was by examining consciousness and storytelling from the perspective of artificial intelligence, instead of just humans. Nolan said they wanted to dive into the commonalities shared by humans and AI, instead of just the differences, to showcase just how powerful and resonant the technology can be.

"We were attempting to look deeply at the question of consciousness, and one of the things I was surprised about is that consciousness is still largely a question for philosophers," Nolan said. "It's not something the [computer science] folks [we talked to] want to get into."

Nolan added, however, that what became apparent during their conversations with experts in the field was that the main question is quickly becoming whether consciousness is a necessary function going forward.

"Consciousness is either very, very important and very hard to explore, or, as more than one computer scientist we talked to suggested, maybe it doesn't need to exist," Nolan said. "It's not necessary."

Nolan and Joy didn't provide any hints as to what the second season of Westworld will focus on specifically, but Nolan said it will continue to examine how stories are told and how important consciousness is to both humans and artificial intelligence.

"Throughout history we have defined consciousness as that which others do not possess," Joy said. "That bar has shifted. It's all subjective."

Westworld will return for a second season in 2018.

The rest is here:

Westworld creators want to make a show about AI without 'going straight to Skynet' - Polygon

These AI Agents Punched Holes in Their Virtual Universe While Playing Hide and Seek – Computer Business Review


Bots removed opponents' tools from the game space, and launched themselves into the air

Two teams of AI agents tasked with playing a game (or a million) of hide and seek in a virtual environment developed complex strategies and counterstrategies, and exploited holes in their environment that even its creators didn't know it had.

The game was part of an experiment by OpenAI designed to test the AI skills that emerge from multi-agent competition and standard reinforcement learning algorithms at scale. OpenAI described the outcome in a striking paper published this week.

The organisation, now heavily backed by Microsoft, described the outcome as further proof that skills far more complex than the seed game dynamics and environment can emerge from such experiments and training exercises.


In a blog post, "Emergent Tool Use from Multi-agent Interaction", OpenAI noted: "These results inspire confidence that in a more open-ended and diverse environment, multi-agent dynamics could lead to extremely complex and human-relevant behavior."

The AI hide-and-seek experiment, which pitted a team of hiders against a team of seekers, made use of two core techniques in AI: multi-agent learning, which uses multiple algorithms in competition or coordination, and reinforcement learning, a form of programming that uses reward and punishment to train algorithms.

In the game of AI hide and seek, the two opposing teams of AI agents developed a range of complex hiding and seeking strategies, compellingly illustrated in a series of videos by OpenAI, that involved collaboration, tool use, and some creative pushing at the boundaries of the virtual parameters the world's creators thought they'd set.

"Another method to learn skills in an unsupervised manner is intrinsic motivation, which incentivizes agents to explore with various metrics such as model error or state counts," OpenAI's researchers Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew and Igor Mordatch noted.

"We ran count-based exploration in our environment, in which agents keep an explicit count of states they've visited and are incentivized to go to infrequently visited states," they added, detailing outcomes that included the bots removing some of their opponents' tools from the game space entirely, and launching themselves into the air for a bird's-eye view of their hiding opponents.
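The count-based bonus the researchers describe can be sketched in a few lines. This is a rough illustration of the general technique, not OpenAI's implementation; the class name and the 1/sqrt(count) decay schedule are assumptions chosen because they are the textbook form of such a bonus.

```python
from collections import Counter
import math

class CountExplorer:
    """Intrinsic-motivation bonus: novel states earn larger rewards."""

    def __init__(self):
        self.counts = Counter()  # explicit per-state visit counts

    def intrinsic_reward(self, state):
        self.counts[state] += 1
        # Classic count-based bonus: decays as a state becomes familiar,
        # so the agent is nudged toward infrequently visited states.
        return 1.0 / math.sqrt(self.counts[state])

ex = CountExplorer()
print(ex.intrinsic_reward("room_A"))  # 1.0   (first visit: big bonus)
print(ex.intrinsic_reward("room_A"))  # ~0.707 (second visit: smaller)
print(ex.intrinsic_reward("room_B"))  # 1.0   (novel state: big again)
```

In practice this bonus is simply added to the environment's own reward, so the agent's usual RL machinery needs no other change.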

As they concluded: "Building environments is not easy and it is quite often the case that agents find a way to exploit the environment you build in an unintended way."

Continued here:

These AI Agents Punched Holes in Their Virtual Universe While Playing Hide and Seek - Computer Business Review

Future Goals in the AI Race: Explainable AI and Transfer Learning – Modern Diplomacy

Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the kings of the world?

Background

Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and of its crucial areas of research in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.

In recent years, the term "general intelligence", meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning, minimizing risks and optimizing losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human, but in a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as "strong" AI, as opposed to the "weak" AI that has become a mundane topic in recent years.

As applied AI technology has developed over the last 60 years, we can see how many practical applications (knowledge bases, expert systems, image recognition systems, prediction systems, tracking and control systems for various technological processes) are no longer viewed as examples of AI and have become part of ordinary technology. The bar for what constitutes "AI" rises accordingly, and today it is the hypothetical general intelligence, human-level intelligence or strong AI that is assumed to be the real thing in most discussions. Technologies that are already in use are broken down into knowledge engineering, data science or specific areas of "narrow" AI that combine elements of different AI approaches with specialized humanities or mathematical disciplines, such as stock market or weather forecasting, speech and text recognition, and language processing.

Different schools of research, each working within its own paradigm, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of the various approaches in recent years, with researchers and developers increasingly turning to hybrid models and methodologies that combine them in different ways.

Since the dawn of AI, two approaches have been the most popular. The first, the symbolic approach, assumes that the roots of AI lie in philosophy, logic and mathematics, and operates according to logical rules and sign and symbolic systems, interpreted in terms of the conscious human cognitive process. The second approach (biological in nature), referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. The two approaches have evolved over 60 years, steadily drawing closer to each other. For instance, logical inference systems based on Boolean algebra have transformed into fuzzy logic or probabilistic programming, reproducing network architectures akin to the neural networks that evolved within the neuromorphic approach. On the other hand, methods based on artificial neural networks are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.

Are There Holes in Neural Networks?

In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistic regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was brought about not so much by new ideas as by the accumulation of a critical mass of tagged datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline in computational costs, including the possibility of using specialized, relatively cheap hardware for neural network modelling. It was the combination of these factors that made it possible to train and configure neural network algorithms to make a quantitative leap, as well as to provide a cost-effective solution to a broad range of applied problems relating to recognition, classification and prediction. The biggest successes here have come from systems based on deep learning networks, which build on the idea of the perceptron suggested 60 years ago by Frank Rosenblatt. However, achievements in the use of neural networks have also uncovered a range of problems that cannot be solved using existing neural network methods.

First, any classic neural network model, whatever amount of data it is trained on and however precise its predictions, is still a black box that provides no explanation of why a given decision was made, let alone disclosing the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.

Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details, not on a real general grasp of the object represented in the sample set. For instance, a neural network trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate their accumulated knowledge to situations in unusual settings.

Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or where replacing one pixel in an image makes the network mistake the object for something else.
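The fragility described above can be demonstrated on a toy linear classifier. This is a deliberately simplified stand-in (the weights, labels and inputs are all invented): real adversarial attacks target deep networks, but the principle of nudging the input a small amount against the model's weights is the same.

```python
# A two-feature linear classifier separating "cat" from "dog".
w = [0.8, -0.6]  # classifier weights (assumed for illustration)
b = 0.0

def classify(x):
    score = w[0] * x[0] + w[1] * x[1] + b
    return "cat" if score > 0 else "dog"

x = [0.2, 0.1]   # clean input: score = 0.16 - 0.06 = 0.10 -> "cat"

# Adversarial step: move each coordinate slightly *against* the weight
# sign (an FGSM-style perturbation), so the score drops as fast as possible.
eps = 0.2
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(classify(x))      # cat
print(classify(x_adv))  # dog: a tiny, targeted change flips the label
```

The perturbation here is small in every coordinate, yet chosen in exactly the direction the model is most sensitive to, which is why adversarial examples can look almost unchanged to a human while fooling the model.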

Fourth, a mismatch between the information capacity and parameters of the neural network and the image of the world it is shown during training and operation can lead to the practical problem of catastrophic forgetting. A system first trained to identify situations in one set of contexts and then fine-tuned to recognize them in a new set may lose the ability to recognize them in the old one. For instance, a neural machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, but additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees.

Growth Potential?

The expert community sees a number of fundamental problems that need to be solved before a general, or strong, AI is possible. In particular, as demonstrated by the biggest annual AI conference held in Macao, explainable AI and transfer learning are simply necessary in some cases, such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a general, or strong, AI.

Explainable AI allows a human being (the user of the AI system) to understand the reasons why the system makes decisions and to approve them if they are correct, or to rework or fine-tune the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow this knowledge to be extracted with regard to specific precedents or the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it only to be explainable post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.

Transfer learning is the possibility of transferring knowledge between different AI systems, as well as between man and machine, so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into a different system for use and fine-tuning. Theoretically speaking, this is necessary because the transfer of knowledge is only fundamentally possible when universal laws and rules can be abstracted from the system's individual experience. Practically speaking, it is the prerequisite for building AI applications that do not learn by trial and error or through the use of a training set, but can instead be initialized with a base of expert-derived knowledge and rules when the cost of an error is too high or when the training sample is too small.

How to Get the Best of Both Worlds?

There is currently no consensus on how to build an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.

One of the most promising approaches is probabilistic programming, a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, and source and target data are represented not by single values of variables but by probabilistic distributions over all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, thinks that this area is now in the state that deep learning technology was in about ten years ago, so we can expect breakthroughs in the coming years.
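The core idea, representing a quantity as a distribution over all possible values rather than a single value, can be sketched in a few lines. This is an illustrative grid-based toy, not a real probabilistic programming system (frameworks in this space are far richer); the coin-bias example and the numbers are assumptions for demonstration.

```python
# Infer a coin's bias from observed flips. Instead of a single estimate,
# the bias is represented as a distribution over all candidate values,
# updated by the data (Bayes' rule on a discrete grid).

grid = [i / 100 for i in range(1, 100)]    # candidate biases 0.01..0.99
prior = [1.0 / len(grid)] * len(grid)      # uniform prior: total ignorance

heads, tails = 8, 2                        # observed flips
likelihood = [p**heads * (1 - p)**tails for p in grid]

unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]    # a full distribution, not a point

mean = sum(p * w for p, w in zip(grid, posterior))
print(round(mean, 2))  # 0.75, matching the Beta(9, 3) posterior mean 9/12
```

Probabilistic programming languages automate exactly this kind of update for arbitrarily complex models, which is why the output of such a program is itself a distribution that can be inspected and explained.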

Another promising symbolic area is Evgenii Vityaev's semantic probabilistic modelling, which makes it possible to build explainable predictive models based on information represented as semantic networks, with probabilistic inference based on Pyotr Anokhin's theory of functional systems.

One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration: an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive and unexplainable, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner's so-called plateau of productivity in a variety of applications. But working applications based on systems of the second kind (not to mention hybrid neuro-symbolic systems, which the most prominent industry players have only started to explore) have yet to be created.

This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the first AGI-2020 international conference in St. Petersburg in late June 2020, where they can learn about all the latest developments in the field from the world's leading experts.

From our partner RIAC


View post:

Future Goals in the AI Race: Explainable AI and Transfer Learning - Modern Diplomacy

Realeyes Announces The Development of Enhanced Emotion AI Technology Surpassing Industry Standards for Understanding People’s Attention and Emotions -…

NEW YORK, April 28, 2020 (GLOBE NEWSWIRE) -- Realeyes, a leading computer vision and emotion AI company, announced today the availability of its next-generation facial coding technology. Realeyes uses front-facing cameras and the latest in computer vision and machine learning technologies to detect attention and emotion among opt-in audiences as they watch video content. The enhanced classification system will provide customers with more sensitive, accurate insights into the emotional impact of their video content.

Realeyes continues to set the industry standard for facial coding accuracy. The improved classification system results in a 20% increase in emotion detection across all measured emotions from facial cues. It also reduces occasional false positive emotion readings by half. Realeyes is the most accurate emotion detection technology among leading API cloud providers, based on an internal benchmark study of thousands of videos.

"Our technology has reached a new level of sophistication, with the accuracy of our detection beginning to rival that of humans across certain emotions like happiness and surprise," said Elnar Hajiyev, Chief Technology Officer at Realeyes. "Realeyes is building transformational apps to enable companies to create more remarkable experiences for people, and it starts with a foundation of world-class core vision technology." Realeyes today holds 11 patents covering different aspects of building emotion AI technology, and has 29 pending.

The updated classifications are applied to all Realeyes products and improve the platform's ability to accurately analyze a wider variety of viewers' faces and emotions. The new classifications allow for a more nuanced reading through more sensitive emotional curves and bring greater value to the data collected through facial coding.

Said Hajiyev: "More accurate detection enables companies to better understand the pure attention and emotion response of their audiences. More accurate emotion detection also enables advertisers to better predict in-market outcomes such as video view-through rates, so they can create more engaging creative to maximize media spend."

Realeyes' upgraded classifiers allow for a greater range of facial measurements across ethnicities, especially those of Asian heritage. Combined with improvements to Realeyes' performance on mobile devices, the updates pave the way for an entirely new range of products and applications based on emotion AI, along with relevance in new markets around the world. Realeyes recently announced the appointment of Kyoko Tanaka as its Japan country manager, following last year's strategic investment from notable international investors Draper Esprit and NTT DOCOMO Ventures, Inc., the VC arm of NTT Group, Japan's leading mobile operator.

Trained on the world's richest database of facial coding data, Realeyes' technology now incorporates more than 615 million emotional labels across more than 3.8 million video sessions to provide more nuanced insights into the emotional impact of video content. The recent update strengthens Realeyes' predictive modeling for behaviors like view-through rate and responses like interest and likability, while providing best-in-class results 8x faster than the previous version.
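To make the idea concrete, here is a toy sketch of how per-second emotion scores could feed a prediction of an in-market outcome such as view-through rate. The emotion curves, weights and the simple weighted-average rule are invented for illustration; Realeyes' actual predictive models are not public and are certainly far more sophisticated.

```python
# Illustrative only: predict an outcome score from emotion curves.
# All data and weights below are hypothetical.

def predict_vtr(emotion_curve, weights):
    """Weighted sum of average emotion intensities, clipped to [0, 1]."""
    averages = {
        emotion: sum(values) / len(values)
        for emotion, values in emotion_curve.items()
    }
    raw = sum(weights.get(emotion, 0.0) * avg for emotion, avg in averages.items())
    return min(max(raw, 0.0), 1.0)

# Hypothetical per-second intensities for a 5-second ad.
curve = {
    "happiness": [0.2, 0.4, 0.6, 0.7, 0.8],
    "surprise":  [0.1, 0.1, 0.5, 0.3, 0.2],
}
weights = {"happiness": 0.9, "surprise": 0.4}
print(round(predict_vtr(curve, weights), 3))
```

In a real system the weights would be learned from labeled campaign data rather than hand-set, but the shape of the computation is the same: summarize the emotional response, then map it to a predicted business outcome.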

About Realeyes

Using front-facing cameras and the latest in computer vision and machine learning technologies, Realeyes measures how people feel as they watch video content online, enabling brands, agencies and media companies to inform and optimize their content as well as target their videos at the right audiences. Realeyes' technology applies facial coding to predictive, big-data analytics, driving bottom-line business outcomes for brands and publishers.

Founded in 2007, Realeyes has offices in New York, London, Tokyo and Budapest. Customers include brands such as Mars Inc, AT&T, Hershey's and Coca-Cola, agencies Ipsos, MarketCast and Publicis, and media companies such as Warner Media and Teads.

Media Contact:

Ben Billingsley

Broadsheet Communications

(917) 826-1103

ben@broadsheetcomms.com

Read the original here:

Realeyes Announces The Development of Enhanced Emotion AI Technology Surpassing Industry Standards for Understanding People's Attention and Emotions -...

3 Ethical Considerations When Investing in AI – Manufacturing Business Technology

While Artificial Intelligence (AI) has been prevalent in industries such as the financial sector, where algorithms and decision trees have long been used in approving or denying loan requests and insurance claims, the manufacturing industry is at the beginning of its AI journey. Manufacturers have started to recognize the benefits of embedding AI into business operations, marrying the latest techniques with existing, widely used automation systems to enhance productivity.

A recent international IFS study polling 600 respondents, working with technology including Enterprise Resource Planning (ERP), Enterprise Asset Management (EAM), and Field Service Management (FSM), found more than 90 percent of manufacturers are planning AI investments. Combined with other technologies such as 5G and the Internet of Things (IoT), AI will allow manufacturers to create new production rhythms and methodologies. Real-time communication between enterprise systems and automated equipment will enable companies to automate more challenging business models than ever before, including engineer-to-order or even custom manufacturing.

Despite the productivity, cost-savings and revenue gains, the industry is now seeing the first raft of ethical questions come to the fore. Here are the three main ethical considerations companies must weigh up when making AI investments.

At first, AI in manufacturing may conjure up visions of fully automated smart factories and warehouses, but the recent pandemic highlighted how AI can play a strategic role in the back office, mapping different operational scenarios and aiding recovery planning from a finance standpoint. Scenario planning will become increasingly important as governments around the world start lifting lockdown restrictions and businesses plan back-to-work strategies. Those simulations require a lot of data but will be driven by optimization, data analysis and AI.

And of course, it is still relevant to use AI/Machine Learning to forecast cash. Cash is king in business right now. So, there will be an emphasis on working out cashflows, bringing in predictive techniques and scenario planning. Businesses will start to prepare ways to know cashflow with more certainty should the next pandemic or crisis occur.
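As a rough illustration of the predictive cashflow idea, the sketch below projects next month's net cash from a trailing average plus the recent trend. The figures and the forecasting rule are illustrative assumptions only; a production system would use far richer models and scenario inputs.

```python
# Illustrative cashflow projection: trailing mean plus average recent step.
# The rule and figures are hypothetical, for demonstration only.

def forecast_next(cashflows, window=3):
    """Forecast the next value from a trailing mean plus the average step."""
    recent = cashflows[-window:]
    trailing_mean = sum(recent) / len(recent)
    avg_step = (recent[-1] - recent[0]) / (len(recent) - 1)
    return trailing_mean + avg_step

# Hypothetical monthly net cashflow (in thousands).
monthly = [120, 95, 60, 40, 55, 70]
print(forecast_next(monthly))  # → 70.0
```

Even a trivial rule like this lets a finance team ask what-if questions (what if the next three months repeat the worst quarter?), which is the essence of the scenario planning described above.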

For example, earlier in the year the conversation centered on just-in-time scenarios, but now the focus is firmly on what-if planning at the macro supply chain level.

Another example is using a machine learning service and an internal knowledge base to facilitate Intelligent Process Automation, allowing recommendations and predictions to be incorporated into business workflows, along with AI-driven feedback on how business processes themselves can be improved or automated.

The closure of manufacturing organizations and the reduction in operations due to depleted workforces highlight that AI technology in the front office perhaps isn't as readily available as desired, and that progress is needed before it can truly provide a level of operational support similar to humans.

Optimists suggest AI may replace some types of labor, with efficiency gains outweighing transition costs. They believe the technology will come to market at first as a guide-on-the-side for human workers, helping them make better decisions and enhancing their productivity, while having the potential to upskill existing employees and increase employment in business functions or industries that are not in direct competition with AI.

Indeed, recent IFS research points to an encouraging future for a harmonized AI and human workforce in manufacturing. The IFS AI study revealed that respondents saw AI as a route to create, rather than cull, jobs: around 45 percent of respondents expect AI to increase headcount, while 24 percent believe it won't impact workforce figures.

The pandemic has demonstrated AI hasnt developed enough to help manufacturers maintain digital-only operations during unforeseen circumstances, and decision makers will be hoping it can play a greater role to mitigate extreme situations in the future.

It is easy for organizations to say they are digitally transforming. They have bought into the buzzwords, read the research, consulted the analysts, and seen the figures about the potential cost savings and revenue growth.

But digital transformation is no small change. It is a complete shift in how you select, implement and leverage technology, and it occurs company-wide. A critical first step to successful digital transformation is ensuring the appropriate stakeholders are involved from the very beginning. This means manufacturing executives must be transparent when weighing the productivity and profitability gains of AI against the cost of the transformative business changes needed to significantly increase margin.

When businesses first invested in IT, they had to invent new metrics that were tied to benefits like faster process completion or inventory turns and higher order completion rates. But manufacturing is a complex territory. A combination of entrenched processes, stretched supply chains, depreciating assets and growing global pressures makes planning for improved outcomes alongside day-to-day requirements a challenging prospect. Executives and their software vendors must go through a rigorous and careful process to identify earned value opportunities.

Implementing new business strategies will require capital spending and investments in process change, which will need to be sold to stakeholders. As such, executives must avoid the temptation of overpromising. They must distinguish between the incremental results they can expect from implementing AI in a narrow or defined process as opposed to a systemic approach across their organization.

There can be intended or unintended consequences of AI-based outcomes, but organizations and decision makers must understand they will be held responsible for both. We need look no further than the tragedies of self-driving car accidents and the subsequent struggles as liability is assigned not on the basis of the algorithm or the inputs to the AI, but ultimately on the underlying motivations and decisions made by humans.

Executives therefore cannot afford to underestimate the liability risks AI presents. This applies in terms of whether the algorithm aligns with or accounts for the true outcomes of the organization, and the impact on its employees, vendors, customers and society as a whole. This is all while preventing manipulation of the algorithm or data feeding into AI that would impact decisions in ways that are unethical, either intentionally or unintentionally.

Margot Kaminski, associate professor at the University of Colorado Law School, raised the issue of automation bias: the notion that humans trust decisions made by machines more than decisions made by other humans. She argues the problem with this mindset is that when people use AI to facilitate or make decisions, they are relying on a tool constructed by other humans, yet often lack the technical or practical capacity to determine whether they should be relying on those tools in the first place.

This is where explainable AI will be critical: AI which creates an audit path so that, both before and after the fact, there is a clear representation of the outcomes the algorithm is designed to achieve and the nature of the data sources it is working from. Kaminski asserts explainable AI decisions must be rigorously documented to satisfy different stakeholders, from attorneys to data scientists through to middle managers.
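A minimal sketch of what such an audit path might look like in practice: every automated decision is recorded alongside its inputs, model version and stated objective, so it can be reviewed after the fact. All names and the decision rule here are hypothetical, not drawn from any specific product.

```python
# Illustrative audit trail for automated decisions. Every call records
# enough context for a later reviewer to reconstruct what happened and why.

import datetime
import json

AUDIT_LOG = []

def audited_decision(model_version, objective, inputs, decide):
    """Run a decision function and record everything needed to explain it."""
    outcome = decide(inputs)
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "objective": objective,
        "inputs": inputs,
        "outcome": outcome,
    })
    return outcome

# Hypothetical rule standing in for a trained model.
def approve_if_margin_positive(inputs):
    return "approve" if inputs["projected_margin"] > 0 else "reject"

result = audited_decision(
    model_version="order-screening-v3",
    objective="approve orders with positive projected margin",
    inputs={"order_id": 1017, "projected_margin": 4.2},
    decide=approve_if_margin_positive,
)
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

The key design point is that the objective and model version are captured at decision time, not reconstructed later, which is what makes the record useful to attorneys, data scientists and managers alike.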

Manufacturers will soon move past the point of trying to duplicate human intelligence using machines, and towards a world where machines behave in ways of which the human mind is simply not capable. While this will reduce production costs and increase the value organizations are able to return, the shift will also change the way people contribute to the industry, the role of labor, and civil liability law.

There will be ethical challenges to overcome, but those organizations that strike the right balance between embracing AI, being realistic about its potential benefits, and keeping workers happy will come out on top. Will you be one of them?


Meet Five Synthetic Biology Companies Using AI To Engineer Biology – Forbes

AI is changing the field of synthetic biology and how we engineer biology. It's helping engineers design genetic circuits in new ways, and it could leave a remarkable impact on the future of humanity.

TVs and radios blare that artificial intelligence is coming, and it will take your job and beat you at chess.

But AI is already here, and it can beat you and the world's best at chess. In 2012, it was also used by Google to identify cats in YouTube videos. Today, it's the reason Teslas have Autopilot and Netflix and Spotify seem to read your mind. Now, AI is changing the field of synthetic biology and how we engineer biology. It's helping engineers design genetic circuits in new ways, and it could leave a remarkable impact on the future of humanity through the huge investment it has been receiving ($12.3 billion in the last 10 years) and the markets it is disrupting.

The idea of artificial intelligence is relatively straightforward: it is the programming of machines with reasoning, learning, and decision-making behaviors. Some AI algorithms (which are just sets of rules that a computer follows) are so good at these tasks that they can easily outperform human experts.

Most of what we hear about artificial intelligence refers to machine learning, a subclass of AI algorithms that extrapolate patterns from data and then use that analysis to make predictions. The more data these algorithms collect, the more accurate their predictions become. Deep learning is a more powerful subcategory of machine learning, in which many computational layers organized into neural networks (inspired by the structure of the brain) operate in tandem to increase processing depth, facilitating technologies like advanced facial recognition (including FaceID on your iPhone).
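For readers who want a concrete picture, here is a deliberately tiny example of the machine-learning idea: extrapolate a pattern from data, then use it to predict unseen values. The data points are invented, and a real system would use vastly more data and a much richer model than a straight line.

```python
# A toy version of "learn a pattern from data, then predict":
# ordinary least-squares fit of a line. Data is hypothetical.

def fit_line(points):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    slope = num / den
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": invented (input, outcome) pairs.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
slope, intercept = fit_line(data)

def predict(x):
    return slope * x + intercept

print(predict(5))  # prediction for an input the model has never seen
```

Deep learning replaces the straight line with millions of parameters arranged in layers, but the workflow is the same: fit to observed data, then predict the unobserved.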

[For a more detailed explanation of artificial intelligence and its various subcategories, check out this article and its flowchart.]

Regardless of the type of AI, or its application, we are in the midst of a computational revolution that is extending its tendrils beyond the computer world. Soon, AI will impact the medicines you take, the fuels you burn, and even the detergents that you use to wash your clothes.

Biology, in particular, is one of the most promising beneficiaries of artificial intelligence. From investigating genetic mutations that contribute to obesity to examining pathology samples for cancerous cells, biology produces an inordinate amount of complex, convoluted data. But the information contained within these datasets often offers valuable insights that could be used to improve our health.

In the field of synthetic biology, where engineers seek to rewire living organisms and program them with new functions, many scientists are harnessing AI to design more effective experiments, analyze their data, and use it to create groundbreaking therapeutics. Here are five companies that are integrating machine learning with synthetic biology to pave the way for better science and better engineering.

Riffyn (Oakland, CA, founded in 2014, has raised $24.9M)

Machine learning algorithms must begin with large amounts of data, but in biology good data is incredibly challenging to produce because experiments are time-consuming, tedious and hard to replicate. Fortunately, one company is addressing this bottleneck by making it easier for scientists to do exactly that.

Riffyn's cloud-based software platform helps researchers standardize, define, and perform experiments, and streamlines data analysis. This enables researchers to focus on doing the actual science and makes the use of machine learning algorithms to extract deeper insights from their experiments an everyday reality.

With this platform, experiments can be conducted more efficiently, leading to massive decreases in cost, improvements in productivity and quality, and data that is primed to be further analyzed with sophisticated machine learning techniques. That means companies can use this technology to develop new proteins for cancer therapeutics, and they can do it much faster and better than before. Riffyn already works with 8 of the top 15 global biotech and biopharma firms, and it was founded just five years ago.

Station B (Cambridge, UK, officially launched in 2019)

There are a lot of moving parts in the synthetic biology world, which makes it difficult but vital to streamline and integrate operations as much as possible. For the last decade, the computational biology arm of Microsoft Research, Station B, has been developing machine learning models for biology to fix this problem and expedite research across a variety of fields, from medicine to construction.

Its efforts are paying off in the form of various new partnerships, too. With Synthace, it is developing software to automate and expedite experiments in the lab. Station B is additionally working with Princeton to research the mechanisms behind biofilms (relevant to how bacterial colonies develop antibiotic resistance) by utilizing machine learning-based methods that extract patterns from images taken during different stages of bacterial growth. Station B is also collaborating with Oxford Biomedica, a company harnessing these machine learning capabilities to improve a promising gene therapy for leukemia and lymphoma. This is perhaps one of synthetic biology's biggest areas for impact: designing therapeutics to combat a variety of diseases.

Atomwise (San Francisco, CA, founded in 2012, has raised $51M)

Atomwise is tackling drug development with its deep-learning platform, AtomNet, which can rapidly model molecular structures. It can accurately analyze chemical interactions within small molecules to predict their efficacy against diseases ranging from Ebola to multiple sclerosis. By utilizing data about atomic structure, Atomwise designs novel therapeutics that would otherwise be nearly impossible to develop.

It has numerous academic and corporate partnerships with institutions including Charles River Laboratories, Merck, the University of Toronto, and Duke University School of Medicine, which are providing many of the real-world applications and opportunities to drive this research forward. Atomwise also recently announced a collaboration worth up to $1.5B with Jiangsu Hansoh Pharmaceutical Group, the Chinese company behind one of this year's biggest biopharma IPOs.

While Atomwise's approach to designing molecules is powerful and well on its way to combating multiple diseases, there is no single perfect method for computational discovery. That's where Arzeda comes in.

Arzeda (Seattle, WA, founded in 2008, has raised $15.2M)

Arzeda, a company originating from the Baker Lab at the University of Washington, uses its protein design platform (rooted, of course, in machine learning algorithms) to engineer proteins for everything from industrial enzymes to crops and their microbiomes.

Arzeda builds its molecules entirely from scratch (de novo), rather than optimizing existing ones, to perform new functions not found anywhere in nature; deep learning techniques are vital to ensure the proteins it designs fold correctly (a very computationally demanding problem) and function as intended. Once the computational steps are complete, the new proteins are produced through fermentation (just like beer), bypassing natural evolution to efficiently produce brand-new molecules.

Distributed Bio (South San Francisco, CA, founded in 2012, self-funded by licensing technologies)

On the other end of the design spectrum, Distributed Bio harnesses rational protein engineering to optimize existing antibodies, which are the proteins in your body that detect bacteria and fight off other disease-causing invaders, to create novel therapeutics.

Among the many immunology-engineering technologies the company boasts (from a universal flu vaccine to a broad-coverage snake antivenom) is the Tumbler platform. Using machine learning methods, Tumbler creates over 500 million variations of a starting antibody to expand and quantify the search space of valuable changes to the molecule; it then scores sequences to predict how well they would bind to their target in real life and uses that information to further improve the best-scoring sequences. The cycle continues as the top sequences are synthesized and tested in the lab. Eventually, an archetypal molecule emerges to fulfill the intended therapeutic purpose: something not necessarily observed in nature, but combining all of the best possible characteristics.
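The generate-score-select cycle described above can be sketched generically as hill climbing over a toy sequence. This is a schematic stand-in, not Distributed Bio's actual Tumbler algorithm: a real pipeline would score millions of candidates with a learned binding-affinity model, not against a known target.

```python
# Generic generate-score-select loop, illustrated on a toy "sequence".
# The target, alphabet size, and scoring rule are all stand-ins.

import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino-acid letters
TARGET = "MKTAYIAKQR"              # hypothetical ideal binder

def score(seq):
    """Toy score: fraction of positions matching the target."""
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

def mutate(seq, rng):
    """Create a variant by changing one random position."""
    i = rng.randrange(len(seq))
    return seq[:i] + rng.choice(ALPHABET) + seq[i + 1:]

def optimize(start, rounds=2000, variants=20, seed=0):
    rng = random.Random(seed)
    best = start
    for _ in range(rounds):
        pool = [mutate(best, rng) for _ in range(variants)]
        candidate = max(pool, key=score)
        if score(candidate) > score(best):
            best = candidate  # keep only strict improvements
    return best

result = optimize("A" * len(TARGET))
```

The loop mirrors the description in the text: generate variants, score them, keep the best, and repeat, with the lab synthesis step standing outside the computational cycle.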

Tumbler has helped to enable a wide range of applications beyond traditional single-target drug development, from designing antibodies that bind to multiple targets simultaneously to creating chimeric antigen receptor T-cell (CAR-T) therapies (together with Chimera Bioengineering) for cancer treatments with reduced toxicity. The power of this end-to-end optimization platform to generate ideal antibodies at scale is unprecedented.

While this progress is exciting, artificial intelligence is not a universal replacement for our investigations of the natural world, nor is it the only way to develop cures for human diseases. At times, it may not be technically useful or even ethically sound. As we continue to reap the benefits of this technology and increasingly incorporate it into our daily lives, we must continue having conversations about the design, implementation, and ethics of innovations in synthetic biology and AI; we stand on the precipice of a new age for science and humanity.

Thanks to Aishani Aatresh for additional research and reporting in this article. Aishani is also a researcher at Distributed Bio developing computational immunoengineering methods to generate superior antibodies. Please note: I am the founder of SynBioBeta, the innovation network for the synthetic biology industry, and some of the companies that I write about are sponsors of the SynBioBeta conference (click here for a full list of sponsors).


NVIDIA’s Accelerated Computing Platform To Power Japan’s Fastest AI Supercomputer – Forbes

Tokyo Tech is in the process of building its next-generation TSUBAME supercomputer, featuring NVIDIA GPU technology and the company's Accelerated Computing Platform. TSUBAME 3.0, as the system will be known, will ultimately be used in tandem with ...
Japan announces AI supercomputer – Scientific Computing World



Reveal Acquires NexLP to become the leading AI-powered eDiscovery Solution – PR Newswire India

"The future of eDiscovery is artificial intelligence. We've acquired the leader in this space to ensure our platform is powered by cutting-edge AI technology and NexLP's premier data science team," said Reveal CEO, Wendell Jisa. "This exclusive integration of NexLP AI into Reveal's solution provides our clients the opportunity to lead in the evolution of how law is practiced."

NexLP's artificial intelligence platform turns disparate, unstructured data - including email communications, business chat messages, contracts and legal documents - into meaningful insights that can be used to deliver operational efficiencies and proactive risk mitigation for legal, corporate and compliance teams.

Reveal clients have access to the next-generation solution now. The companies have worked to fully integrate NexLP's AI software into Reveal's review software for more than a year. All features, including the industry-exclusive ability to run multiple AI models, as well as all future functionality, become part of Reveal's standard software. NexLP's artificial intelligence platform will remain available as a stand-alone application for current clients.

With the acquisition, Jay Leib, Co-Founder and CEO of NexLP, joins the leadership team of Reveal as its EVP of Innovation & Strategy.

"We chose Reveal, after considering all the major players in the space, because they offer by far the most comprehensive, solutions-oriented technology on the market, and we have a shared vision for the future of legal technology," said Jay Leib, Reveal EVP of Innovation & Strategy. "Reveal's global footprint and ability to deploy the Reveal solution in the cloud or on-premise enables us to rapidly expand the adoption of AI to tens of thousands of legal, risk and compliance professionals overnight. Our existing clients and partners should all be thrilled with our ability to expand our capabilities by joining Reveal."

The NexLP acquisition is Reveal's second major investment since Gallant Capital Partners, a Los Angeles-based investment firm, acquired a majority stake in Reveal in 2018. In June 2019, Reveal acquired Mindseye Solutions, an industry-leading processing and early case assessment software solution.

About Reveal Data Corporation

Reveal helps legal professionals solve complex discovery problems. As a cloud-based provider of eDiscovery, risk and compliance software, Reveal offers the full range of processing, early case assessment, review and artificial intelligence capabilities. Reveal clients include Fortune 500 companies, legal service providers, government agencies and financial institutions in more than 40 countries across five continents. Featuring deployment options in the cloud or on-premise, an intuitive user design, multilingual user interfaces and the automatic detection of more than 160 languages, Reveal accelerates legal review, saving users time and money. For more information, visit http://www.revealdata.com.

About NexLP

NexLP's Story Engine uses AI and machine learning to derive actionable insight from structured and unstructured data, helping legal, corporate and compliance teams proactively mitigate risk and uncover untapped opportunities faster and with a greater understanding of context. In 2014, NexLP was selected to be a member of TechStars Chicago. For more information, visit http://www.nexlp.com.

Contact

Jennifer Fournier, [email protected]

Photo - https://mma.prnewswire.com/media/1226822/Jisa_and_Leib_Announcement.jpg


SOURCE Reveal


A Super Smash Bros-playing AI has taught itself how to stomp professional players – Quartz

The AI, nicknamed Phillip, had been built by a Ph.D. student from MIT, with help from a friend at New York University, and it honed its craft inside an MIT supercomputer. By the time Gravy stopped playing, the bot had killed him eight times, compared to ...

