James Webb Space Telescope images expected at White House unveiling …

President Biden will unveil the first color image from the James Webb Space Telescope at the White House on Monday, heralding the end of tests and checkout and the beginning of science operations by the world's most powerful space observatory.

"We're going to give humanity a new view of the cosmos, and it's a view that we've never seen before," NASA Administrator Bill Nelson, who will join Biden at the White House, told reporters in a preview briefing.

"One of those images ... is the deepest image of our universe that has ever been taken," he said. "And we're only beginning to understand what Webb can and will do."

NASA plans to release additional "first light" images Tuesday, photos designed to show off Webb's ability to capture light from the first generation of stars and galaxies; to chart the details of stellar evolution, from starbirth to death by supernova; and to study the chemical composition of exoplanet atmospheres.

Over the past 30 years, the Hubble Space Telescope has become one of the most iconic instruments in astronomical history, helping astronomers pin down the age of the universe, confirming the presence of supermassive black holes, capturing the deepest views of the cosmos ever collected and providing flyby-class images of planets in Earth's solar system.

But Webb, operating at just a few degrees above absolute zero behind a tennis-court-size sunshade, promises to push the boundaries of human knowledge even deeper with a 21.3-foot-wide segmented primary mirror capable of detecting the faint, stretched-out infrared light from the era when stars began "turning on" in the wake of the Big Bang.

Launched on Christmas Day, Webb is stationed in a gravitationally stable orbit nearly 1 million miles from Earth. For the past six months, engineers and scientists have been working through a complex series of deployments, activations and checkouts, fine-tuning the telescope's focus and optimizing the performance of its four science instruments.

The initial images released Monday and Tuesday, selected by an international team of astronomers, will "demonstrate to the world that Webb is, in fact, ready for science, and that it produces excellent and spectacular results," said Klaus Pontoppidan, Webb project scientist at the Space Telescope Science Institute.

"And it's also to highlight the breadth, the sheer breadth of science that can be done with Webb and to highlight all of the four science instruments," he added. "And last but not least, to celebrate the beginning of normal science operations."

The targets for Webb's first public images include the Carina Nebula, the Southern Ring Nebula, the galaxy group Stephan's Quintet, the giant exoplanet WASP-96 b and the galaxy cluster SMACS 0723.

"The first images will include observations that span the range of Webb science themes," said Pontoppidan. "From the early universe, the deepest infrared view of the cosmos to date. We will also see an example of how galaxies interact and grow, and how these cataclysmic collisions between galaxies drive the process of star formation.

"We'll see a couple of examples from the life cycle of stars, starting from the birth of stars, where Webb can reveal new, young stars emerging from their natal cloud of gas and dust, to the death of stars, like a dying star seeding the galaxy with new elements and new dust that may one day become part of new planetary systems."

Last but not least, he said, the team will show off the first chemical fingerprints from the atmosphere of an exoplanet.

One of the Hubble Space Telescope's most astonishing images was its initial "deep field" look at a tiny patch of seemingly empty sky over a 10-day period in 1995. To the amazement of professionals and the public alike, that long-exposure image revealed more than 3,000 galaxies of every shape, size and age, some of them the oldest, most distant ever seen.

Subsequent Hubble deep fields pushed even farther back in time, detecting the faint light of galaxies that were shining within about 500 million years of the Big Bang. How stars formed and got organized so quickly into galactic structures is still a mystery, as is the development of the supermassive black holes at their cores.

Webb's four instruments are expected to push the boundaries still closer to the beginning of galaxy formation. A test image from the telescope's Canadian-built Fine Guidance Sensor, an image that wasn't optimized for the detection of extremely faint objects, nonetheless revealed thousands of galaxies.

Webb's look at SMACS 0723 is expected to demonstrate the enormous reach of the observatory.

"This is really only the beginning, we're only scratching the surface," Pontoppidan said. "We have, in the first images, a few days' worth of observations. Looking forward, we have many years of observation, so we can only imagine what that will be."

Bill Harwood has been covering the U.S. space program full-time since 1984, first as Cape Canaveral bureau chief for United Press International and now as a consultant for CBS News. He covered 129 space shuttle missions, every interplanetary flight since Voyager 2's flyby of Neptune and scores of commercial and military launches. Based at the Kennedy Space Center in Florida, Harwood is a devoted amateur astronomer and co-author of "Comm Check: The Final Flight of Shuttle Columbia."

First James Webb Telescope photo to be unveiled by Biden

President Joe Biden will unveil the much-anticipated first full-color image from NASA's James Webb Space Telescope on Monday, agency officials confirmed.

The image, known as "Webb's First Deep Field," will be the deepest and highest-resolution infrared view of the universe ever captured, showing myriad galaxies as they appeared up to 13 billion years in the past, according to NASA.

The agency and its partners, the European Space Agency and the Canadian Space Agency, are set to release a separate batch of full-color images from the Webb telescope on Tuesday, but Biden, Vice President Kamala Harris and the public will get a sneak peek a day early.

NASA will brief the president and the vice president on Monday, agency officials said, and the first image will be revealed at an event at 5 p.m. ET at the White House.

The $10 billion James Webb Space Telescope is humanity's largest and most powerful space telescope, and experts have said it could revolutionize our understanding of the cosmos.

After the White House event, NASA will unveil more images in an event streamed live Tuesday at 10:30 a.m. ET. NASA officials said that batch will include the Webb telescope's first spectrum of an exoplanet, showing light emitted at different wavelengths from a planet in another star system. The images could offer new insights into the atmospheres and chemical makeups of other exoplanets in the cosmos.

Some images included in the Tuesday release will show how galaxies interact and grow, and others will depict the life cycle of stars, from the emergence of new ones to violent stellar deaths.

The Webb telescope launched into space on Dec. 25. The tennis-court-size observatory is able to peer deeper into the cosmos and in greater detail than any telescope that has come before it.

Denise Chow is a reporter for NBC News Science focused on general science and climate change.

NASA’s James Webb Space Telescope: Here’s What You’ll See in the First …

NASA, along with the European and Canadian space agencies, will be releasing the first science images from the brand new James Webb Space Telescope on Tuesday, and now we know what celestial bodies we'll be seeing in those historic pictures.

JWST is the long-awaited successor to the Hubble Space Telescope that finally launched on Christmas Day after years of delays.

On Friday, NASA revealed the list of cosmic objects that JWST will target for its first batch of full-color images offering unprecedented and detailed views of deep space. If the telescope's stunning first test image is any indication, it's going to be as good as any Instagram feed out there.

The targets include the Carina Nebula and Southern Ring Nebula, which are bright areas of gas and other material. The Carina Nebula (pictured above) is a so-called stellar nursery where stars are forming, and it's filled with massive stars that help make it one of the largest and brightest nebulas in the sky. The Southern Ring Nebula is a planetary nebula -- in this case, a wide cloud of gas half a light-year in diameter surrounding a dying star -- and relatively close on a cosmic scale, at just 2,000 light-years away.

The Southern Ring Nebula is also known as the "Eight-Burst" Nebula because it appears to be a figure 8 when seen through some telescopes.

Two other targets we'll see in fantastic high resolution next week are the galaxy group Stephan's Quintet, a particularly photogenic grouping of galaxies that seem to be dancing around each other for eternity, and SMACS 0723, which is a massive galaxy cluster that can act as a so-called gravitational lens to help scientists see deeper into space and observe fainter galaxies.

This quintet of galaxies is made up of four galaxies that are actually near each other and a fifth that appears nearby but is really in the foreground and much closer to Earth.

JWST also is taking a look at the planet WASP-96b, a gas giant world about half the mass of Jupiter and located 1,150 light-years from Earth. The powerful new instruments on the space telescope should be able to provide new insights into the composition of the planet's atmosphere and a fun teaser of what we'll soon discover about other exoplanets, including those that are more Earth-like.

The images that the space agencies will unveil on July 12 are just the beginning. Scientists have applied to use the telescope through a competitive process, and the first year of observations has already been scheduled. It's quite likely that JWST will change our perspective on some aspects of the universe in the months and years to come.

NASA says the James Webb Space Telescope will be hit by meteorites – TweakTown

The James Webb Space Telescope is slated to unlock the universe to researchers with powerful instruments capable of looking further back in time than ever before.

A new report published in Nature last Friday estimates that the James Webb Space Telescope (JWST) will be slapped by at least one meteorite per month for the rest of its life. Notably, NASA stated in early June that its next-generation space telescope was struck by a micrometeorite in May, and that the impact didn't cause any significant damage. However, NASA estimates that over time micrometeorite impacts will reduce the total lifespan of the observatory.

So far, Webb has been smacked by five micrometeorites, with the fifth being the most recent and the largest. When Webb was being designed, engineers knew the large observatory would be prone to fast-moving dust particles impacting the mirrors, which is why they spent a great deal of time testing how well Webb's mirrors could endure micrometeorite impacts.

According to estimations, on average, Webb will be hit with one micrometeorite per month, and after 10 years, only 0.1% of the primary mirror would be damaged. Webb has an anticipated lifespan of 20 years.
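
The figures quoted above can be combined in a quick back-of-the-envelope sketch. The calculation below uses only the numbers in this article (one impact per month, 0.1% mirror damage after 10 years); the implied per-impact damage is an illustrative derivation, not a NASA figure:

```python
# Back-of-the-envelope sketch of the micrometeorite figures quoted above.
HITS_PER_MONTH = 1
YEARS = 10
DAMAGED_FRACTION_AT_10_YEARS = 0.001  # 0.1% of the primary mirror

total_hits = HITS_PER_MONTH * 12 * YEARS
damage_per_hit = DAMAGED_FRACTION_AT_10_YEARS / total_hits

print(total_hits)               # 120 impacts over a decade
print(f"{damage_per_hit:.6%}")  # ~0.000833% of the mirror per impact
```

At that rate, even Webb's 20-year anticipated lifespan would see only a fraction of a percent of the mirror degraded.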

Video Puts into Perspective How Powerful the James Webb Telescope Is – PetaPixel

Last week, NASA shared a photo captured by the James Webb Space Telescope's guidance camera that, while imperfect, is the deepest image ever captured of the universe so far.

While it is one thing to state that a photo is the farthest ever captured of the universe, it can be difficult to understand just how monumental that achievement is without some context. Ethan Gone, a self-described amateur astrophotographer who goes by the name k2qogir on YouTube, puts the photo in a more easily digestible perspective that truly showcases the incredible distance that James Webb is able to image.

In order to see the area of the sky that James Webb captured, Gone took a six-hour exposure of the same area and compared the results.

"The recent James Webb Space Telescope (JWST) guide camera's test image looks really similar to Hubble's deep fields, which are my favorite. I decided to take a long exposure of the same target to see what my telescope can see and compare it to JWST's image. I found one really faint galaxy 26 to 32 million light-years away, and a cute planetary nebula called Abell 39," Gone explains.

From his perspective, the rest of the area around that region was just empty space.

"Match my image with the JWST's color and zoom in to the same region, [and] my telescope can only offer a handful of faint stars," he says.

By comparison, James Webb's infrared telescope revealed what appears to be thousands of galaxies and stars in this small region of the night sky.

To better understand how impressive Webb's view of this region of space is, Gone shows that the area it imaged is approximately the size of Mare Crisium on the Moon, as seen from Earth.

In short, what Webb imaged with its guidance camera is just one astronomically tiny portion of the sky that looks nearly empty from Earth, yet even without using its main science instruments it was able to see a huge number of stars and galaxies. It showcases the sheer vastness of space and how much more humans can learn about the universe thanks to the exceptional power of the new observatory.

NASA is set to release the first full-color photos captured by the James Webb Space Telescope this week. The first will be released later today by President Joe Biden at 5:00 PM ET, with the other four to be released on July 12 starting at 10:30 AM EDT.

James Webb Space Telescope’s 1st photos | Space

An image captured by the James Webb Space Telescope's Fine Guidance Sensor reveals hundreds of distant galaxies.

(Image credit: NASA, CSA, and FGS team)

Update: Late on Sunday night (July 10), NASA announced that President Joe Biden would unveil the first of the new science-quality images on Monday (July 11) at 5 p.m. EDT (2100 GMT). You can watch the event live here on Space.com courtesy of the agency.

Original story: NASA will unveil the first science-quality images from its next-generation James Webb Space Telescope on Tuesday (July 12). You can watch the event live here on Space.com courtesy of the agency beginning at 10:30 a.m. EDT (1430 GMT).

As highly anticipated as these images will be, they aren't the first photos from the massive space observatory. The James Webb Space Telescope, also known as JWST or Webb, launched on Dec. 25, 2021, and since then, NASA and its partners on the project have offered tantalizing peeks at what is to come.

The image above, which NASA released on Wednesday (July 6), represents 32 hours of observing time from JWST's Fine Guidance Sensor. That device is not one of the telescope's four key science instruments; instead, it keeps the observatory pointing steadily at its target. Still, the image is the deepest field ever captured, a superlative that NASA Administrator Bill Nelson hinted one of the formal first images would steal.

We'll be updating this gallery live on Tuesday to share the official first images as they are unveiled.

President Biden will reveal the first James Webb Space Telescope image today at 5PM ET – Yahoo! Voices

NASA has decided to reveal the first James Webb Space Telescope (JWST) image today rather than waiting until tomorrow as planned, it announced in a tweet. President Joe Biden will do the honor at 5PM ET, with a live stream of the event available on NASA TV and images available simultaneously on NASA's website.

Anticipation has been building for the first images, to say the least. NASA stoked that on Friday by announcing the targets to be shown, including the Carina and Southern Ring Nebulae, the gas exoplanet WASP-96b and a deep-field view of the galaxy cluster SMACS 0723. Only a select group of scientists and administrators have viewed the images so far. "What I have seen moved me, as a scientist, as an engineer, and as a human being," said NASA deputy administrator Pam Melroy.

It appears that just a single image will be revealed today, but NASA didn't say which one. The rest are still slated to arrive tomorrow, starting at 9:45 a.m. ET with remarks by NASA and Webb leadership. That'll be followed by live coverage of the image release, slated for 10:30 AM ET on NASA TV, YouTube, Facebook, Twitter, Twitch and Dailymotion.

James Webb Telescope’s first ‘stunning’ science images set to be revealed – Welland Tribune

After two decades and more than $10 billion, the most powerful telescope yet built will finally be making its splashy public debut Monday with a little help from the president of the United States.

On Monday evening, U.S. President Joe Biden will release one of the first science images captured by the James Webb Space Telescope (JWST), arguably the most complex machine that humanity has ever built, according to one of its creators.

Launched in late December, the Webb telescope is 100 times more powerful than its astronomy-altering predecessor, the Hubble, thanks primarily to a mirror that has 6.25 times the area. It's designed to observe the celestial skies in infrared, not only allowing it to pierce the veils of cosmic dust that often obscure visible light, but better equipping it to see objects in the furthest reaches of the universe.
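
The "6.25 times the area" figure is consistent with the commonly quoted effective collecting areas of the two telescopes; the areas below are assumed round numbers for illustration, not figures from the article:

```python
# Sketch of where the "6.25 times the area" comparison likely comes from.
# Assumed effective collecting areas: ~25 m^2 for Webb vs ~4 m^2 for Hubble.
WEBB_COLLECTING_AREA_M2 = 25.0
HUBBLE_COLLECTING_AREA_M2 = 4.0

area_ratio = WEBB_COLLECTING_AREA_M2 / HUBBLE_COLLECTING_AREA_M2
print(area_ratio)  # 6.25 -- that much more light gathered per unit of time
```

More collecting area means fainter, more distant objects become detectable in the same exposure time.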

It's spent the past six months travelling to its orbit 1.5 million kilometres from Earth, deploying, testing and calibrating its instruments. The picture to be released Monday is one of the first five science images (the rest to be released Tuesday) selected to show off the new telescope's astronomical observation chops.

"When you see the images, first of all, they're just stunning, like very visually beautiful," says Sarah Gallagher, science adviser to the Canadian Space Agency president, who's already had a sneak peek.

"They have some of the elements that you've seen from Hubble with the richness of the structures and the details. But they're really next-level because Webb is so much more sensitive. It's beautiful, just exquisite imaging. It'll really jump out at you."

The five images to be released were selected not only to show off the capabilities of the new telescope, but also to highlight the four major themes of Webb telescope research, she says.

One is of an exoplanet, a gas giant like Jupiter, only about half the mass. Another image will be looking across the furthest reaches of the universe to see galaxies in their earliest stages of development. Another will probe deeper into our own Milky Way to see how stars are formed. And yet another will be looking at what happens when those stars begin to die.

While those first science images to be released aren't connected to any particular research, Gallagher says they're already showing us things that we've never seen before.

"I have colleagues who are going to jump on these data the second that they're available and download them and start working on them," she says. "I expect there's going to be papers that start being submitted within days with this new data. The astronomy community is definitely very excited."

Biden's late arrival on the Webb bandwagon should not detract from the veritable army that has spent the last two decades getting the telescope off the ground.

When the JWST launched on Christmas Day last year, it carried with it the hopes and dreams, and left behind the furrowed brows and chewed fingernails, of thousands of scientists, engineers and technicians from 14 countries and three space agencies: NASA, the Canadian Space Agency and the European Space Agency.

The release of that first image marks the completion of one of mankind's most herculean tasks: that of conceptualizing, designing, manufacturing, testing, launching, deploying and calibrating an instrument that makes the famed Hubble telescope pale in comparison.

It's now orbiting the Sun in the deep cold of space, some 1.5 million kilometres from this planet, four times further away from us than the moon, and far enough to minimize interference from the Earth and Sun.

But with the completion of that task, the work that the JWST was intended to do, the scientific work, has only just begun.

Researchers across the world are rubbing their hands together with glee at the thought of what the telescope could produce.

When the research begins, thousands of astronomers will be using the Webb telescope to probe back in time to an era only a few hundred million years after the Big Bang itself, a time astronomers refer to as the Dark Ages, when the first stars began to appear.

The light collected by the telescope will have been travelling toward it for more than 13 billion years, giving researchers a picture of what the universe looked like when that light began its journey.

"The Webb telescope is the most sophisticated, complex space science instrument that has ever been created," says Gallagher. "And it works. It works beautifully. It works exactly as expected."

"There have been people thinking about what this telescope is going to do for years. And the fact that it's delivering, and in some areas delivering better than expected, means they're going to realize those expectations of what they were hoping to do."

The first five science images

Carina Nebula: Nebulae are the stellar nurseries in which stars are born. Carina is one of the largest and brightest in the night skies, approximately 7,600 light-years away, in the constellation Carina, and home to many stars several times more massive than our sun. It's also home to the most luminous star we know of in the Milky Way, the primary star of WR25, a binary star system. It's about 2.4 million times brighter than our sun.

WASP-96 b: A giant, mostly gaseous planet nearly 1,150 light-years from Earth. It's about half the mass of Jupiter, and it orbits its star every 3.4 days. The JWST will allow scientists to analyze the atmosphere of the planet by looking at the spectrum of its star's light as it passes through the planet's atmosphere.

Southern Ring Nebula: Also called the Eight-Burst Nebula, it's an expanding cloud of gas surrounding a dying star. It's about 2,000 light-years from Earth and about half a light-year in diameter. An analysis of some of the elements present in the outer layers of the nebula may give us a clue to some of the processes involved in the formation of new solar systems.

SMACS 0723: This is an area where the gravity of a cluster of galaxies in the foreground distorts space in such a way that the cluster acts like a lens, enabling astronomers to have a better view of objects in the background, objects that, because of their extreme distance, appear as they did in the earliest days of the universe. The process is called gravitational lensing.

Stephan's Quintet: A grouping of five galaxies, about 290 million light-years away, in the constellation Pegasus. Four of the five galaxies are gravitationally bound to each other, resulting in a series of close encounters. The fifth galaxy is actually a foreground galaxy, about seven times closer to Earth than the rest. The cluster was first identified in the 1800s and has been studied extensively since.

Canadian contributions

Near-Infrared Imager and Slitless Spectrograph: The NIRISS, which observes infrared wavelengths, also includes a spectrograph, which allows astronomers to look at the atmospheres of planets to determine whether there are traces of gases such as oxygen, carbon dioxide or methane, which might indicate the possibility that life exists on those planets.

Fine Guidance Sensor: The FGS targets a series of stars as reference points and, measuring their positions 16 times per second, uses them to keep the telescope pointed at its target. Right now, it's being used to help scientists calibrate the mirror segments.

It's so accurate that it can detect the telescope being off target by the equivalent of the width of a human hair at a distance of a kilometre.
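
That analogy can be turned into a rough angle with the small-angle approximation. The hair width below is an assumed value (human hair varies from roughly 20 to 180 micrometres), so the result is illustrative only:

```python
import math

# Converting the "human hair at one kilometre" analogy into an angle.
HAIR_WIDTH_M = 70e-6  # assumed ~70 micrometres; the article gives no number
DISTANCE_M = 1000.0   # one kilometre

angle_rad = HAIR_WIDTH_M / DISTANCE_M          # small-angle approximation
angle_mas = math.degrees(angle_rad) * 3600e3   # radians -> milliarcseconds

print(f"{angle_mas:.1f} milliarcseconds")      # ~14.4 mas
```

For comparison, a full moon spans about 1,800,000 milliarcseconds, which gives a sense of how tiny an error the sensor can register.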

While it's not an observational instrument per se, it is capable of capturing data images that help with its main function: keeping the Webb telescope pointed in exactly the right direction.

The Loop: Djokovic defeats Kyrgios in Wimbledon final, Steve Bannon reportedly agrees to testify at January 6 hearing, first James Webb Space…

Good morning. It's Monday, July 11, and you're reading The Loop, a quick wrap-up of today's news.

Novak Djokovic has defeated Australian Nick Kyrgios in the Wimbledon men's singles final but capped off an eventful match with some generous praise for his opponent.

Djokovic, who has won the last four finals at the All England Club and seven in total, said in a post-match interview that he believes this won't be the last we see of Kyrgios in grand slam finals.

"I wish you all the best. I really respect you a lot. I think you are a phenomenal tennis player and athlete," Djokovic said to Kyrgios.

If you missed the match, look back on the best bits here.

We're less than 24 hours away from seeing the first full-colour images from the new James Webb Space Telescope (JWST).

The images are expected to be the most detailed snapshots ever taken of our cosmos.

Data from the JWST will also include the chemical fingerprint of an atmosphere from a hellishly alien planet about half the mass of Jupiter, known as WASP-96b.

"What I've already seen has moved me as a scientist, as an engineer, as a human," NASA deputy administrator Pam Melroy says.

See you again soon.

ABC/wires

Covid-19 drug development to include AI by Iktos and SRI. – Pharmaceutical Technology

The companies plan to use AI to identify potential Covid-19 drug candidates. Credit: SRI International.

Artificial intelligence (AI) technology provider Iktos and research centre SRI International have partnered to discover and develop drugs to treat various viruses, including the novel coronavirus that causes Covid-19 and influenza.

Iktos will combine its generative modelling technology with SRI's fully automated synthetic chemistry platform, called SynFini, to design compounds and speed up the identification of drug candidates.

The Iktos AI technology leverages deep generative models to accelerate the drug discovery process, made possible via the automatic design of virtual molecules with the required characteristics of a new drug candidate.

Iktos co-founder and CEO Yann Gaston-Mathé said: "Iktos' generative AI technology has proven its value and potential to accelerate drug discovery programs in multiple collaborations with renowned pharmaceutical companies.

"We are eager to apply it to SRI's endonuclease programme and hope our collaboration can make a difference and speed up the identification of a promising new therapeutic option for the treatment of Covid-19."

The SynFini platform is intended to speed up chemical discovery and development, advancing drugs to the clinic quickly and affordably, said SRI.

The closed-loop platform is said to automate the design, reaction screening and optimisation (RSO), as well as generation of target molecules.

SRI's ongoing programme is working towards drugs that can block endonuclease enzymes, known to be prevalent in several viruses.

These enzymes are associated with viral replication and inhibition of host resistance to infection.

Covid-19 sequence analysis suggests the presence of an endonuclease that is nearly 97% genetically similar to that of the SARS virus.

According to findings from recent studies, inhibition of the SARS virus endonuclease blocks the virus's pathogenesis, demonstrating a 100% survival rate in preclinical models.

Based on this research, the Covid-19 endonuclease should be a beneficial therapeutic target.

A London AI Hub, a Facility Bigger than the Louvre, Are Among the Newest Footprint Expansions in the Life Sciences Industry – BioSpace

GlaxoSmithKline has opened a new $13 million research hub in London focused on artificial intelligence. The new hub is close to a similar research facility owned by Internet giant Google, which is using AI in its own life sciences research.

The GSK site will draw on the expertise of other AI-focused companies as it moves forward with drug discovery efforts, Pharmaphorum reported. The drug developer intends to rely on AI companies to investigate the gene-related causes of some diseases, as well as to screen for potential drugs, according to the report. GSK's new London facility will become the home of 30 scientists and engineers. The employees based at the facility are expected to begin collaborating with organizations such as Cerebras, the Crick Institute and the Alan Turing Institute.

As GSK moves forward with its new AI-focused research, Chief Executive Officer Emma Walmsley told the London Evening Standard that it was her hope that the new site will become a beacon for talent, attracting machine learning experts and programmers who may traditionally eye Silicon Valley for jobs.

"Using technologies like AI is a critical part of helping us to discover and develop medicines for serious diseases," Walmsley said, according to the report.

In addition to the AI employees in London, GSK also has other employees skilled in the discipline based in San Francisco and Boston.

GSK isn't the only company expanding its footprint. Korea's Samsung Biologics is spending $2 billion on a new manufacturing plant that is expected to become the largest of its kind across the globe. The Wall Street Journal quipped that the Samsung site will be larger than the Louvre, the former royal residence and current museum in Paris that takes up 652,500 square feet.

The Samsung site, which will be approximately 230,000 square meters, will support the manufacturing of biologics used by some of the biggest drugmakers in the world, including Bristol Myers Squibb and GSK. In an interview with The Wall Street Journal, CEO Kim Tae-han said the demand for biologics in the effort to combat COVID-19 highlighted the need for a larger-than-expected facility.

"Covid-19 is giving us more opportunity than crisis," Kim said, according to the report.

There is also growth taking place in the United States. The Boston Business Journal reported that four life science companies are leasing a 214,440-square-foot, four-story lab building in Lexington, Mass. The building will become the home of Dicerna Pharmaceuticals, Frequency Therapeutics, Integral Health and Voyager Therapeutics, the Journal reported.

The four-story building was constructed by King Street Properties with life science companies in mind. Although it was not built for a specific client, it drew interest from prospective tenants across the region, King Street Properties told the Journal.

According to the Journal, the breakdown for the amount of space used by each company in the Lexington site is as follows:


A London AI Hub, a Facility Bigger than the Louvre, Are Among the Newest Footprint Expansions in the Life Sciences Industry - BioSpace

An AI hiring firm promising to be bias-free wants to predict job hopping – MIT Technology Review

The firm in question is Australia-based PredictiveHire, founded in October 2013. It offers a chatbot that asks candidates a series of open-ended questions. It then analyzes their responses to assess job-related personality traits like drive, initiative, and resilience. According to the firm's CEO, Barbara Hyman, its clients are employers that must manage large numbers of applications, such as those in retail, sales, call centers, and health care. As the Cornell study found, it also actively uses promises of fairer hiring in its marketing language. On its home page, it boldly advertises: "Meet Phai. Your co-pilot in hiring. Making interviews SUPER FAST. INCLUSIVE, AT LAST. FINALLY, WITHOUT BIAS."

As we've written before, the idea of bias-free algorithms is highly misleading. But PredictiveHire's latest research is troubling for a different reason. It is focused on building a new machine-learning model that seeks to predict a candidate's likelihood of job hopping, the practice of changing jobs more frequently than an employer desires. The work follows the company's recent peer-reviewed research that looked at how open-ended interview questions correlate with personality (in and of itself a highly contested practice). Because organizational psychologists have already shown a link between personality and job hopping, Hyman says, the company wanted to test whether it could use its existing data for the prediction. "Employee retention is a huge focus for many companies that we work with given the costs of high employee churn, estimated at 16% of the cost of each employee's salary," she adds.

The study used the free-text responses from 45,899 candidates who had used PredictiveHire's chatbot. Applicants had originally been asked five to seven open-ended questions and self-rating questions about their past experience and situational judgment. These included questions meant to tease out traits that studies have previously shown to correlate strongly with job-hopping tendencies, such as being more open to experience, less practical, and less down to earth. The company's researchers claim the model was able to predict job hopping with statistical significance. PredictiveHire's website is already advertising this work as a "flight risk" assessment that is "coming soon."
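
PredictiveHire has not published its implementation, but the general shape of the pipeline the study describes, free-text interview answers scored into a job-hopping prediction, can be sketched with off-the-shelf tools. Everything below, from the toy answers to the choice of TF-IDF features plus logistic regression, is an illustrative assumption rather than the company's actual method, and it assumes scikit-learn is installed.

```python
# Toy sketch: free-text interview answers -> text features -> a
# classifier that outputs a job-hopping probability. Data and model
# choice are illustrative assumptions, not PredictiveHire's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = frequent job changer).
answers = [
    "I love trying new roles and get bored quickly in one place",
    "I stayed eight years and grew within the same team",
    "I always look for the next exciting opportunity elsewhere",
    "I value stability and long-term relationships with colleagues",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(answers, labels)

# Score a new candidate's free-text answer.
prob = model.predict_proba(
    ["I prefer building deep expertise in one organisation"]
)[0][1]
print(f"estimated job-hopping probability: {prob:.2f}")
```

A real deployment would also need the statistical-significance testing the researchers describe, which this sketch omits.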

PredictiveHire's new work is a prime example of what Nathan Newman argues is one of the biggest adverse impacts of big data on labor. Newman, an adjunct associate professor at the John Jay College of Criminal Justice, wrote in a 2017 law paper that beyond the concerns about employment discrimination, big-data analysis had also been used in myriad ways to drive down workers' wages.

Machine-learning-based personality tests, for example, are increasingly being used in hiring to screen out potential employees who have a higher likelihood of agitating for increased wages or supporting unionization. Employers are increasingly monitoring employees' emails, chats, and other data to assess which might leave and to calculate the minimum pay increase needed to make them stay. And algorithmic management systems like Uber's are decentralizing workers away from offices and digital convening spaces that allow them to coordinate with one another and collectively demand better treatment and pay.

None of these examples should be surprising, Newman argued. They are simply a modern manifestation of what employers have historically done to suppress wages by targeting and breaking up union activities. The use of personality assessments in hiring, which dates back to the 1930s in the US, in fact began as a mechanism to weed out people most likely to become labor organizers. The tests became particularly popular in the 1960s and 70s once organizational psychologists had refined them to assess workers for their union sympathies.

In this context, PredictiveHire's flight-risk assessment is just another example of this trend. "Job hopping, or the threat of job hopping," points out Barocas, "is one of the main ways that workers are able to increase their income." The company even built its assessment on personality screenings designed by organizational psychologists.

Barocas doesn't necessarily advocate tossing out the tools altogether. He believes the goal of making hiring work better for everyone is a noble one and could be achieved if regulators mandate greater transparency. "Currently none of them have received rigorous, peer-reviewed evaluation," he says. But if firms were more forthcoming about their practices and submitted their tools for such validation, it could help hold them accountable. It could also help scholars engage more readily with firms to study the tools' impacts on both labor and discrimination.

"Despite all my own work for the past couple of years expressing concerns about this stuff," he says, "I actually believe that a lot of these tools could significantly improve the current state of affairs."


An AI hiring firm promising to be bias-free wants to predict job hopping - MIT Technology Review

Flying high with AI: Alaska Airlines uses artificial intelligence to save time, fuel and money – TechRepublic

How Alaska Airlines executed the perfect artificial intelligence use case. The company has saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions, all from using AI.

Image: Alaska Air

Given the near 85% failure rate in corporate artificial intelligence projects, it was a pleasure to visit with Alaska Airlines, which launched a highly successful AI system that is helping flight dispatchers. I visited with Alaska to see what the "secret sauce" was that made its AI project a success. Here are some tips to help your company execute AI as well as Alaska Airlines has.


Initially, the idea of overhauling flight operations control existed in concept only. "Since the idea was highly conceptual, we didn't want to oversell it to management," said Pasha Saleh, flight operations strategy and innovation director for Alaska Airlines. "Instead, we got Airspace Intelligence, our AI vendor, to visit our network centers so they could observe the problems and build that into their development process. This was well before the trial period, about 2.5 years ago."

Saleh said it was only after several trials of the AI system that his team felt ready to present a concrete business use case to management. "During that presentation, the opportunity immediately clicked," Saleh said. "They could tell this was an industry-changing platform."

Alaska cut its teeth on having to innovate flight plans and operations in harsh Arctic conditions, so it was almost a natural step for Alaska to become an innovator in advancing flight operations with artificial intelligence.


"I could see a host of opportunities to improve the legacy system across the airline industry that could propel the industry into the future," Saleh said. "The first is dynamic mapping. Our Flyways system was built to offer a fully dynamic, real-time '4D' map with relevant information in one, easy-to-understand screen. The information presented includes FAA data feeds, turbulence reports and weather reports, which are all visible on a single, highly detailed map. This allows decision-makers to quickly assess the airspace. The fourth dimension is time, with the novel ability to scroll forward eight-plus hours into the future, helping to identify potential issues with weather or congestion."


The Alaska Flyways system also has built-in monitoring and predictive abilities. The system looks at all scheduled and active flights across the U.S., scanning air traffic systemically rather than focusing on a single flight. It continuously and autonomously evaluates the operational safety, air-traffic-control compliance and efficiency of an airline's planned and active flights. The predictive modeling is what allows Flyways to "look into the future," helping inform how the U.S. airspace will evolve in terms of weather, traffic constraints, airspace closures and more.
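
Flyways' internals are proprietary, so the decision pattern described above can only be sketched in toy form: score the planned route and its alternatives on fuel and forecast conditions, and surface a recommendation only when a better option exists, leaving the final call to the dispatcher. The cost model, weights, and numbers below are all invented for illustration.

```python
# Illustrative sketch (not Flyways itself): compare a planned route
# against alternatives with a simple cost model and recommend only
# when a better option exists. The dispatcher keeps the final call.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    fuel_gal: float          # projected fuel burn (assumed figure)
    weather_penalty: float   # 0 (clear) .. 1 (severe), assumed forecast score

def route_cost(r: Route, weather_weight: float = 500.0) -> float:
    # Blend fuel burn with a weather/turbulence penalty.
    return r.fuel_gal + weather_weight * r.weather_penalty

def recommend(planned: Route, alternatives: list[Route]) -> Route | None:
    best = min(alternatives, key=route_cost, default=None)
    if best is not None and route_cost(best) < route_cost(planned):
        return best          # alert the dispatcher to accept or reject
    return None              # planned route is already the best option

planned = Route("direct", fuel_gal=5200, weather_penalty=0.6)
options = [Route("north deviation", 5350, 0.1),
           Route("south deviation", 5600, 0.5)]
better = recommend(planned, options)
print(better.name if better else "keep planned route")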


"Finally, the system presents recommendations," Saleh said. "When it finds a better route around an issue like weather or turbulence, or simply a more efficient route, Flyways provides actionable recommendations to flight dispatchers. These alerts pop up on the computer screen, and the dispatcher decides whether to accept and implement the recommended solution. In sum: The operations personnel always make the final call. Flyways is constantly learning from this."

Saleh recalled the early days when autopilot was first introduced. "There was fear it would replace pilots," he said. "Obviously, that wasn't the case, and autopilot has allowed pilots to focus on more things of value. It was our hope that Flyways would likewise empower our dispatchers to do the same."


One step Alaska took was to immediately engage its dispatchers in the design and operation of the Flyways system. Dispatchers tested the platform for a six-month trial period and provided feedback for enhancing it. This was followed by on-site, one-on-one training and learning sessions with the Airspace Intelligence team. "The platform also has a chat feature, so our dispatchers could share their suggestions with the Airspace Intelligence team in real time," Saleh said. "Dispatchers could have an idea, and within days, the feature would be live. And because Flyways uses AI, it also learned from our dispatchers, and got better because of it."

While Flyways can speed times to decisions on route planning and other flight operations issues, humans will always have a role in route planning and will always be the final decision-makers. "This is a tool that enhances, rather than replaces, our operations," Saleh said. Because flight dispatchers were so integrally involved with the project's development and testing, they understood its fit as a tool and how it could enhance their work.

"With the end result, I would say satisfaction is an understatement," Saleh said. "We're all blown away by the efficiency and predictability of the platform. But what's more, is that we're seeing an incredible look into the future of more sustainable air travel.

"One of the coolest features to us is that this tool embeds efficiency and sustainability into our operation, which will go a long way in helping us meet our goal of net zero carbon emissions by 2040. We saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions. This was at a time when travel was down because of the pandemic. We anticipate Flyways will soon become the de facto system for all airlines. But it sure has been cool being the first airline in the world to do this!"



Flying high with AI: Alaska Airlines uses artificial intelligence to save time, fuel and money - TechRepublic

Orange Logic Deploys a Second Generation of Machine Learning AI for Digital Asset Management – PR Newswire (press release)

"In the early days, we were all amused by the sheer novelty of A.I. Now with maturity, our users expect more concrete results with almost no errors. Our engineers have built arbitrage mechanisms that provide more confidence than any individual A.I. system could. Concretely, this means less work for our users but more work for the machines. That's ok, though, as the machines don't have to drive the kids back from school," said Karl Facredyn, CEO of Orange Logic.

How it works:

Pass One: Detect The Content

The first pass of A.I. uses two separate machine learning instances. Each instance undergoes its own training and will interpret and produce results for an asset independently of the other machine.

Pass Two: Arbitrage Results

A third A.I. arbitrages the results from the first pass, keeping only the most accurate results from the two previous A.I.s.
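
The press release gives no implementation details, so the arbitration rule below, keep each tag's highest-confidence score and boost tags both pass-one models agree on, is purely an assumed illustration of the two-pass scheme, not Orange Logic's actual mechanism.

```python
# Toy illustration of the two-pass scheme: two independent pass-one
# models tag an asset with confidences; a pass-two arbiter merges them.
# The specific rule (max confidence + agreement bonus) is an assumption.
def arbitrate(results_a, results_b, threshold=0.8):
    """Merge two {label: confidence} dicts from independent models,
    keeping each label's best score and rewarding agreement."""
    merged = {}
    for label in set(results_a) | set(results_b):
        ca = results_a.get(label, 0.0)
        cb = results_b.get(label, 0.0)
        # Agreement between the two instances boosts trust in a label.
        bonus = 0.1 if label in results_a and label in results_b else 0.0
        merged[label] = min(max(ca, cb) + bonus, 1.0)
    # Keep only labels above the confidence threshold.
    return {k: v for k, v in merged.items() if v >= threshold}

a = {"beach": 0.95, "sunset": 0.70, "dog": 0.40}
b = {"beach": 0.90, "sunset": 0.85}
print(arbitrate(a, b))   # "beach" and "sunset" survive; "dog" does not
```

In this toy run, "beach" and "sunset" are kept because at least one model is confident and both models agree, while the low-confidence, single-model "dog" tag is dropped, which is the sense in which only the most accurate pass-one results survive.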

About Orange Logic

Established in 2000, Orange Logic initially operated as a software research company on a mission to innovate approaches in multiple fields, including Digital Asset Management. Today, Orange Logic provides a premier Digital Asset Management solution, CORTEX | DAM, for any team, national or global, looking to efficiently manage and scale its digital media libraries.

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/orange-logic-deploys-a-second-generation-of-machine-learning-ai-for-digital-asset-management-300410174.html

SOURCE Orange Logic

http://www.orangelogic.com


Orange Logic Deploys a Second Generation of Machine Learning AI for Digital Asset Management - PR Newswire (press release)

Global Artificial Intelligence (AI) in Education Market Projected to Reach USD XX.XX billion by 2025- Google, IBM, Pearson, Microsoft, AWS, Nuance,…

The onset of the COVID-19 pandemic has driven major changes in the global growth trajectory of the Artificial Intelligence (AI) in Education market, affecting myriad facets of the market at once. Growth that had been steady in recent years has been disrupted in unprecedented ways, altering the market's normal growth prospects. This research report documents the impact of COVID-19 on that growth trajectory to encourage a planned rebound.

The report presents a thorough analytical review of the growth trends influencing the Artificial Intelligence (AI) in Education market, with the aim of supporting unbiased, timely decisions by the market participants competing in this space. It also details the micro- and macroeconomic factors that have a dominant, long-term impact on the market's trends.

The study encompasses profiles of major companies operating in the Artificial Intelligence (AI) in Education Market. Key players profiled in the report include: Google, IBM, Pearson, Microsoft, AWS, Nuance, Cognizant, Metacog, Quantum Adaptive Learning, Querium, Third Space Learning, Aleks, Blackboard, BridgeU, Carnegie Learning, Century, Cognii, DreamBox Learning, Elemental Path, Fishtree, Jellynote, Jenzabar, Knewton, Luilishuo.

The report is designed to present multidimensional information about current and past market events that have a direct bearing on the onward growth trajectory of the Artificial Intelligence (AI) in Education market.

The following sections shed light on industry trends, encompassing both market drivers and the dominant trends that visibly affect the growth trajectory. The report is a compilation of the major events and developments shaping growth in the Artificial Intelligence (AI) in Education market, and its subsequent sections also provide information on regional segmentation.

Access Complete Report @ https://www.orbismarketreports.com/global-artificial-intelligence-ai-in-education-market-growth-analysis-by-trends-and-forecast-2019-2025?utm_source=Pooja

By product type, the market is primarily split into: Machine Learning and Deep Learning; Natural Language Processing.

By end-user/application, this report covers the following segments: Virtual Facilitators and Learning Environments; Intelligent Tutoring Systems; Content Delivery Systems; Fraud and Risk Management.

Subsequent sections of the report also give readers an overview of the current geographical landscape, covering the regional hubs where market veterans continue to pursue growth-promoting developments in search of competitive advantage, despite the cut-throat competition characterizing the Artificial Intelligence (AI) in Education market. Each of the market players profiled in the report has been analyzed on the basis of its company and product portfolio.

Global Artificial Intelligence (AI) in Education Geographical Segmentation Includes: North America (U.S., Canada, Mexico); Europe (U.K., France, Germany, Spain, Italy, Central & Eastern Europe, CIS); Asia Pacific (China, Japan, South Korea, ASEAN, India, Rest of Asia Pacific); Latin America (Brazil, Rest of L.A.); Middle East and Africa (Turkey, GCC, Rest of Middle East)

Some Major TOC Points: Chapter 1. Report Overview; Chapter 2. Global Growth Trends; Chapter 3. Market Share by Key Players; Chapter 4. Breakdown Data by Type and Application; Chapter 5. Market by End Users/Application; Chapter 6. COVID-19 Outbreak: Artificial Intelligence (AI) in Education Industry Impact; Chapter 7. Opportunity Analysis in COVID-19 Crisis; Chapter 9. Market Driving Force; and many more.


Research Methodology Includes:

The report presents the current segmentation of the Artificial Intelligence (AI) in Education market, highlighting the major revenue-generating segments, such as application, type, and technology, that together drive business returns in the market.

Do You Have Any Query or Specific Requirement? Ask Our Industry Expert @ https://www.orbismarketreports.com/enquiry-before-buying/81119?utm_source=Pooja

Target Audience: * Artificial Intelligence (AI) in Education Manufacturers * Traders, Importers, and Exporters * Raw Material Suppliers and Distributors * Research and Consulting Firms * Government and Research Organizations * Associations and Industry Bodies

Customization Service of the Report:

Orbis Market Reports offers customization of reports to suit your needs. This report can be tailored to satisfy all of your requirements. If you have any queries, get in touch with our sales staff, who will make sure you get a report that fits your needs.

Looking forward to fruitful enterprise relationships with you!

About Us:

With unfailing market-gauging skills, Orbis Market Reports has been excelling in curating tailored business intelligence data across industry verticals. Constantly striving to expand our skill development, our strength lies in dedicated intellectuals with dynamic problem-solving intent, ever willing to mold boundaries to scale heights in market interpretation.

Contact Us:

Hector Costello, Senior Manager, Client Engagements, 4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A. Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155


Global Artificial Intelligence (AI) in Education Market Projected to Reach USD XX.XX billion by 2025- Google, IBM, Pearson, Microsoft, AWS, Nuance,...

Sensei Ag Uses AI Platform and Hydroponic Technology to Grow Food – The Spoon

As the world's population inches toward an estimated 10 billion people by 2050, finding more, not to mention more sustainable, ways to feed people becomes more and more important. High-tech indoor agriculture is one solution getting a lot of attention lately, and recently a new company joined the fast-growing sector. Sensei Ag is the brainchild of Oracle's Larry Ellison and scientist Dr. David Agus, and the company's goal is to grow more greens using hydroponics and AI.

Based on the small Hawaiian island of Lānaʻi, Sensei Ag has built a 100,000 sq. ft. hydroponic pilot greenhouse that is expected to grow 1 million pounds of food per year. I spoke with Sensei Ag CEO Sonia Lo by phone this week, and she described the company as an integrated solution to indoor farming that uses the best practices in computer vision, germination, and seeding to optimize indoor growing.

I asked Lo about how the company incorporates AI into its greenhouses. She said that its AI platform will act as a data engine that harnesses global grower knowledge and will create an algorithm for best practices in indoor growing. She did not go into the specifics of the platform, but did mention that it would be made available to other growers and embedded into each part of the company's agricultural system. Sensei Ag also uses advanced cameras within its greenhouses to identify pests, pathogens, plant health, and uneven growth in crops. The company's goal is to enable platforms within the greenhouse to make growing decisions autonomously, without human intervention.

The COVID-19 pandemic, climate change, and a growing population have forced us to consider the possibility of global food insecurity. In response, companies like Phytoponics, Element Farms, and Gotham Greens all operate indoor farms that use hydroponic techniques to grow leafy greens. Meanwhile, companies like Verdeat, Rise Gardens, and Seedo offer at-home vertical farming products that allow you to grow leafy greens in your living room.

Sensei Ag grows cherry tomatoes, basil, and butter lettuce, and Lo said that they will definitely be expanding the crops they grow. They are currently scouting for a location in California or Nevada for their flagship farm, which will be used as a template for rolling out future farms.



Sensei Ag Uses AI Platform and Hydroponic Technology to Grow Food - The Spoon

Alibaba launches low-cost voice assistant amid AI drive – Reuters

BEIJING China's Alibaba Group Holding Ltd launched on Wednesday a cut-price voice assistant speaker, similar to Amazon.com Inc's "Echo", its first foray into artificially intelligent home devices.

The "Tmall Genie", named after the company's e-commerce platform Tmall, costs 499 yuan ($73.42), significantly less than western counterparts by Amazon and Alphabet Inc's Google, which range from $120 to $180.

These devices are activated by voice commands to perform tasks, such as checking calendars, searching for weather reports, changing music or controlling smart-home devices, using internet connectivity and artificial intelligence.

China's top tech firms have ambitions to become world leaders in artificial intelligence as companies, including Alibaba and Amazon, increasingly compete for the same markets.

Baidu, China's top search engine, which has invested in an artificial intelligence lab with the Chinese government, recently launched a device based on its own Siri-like "Duer OS" system.

The Tmall Genie is currently programmed to use Mandarin as its language and will only be available in China. It is activated when a recognised user says "Tmall Genie" in Chinese.

In a streamed demonstration on Wednesday, engineers ordered the device to buy and deliver some Coca Cola, play music, add credit to a phone and activate a smart humidifier and TV.

The device, which comes in black and white, can also be tasked with purchasing goods from the company's Tmall platform, a function similar to Amazon's Echo device.

Alibaba has invested heavily in offline stores and big data capabilities in an effort to capitalise on the entire supply chain as part of its retail strategy, increasingly drawing comparisons with similar strategies adopted by Amazon.

It recently began rolling out unstaffed brick-and-mortar grocery and coffee shops, using QR codes that users can scan to complete payment on its Alipay app, which has over 450 million users. Amazon launched a similar store concept in December. ($1 = 6.7962 yuan)

(Reporting by Cate Cadell; Editing by Neil Fullick)


Alibaba launches low-cost voice assistant amid AI drive - Reuters

How AI can help payers navigate a coming wave of delayed and deferred care – FierceHealthcare

So far insurers have seen healthcare use plummet since the onset of the COVID-19 pandemic.

But experts are concerned about a wave of deferred care that could hit as patients start to return to doctors and hospitals, putting insurers on the hook for an unexpected surge of healthcare spending.

Artificial intelligence and machine learning could lend insurers a hand.

"We are using AI approaches to try to predict future cost bubbles," said Colt Courtright, chief data and analytics officer at Premera Blue Cross, during a session at Fierce AI Week on Wednesday.


He noted that people are not going in and getting even routine cancer screenings.

"If people have delay in diagnostics and delay in medical care, how is that going to play out in the future when we think about those individuals and the need for clinical programs and the cost, and how do we manage that?" he said.

Insurers have already begun incorporating AI and machine learning in several areas, such as claims management and customer service, and they are also starting to explore how AI can be used to predict healthcare costs and outcomes.

In some ways, the pandemic has accelerated the use of AI and digital technologies in general.

"If we can predict, forecast and personalize care virtually, then why not do that?" said Rajeev Ronanki, senior vice president and chief digital officer for Anthem, during the session.

The pandemic has led to a boom in telemedicine, as the Trump administration has increased flexibility for getting Medicare payments for telehealth and patients have been scared to go to hospitals and physician offices.

But Ronanki said that AI can't just help with predicting healthcare costs; it can also help fix supply chains wracked by the pandemic.

He noted that the global manufacturing supply chain is extremely optimized, especially with just-in-time ordering that doesn't require businesses to hold a large amount of inventory.

But that method doesn't really work during a pandemic, when there is a vast imbalance in supply and demand for personal protective equipment, said Ronanki.

"When you connect all those dots, AI can then be used to configure supply and demand better in anticipation of issues like this," he said.


How AI can help payers navigate a coming wave of delayed and deferred care - FierceHealthcare

This backflipping noodle has a lot to teach us about AI safety – The Verge

AI isn't going to be a threat to humanity because it's evil or cruel; AI will be a threat to humanity because we haven't properly explained what it is we want it to do. Consider the classic paperclip maximizer thought experiment, in which an all-powerful AI is told, simply, "make paperclips." The AI, not constrained by any human morality or reason, does so, eventually transforming all resources on Earth into paperclips and wiping out our species in the process. As with any relationship, when talking to our computers, communication is key.

That's why a new piece of research published yesterday by Google's DeepMind and the Elon Musk-funded OpenAI institute is so interesting. It offers a simple way for humans to give feedback to AI systems, crucially without the instructor needing to know anything about programming or artificial intelligence.

The method is a variation of what's known as reinforcement learning, or RL. With RL systems, a computer learns by trial and error, repeating the same task over and over, while programmers direct its actions by setting certain reward criteria. For example, if you want a computer to learn how to play Atari games (something DeepMind has done in the past), you might make the game's point system the reward criteria. Over time, the algorithm will learn to play in a way that best accrues points, often leading to superhuman performance.

What DeepMind and OpenAI's researchers have done is replace this predefined reward criteria with a much simpler feedback system. Humans are shown an AI performing two versions of the same task and simply tell it which is better. This happens again and again, and eventually the system learns what is expected of it. Think of it like getting an eye test, when you're looking through different lenses and being asked over and over: better... or worse? Here's what that looks like when teaching a computer to play the classic Atari game Q*bert:

This method of feedback is surprisingly effective, and researchers were able to use it to train an AI to play a number of Atari video games, as well as perform simulated robot tasks (like telling an arm to pick up a ball). This better/worse reward function could even be used to program trickier behavior, like teaching a very basic virtual robot how to backflip. That's how we get to the GIF at the top of the page. The behavior you see has been created by watching the Hopper bot jump up and down and telling it "well done" when it gets a bit closer to doing a backflip. Over time, it learns how.
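
At the heart of this approach is fitting a reward function to those better/worse judgments, commonly done with a Bradley-Terry model: the probability that a human prefers clip A over clip B is modeled as sigmoid(r(A) - r(B)), and the reward r is adjusted to match the recorded preferences. Below is a minimal NumPy sketch of that fitting step on synthetic data; the clip features, the linear reward, and the simulated "human" judge are all toy assumptions, not the paper's actual setup.

```python
# Minimal sketch of preference-based reward learning: fit a reward
# from pairwise "which is better?" labels via the Bradley-Terry model,
# P(A preferred over B) = sigmoid(r(A) - r(B)). All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Each behavior clip is summarised by a feature vector; the simulated
# "human" secretly prefers clips with higher true_w . features.
true_w = np.array([2.0, -1.0, 0.5])
clips = rng.normal(size=(200, 3))
pairs = rng.integers(0, 200, size=(500, 2))
prefers_first = (clips[pairs[:, 0]] @ true_w) > (clips[pairs[:, 1]] @ true_w)

w = np.zeros(3)                        # learned reward weights
for _ in range(2000):                  # logistic-loss gradient descent
    ra = clips[pairs[:, 0]] @ w
    rb = clips[pairs[:, 1]] @ w
    p = 1.0 / (1.0 + np.exp(rb - ra))  # P(first clip preferred)
    diff = clips[pairs[:, 0]] - clips[pairs[:, 1]]
    grad = ((p - prefers_first)[:, None] * diff).mean(axis=0)
    w -= 0.5 * grad

# The learned reward should rank clips the way the "human" does.
learned = clips @ w
true = clips @ true_w
agreement = np.mean((learned > np.median(learned)) == (true > np.median(true)))
print(f"ranking agreement: {agreement:.0%}")
```

In the paper itself, the reward model is a neural network trained alongside the RL agent, but the pairwise logistic objective sketched here is the same basic idea.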

Of course, no one is suggesting this method is a cure-all for teaching AI. There are a number of big downsides and limitations to using this sort of feedback. The first is that although it doesn't take much skill on behalf of the human operator, it does take time. For example, in teaching the Hopper bot to backflip, a human was asked to judge its behavior some 900 times, a process that took about an hour. The bot itself had to work through 70 hours of simulated training time, which was sped up artificially.

For some simple tasks, says Oxford Robotics researcher Markus Wulfmeier (who was not involved in this research), it would be quicker for a programmer to simply define what it is they wanted. But, says Wulfmeier, it's increasingly important to render human supervision more effective for AI systems, and this paper represents a small step in the right direction.

DeepMind and OpenAI say pretty much the same: it's a small step, but a promising one, and in the future they're looking to apply it to more and more complex scenarios. Speaking to The Verge over email, DeepMind researcher Jan Leike said: "The setup described in [our paper] already scales from robotic simulations to more complex Atari games, which suggests that the system will scale further." Leike suggests the next step is to test it in more varied 3D environments. You can read the full paper describing the work here.

See more here:

This backflipping noodle has a lot to teach us about AI safety - The Verge

13 ways AI will change your life – TNW

From helping you take care of email to creating personalized online shopping experiences, AI promises to transform the way we live and work.

But with all the hype out there, how do we know which benefits we'll actually see? In order to learn more, I asked a few members of YEC the following question:

What is the top benefit you predict emerging from AI, and do you think the overall benefits will live up to the hype?

The greatest benefit of AI, which is already emerging, is the elimination of repetitive tasks. From chatbots that can free up human staffers' time to work on more complex issues, to scheduling AIs like x.ai that eliminate the need to schedule meetings, AI will ultimately help humans spend more time focusing on creative and high-mental-effort activities. Brittany Hodak, ZinePak

I think the benefits of deeper personalization, in terms of the ability to understand what each customer really wants and is interested in, can be achieved through AI over time. It will live up to the hype because it's already being used to some degree to illustrate how personalization is possible and how AI saves considerable time in getting to a deeper level of understanding of each customer. Angela Ruth, Due

AI will save companies considerable time by doing tasks, collecting data, and providing decisions based on that data much faster than human beings can. It seems quite possible that AI is capable of doing much more than we can on many levels. It's an exciting time to watch the changes that AI brings. Murray Newlands, Sighted

AI will enable us to interact with information as if we're interacting with a knowledgeable individual. We won't have to look at a screen to learn about anything; we can simply converse with AI. Siri is already a reliable personal assistant when it comes to setting reminders and alarm clocks, sending texts, etc. AI will make it possible for us to do virtually anything with voice commands. Andrew Namminga, Andesign

The biggest change that's coming is the move from humans using software as a tool to humans working with software as team members. Software will monitor things, alert humans, and execute basic tasks without human intervention. This will free human time for the really creative or interesting tasks and greatly improve business. AI is going to have a much larger impact than the hype. Brennan White, Cortex

I think the greatest advantage of AI is the automation of tasks that will free up employees to focus on strategic initiatives. On the other hand, I don't think it will be as big as predicted. There are still too many tasks that need a human touch to make them successful. We'll see great benefit from AI in the more mundane areas, but you'll always need the human brain for some tasks. Nicole Munoz, Start Ranking Now

One of the top benefits will be the emergence of personalized medicine. Rather than a one-size-fits-all approach, doctors will be able to tailor treatment on an individual basis and prescribe the right treatments and procedures based on your medical history. As far as living up to the hype: yes, definitely. Though, as with many new technologies, it's more a question of when rather than if. Kevin Yamazaki, Sidebench

No, tomorrow's AI won't live up to the hype. Freeing ordinary folks from repetitive tasks and giving them personal assistants only allows people to busy themselves with other, more complex tasks. The resulting productivity will mark incremental gains for business owners, but nothing on par with the digital revolution, or the industrial one before it. For that, we'll have to wait for the robots. Manpreet Singh, TalkLocal

With each wave of technological advancement, the quality of life for the world overall has increased. With AI, we will have better personalized healthcare, more efficient energy use, enhanced food production capabilities, improved jobs with less mundane work, and more. People will lead longer, higher-quality lives. Adelyn Zhou, TOPBOTS

I believe it will be more like the science fiction movies, where we will maintain and work with the machines that do the work. However, these jobs will come with a level of prestige, as most people will probably live off a government-sponsored social system. With AI and automation replacing so many jobs in the next 20 years, we will have to change social systems in order to adapt. Andy Karuza, FenSens

While AI is critical for self-driving cars, the military, commerce, AI-driven SEO and gaming, it's poised to make the most human impact in medicine and human behavior. Imagine the UN leveraging neural networks and deep learning to discover what helps some communities thrive and others fall behind. Those lessons can then be leveraged by community builders, city planners, grants and projects. Gideon Kimbrell, InList Inc

Artificial-intelligence-based home automation is the future. If everyone in the United States installed Nest or a similar smart thermostat, they would collectively save hundreds of millions of dollars annually in wasted energy, since Nest is able to learn when people are or are not home. Nest and others automatically adjust the temperature, saving on energy use and costs. Kristopher Jones, LSEO.com

Artificial intelligence will do wonders to help automate processes that today take time and manual labor but don't contribute much to the bottom line or to moving a company forward. Automation will allow additional time and resources to be dedicated to what companies need to focus their energy on: customer experience. Andrew Kucheriavy, Intechnic
