Starlink snag forces users to build 'idiotic contraptions' to access Elon Musk's space internet – The Independent

Elon Musk's Starlink space internet is running into an unusual adversary: trees.

The SpaceX satellite internet service entered beta testing in June 2020 for areas at high latitudes such as Seattle, but some users have been experiencing issues.

"We want to get Starlink but the sky above our house is almost completely covered with trees over 40 feet tall," one user posted on the r/Starlink subreddit. "Is it possible to get Starlink to work in our area or are we just out of luck?"

Another expressed similar issues, asking for advice about using mounts to get the Starlink antenna six to 10 feet higher, above the nearby trees, but available masts don't appear to accommodate the dish. One beta tester managed to get above the trees via a tripod mounted to the top of their roof, something that they described as an "idiotic contraption".

In order to set up a Starlink internet connection, users must buy a £439 satellite dish and pay an £84 monthly fee, but they also need a direct line of sight between the dish and the satellite, as well as a 100-degree cone with a 25-degree minimum elevation around the centre of the dish.

This means that trees, neighbouring buildings and other obstacles pose a severe challenge, with one user installing his dish nearly five metres above his chimney.

"If you could see the connection between a Starlink satellite and your Starlink, it would look like a single beam between the two objects. As the satellite moves, the beam also moves. The area within which this beam moves is the field of view," the Starlink website explains.

"Some obstructions are worse than others. Obstructions low in the sky will cause more outages because satellites are in this area of the sky more frequently. The best guidance we can give is to install your Starlink at the highest elevation possible where it is safe to do so, with a clear view of the sky." Starlink also notes that a single tree can interrupt users' service.
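For readers who want to estimate whether their own trees are a problem, the 25-degree minimum elevation can be turned into simple trigonometry. The sketch below is a rough back-of-the-envelope estimate, not official SpaceX guidance: it treats the requirement as "nothing may rise more than 25 degrees above the horizontal as seen from the dish," and ignores the rest of the 100-degree cone geometry.

```python
import math

def min_dish_height(tree_height_ft: float, distance_ft: float,
                    min_elevation_deg: float = 25.0) -> float:
    """Estimate how high a dish must be mounted so a tree at a given
    horizontal distance stays below the minimum elevation angle."""
    # A treetop intrudes into the view cone if the line from the dish
    # to the treetop rises more steeply than the minimum elevation.
    clearance = distance_ft * math.tan(math.radians(min_elevation_deg))
    return max(0.0, tree_height_ft - clearance)

# A 40 ft tree standing 30 ft away would require mounting the dish
# roughly 26 ft up to keep the treetop out of the cone.
print(round(min_dish_height(40, 30), 1))  # → 26.0
```

By this estimate, the 40-foot trees described by the Reddit poster would clear the cone only if they stood roughly 86 feet from a ground-mounted dish, which helps explain the rooftop contraptions.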


As early reviews have pointed out, Starlink provides an app to help users check for obstructions, but the phone needs to be held at around knee height to use it, at odds with the high mounting position that will actually get users the best service. SpaceX did not respond to a request for comment from The Independent before publication.

Starlink, a service which remains in beta and is set to improve with the launch of more satellites, "is not designed for urban environments due to interference from buildings; but in rural areas trees are likely to remain a bigger problem," Mark Jackson, the editor-in-chief of UK internet service provider website ISPreview, told The Independent.

"Some people may be able to get around that by professionally mounting the dish higher up on their roof, although there have also been some questions about the kit's durability in high winds. If you mount it high up, then you might need to take it down for a storm, [which is] not ideal or safe."

"Only time will tell whether they can truly resolve all of these issues, but they do stand a good chance of being able to overcome them. A bigger challenge will be in making the whole thing profitable, while also trying not to completely wreck observational science (astronomy) in the process."

Starlink satellites currently in orbit have disrupted astronomical observations


However, for many users, especially those in the United States, Starlink will still be a compelling alternative to traditional internet providers due to long-running issues with service and competition.

Phone companies originally used existing wires to provide internet service, and were required by law to lease those wires to competitors; but in 1996, the Telecommunications Act made it easier for cable companies to consolidate, and in 2005 the leasing requirement was removed. This meant that companies "were basically trading off areas so they wouldn't compete," according to University of Virginia media studies professor Christopher Ali.

Alongside policy issues, population density poses problems for the internet experience in the United States.

"I wouldn't characterise US internet as bad so much as I would characterise it as inconsistent," said Jamie Steven, chief innovation officer at Speedtest creator Ookla. "And while cities and populated areas have great access, this is lacking in rural and remote areas."


"The US's lower population density is a big reason, especially in the West. It can be very expensive to run fiber optic networks for communities with only a few hundred residents. New satellite options such as Starlink provide a desirable alternative to the aging copper-based connectivity (DSL & cable) in those communities," Steven told The Independent.

"I'm a Starlink beta customer and live in a heavily wooded rural area. I've had some minor problems with obstructions from the very tall trees in my yard, but overall the service is a significant and welcome improvement over the unreliable DSL service I had previously."


Squid, cotton and ‘water bears’ among cargo headed to the International Space Station – WTSP.com

June 3 will mark the 22nd SpaceX cargo resupply mission of scientific research and technology demonstrations.

CAPE CANAVERAL, Fla. – It's that time again. The International Space Station is in need of a delivery, and SpaceX is ready to lend a hand with its 22nd cargo resupply mission.

A collection of scientific research and technology demonstrations will fly to the orbiting laboratory on SpaceX's upgraded Dragon spacecraft on June 3.

The commercial space company is targeting a 1:29 p.m. liftoff from Kennedy Space Center's historic Launch Complex 39A.

Among the dozens of experiments heading into space to support the Expedition 65 and 66 crews are tardigrades, or "water bears," which NASA says can tolerate more extreme environments than most life forms.

Research involving the organisms will advance understanding of the stress factors that affect astronauts in space and allow researchers to develop countermeasures.

"Spaceflight can be a really challenging environment for organisms, including humans, who have evolved to the conditions on Earth," said principal investigator Thomas Boothby. "One of the things we are really keen to do is understand how tardigrades are surviving and reproducing in these environments and whether we can learn anything about the tricks that they are using and adapt them to safeguard astronauts."

Joining the microscopic tardigrades will be the similarly tiny symbiotic squid, which will interact with microbes to help develop protective measures to preserve astronaut health on long-duration missions in space.

Researchers will also be looking to give cotton a boost by examining stressors that can toughen the material-producing plants.

"We are hoping to reveal features of root system formation that can be targeted by breeders and scientists to improve characteristics such as drought resistance or nutrient uptake, both key factors in the environmental impacts of modern agriculture," principal investigator Simon Gilroy said. "Improved understanding of cotton root systems and associated gene expression could enable development of more robust cotton plants and reduce water and pesticide use."

NASA noted that a portable ultrasound device, Pilote, a tissue chip and new solar panels to help increase the energy available for activities at the ISS will also join the cargo headed to the orbiting laboratory.

You can catch the mission live by tuning in to 10 Tampa Bay, where we will be streaming on Facebook and YouTube.


SpaceX cargo mission to carry water bears, baby squids to space station – UPI News

May 26 (UPI) -- SpaceX's 22nd cargo resupply mission, slated to launch no earlier than June 3, will see several unique science experiments -- involving water bears, baby squids and kidney stones -- ferried to the International Space Station.

Like so many experiments before them, the bulk of the experimental setups being carried aboard SpaceX CRS-22 are designed to illuminate the health risks facing astronauts.

One of these will use tardigrades, or water bears, to do so.

"Tardigrades are renowned for their ability to withstand a number of extreme stresses," Thomas Boothby, an assistant professor of molecular biology at the University of Wyoming and the principal investigator on the Cell Science-04 experiment, told reporters during a press call on Wednesday.

Tardigrades can survive temperatures near absolute zero and above boiling, withstand intense air and water pressures and persist in a dormant state for up to 30 years without food and water.

"Importantly for this mission, they've been shown to survive and reproduce during spaceflight and can even survive prolonged exposure to the vacuum of outer space," Boothby said.

For the experiment, scientists are planning to conduct both short-term and long-term tests, exposing single generations of water bears, as well as multiple generations, to the stresses of spaceflight, such as microgravity and radiation.

"We're going to be recovering those animals and looking at what genes they've turned on and off while aboard the ISS to get a sense of how they're coping with the stresses," Boothby said.

A separate experiment, called UMAMI, will use another tiny creature, baby bobtail squids, Euprymna scolopes, to study the impacts of spaceflight and microgravity on animal-microbe symbiosis.

"I'm very interested in how beneficial microbes communicate with animal tissues in space," said UMAMI principal investigator Jamie Foster.

"It's really important to understand how those microbes and their relationship with tissue and each other cause the microbiome to change in the space environment," Foster said.

Healthy adult squid maintain a symbiotic relationship with the bacterium Vibrio fischeri. For the UMAMI experiment, several dozen immature bobtail squid -- all of which will be bacteria-free -- will be flown to the International Space Station.

Once onboard, the bacteria will be introduced to the squid paralarvae and allowed to colonize their light organs, an extra set of primitive eyes.

Researchers will monitor the onset of symbiosis for 12 hours before freezing the squid for tissue analysis back on Earth.

Astronauts face a variety of health challenges in space, one of which is an increased risk of kidney stones.

"On Earth, gravity helps maintain bone structure, and so bones tend to demineralize in space," said principal investigator Ed Kelly, associate professor of pharmaceutics at the University of Washington.

"Where does that calcium phosphate and magnesium go? It goes to the kidneys. This is one of the big reasons why astronauts are so susceptible to kidney stones," Kelly said.

But studying kidney stone formation on Earth is complicated by gravity. In lab experiments, kidney stones tend to sink to the bottom of solutions as they grow, complicating observation and measurement efforts.

Kelly and his research partners hope the kidney cell models -- installed on 3D tissue chips -- will help them solve this problem.

"We think the microgravity environment will allow us to model how kidney stones form and what can be done to prevent them from forming," Kelly said.

Additionally, SpaceX will carry experiments testing robotic arm and solar panel technologies, as well as more resilient cotton crop varieties, according to NASA.

20 years aboard the International Space Station (photo gallery)


UFC’s Steven Peterson boycotts fighting in Texas, says ‘commission is f*cked’ – MMA Junkie

UFC featherweight Steven Peterson is preparing to take on Chase Hooper at UFC 263 on June 12 in an exciting matchup that will be in front of a full crowd in Glendale, Ariz.

The UFC has already resumed packing arenas for pay-per-view events, with its most recent outing in Houston for UFC 262. But even though Peterson (18-9 MMA, 2-3 UFC) lives in the state and trains out of Fortis MMA in Dallas, don't expect to see the hard-hitting featherweight compete in the Lone Star State in the future.

During an interview with MMA Island, Peterson explained why he will not be taking a fight in Texas, should the UFC come back before the end of the year.

"I'll gladly attend and hopefully corner some of our guys, but I will not be fighting in Texas any time soon," Peterson said. "I am boycotting the Texas commission."

Putting it bluntly, Peterson stated, "The Texas commission is f*cked, man."

He continued, "I have nine losses in my career, five of which I would highly contest – sit with you, watch the tape and argue with you how I won the fight. Those decisions were all lost in Texas."

The most recent example occurred in 2019 when Peterson took on Alex Caceres at UFC on ESPN 4 in San Antonio. Caceres won a unanimous decision that evening, despite some in the media believing the result could have gone the other way. "I thought I won that fight, hands down," Peterson said.

While Peterson is soured on Texas for the time being, he is happy to compete in other locations until changes are made.

"I'm not going to put myself in the line of fire if I don't have to, so I'll be sitting out any Texas shows, and hopefully something changes with the commission and we get things worked out over here," he said.


Keston Hiura returns to Brewers, Alec Bettinger optioned and Jace Peterson DFA'd – Brew Crew Ball

Keston Hiura will make his return to the Milwaukee Brewers tonight after officially being called up following a couple of weeks away from the big league roster.

The move was rumored earlier Monday and made sense, considering the Brewers are set to face a tough lefty in Blake Snell against San Diego tonight and the team did not have a viable right-handed first base option on the roster.

Alec Bettinger was sent down to make room on the active roster, while Jace Peterson was also activated from the 10-day injured list and DFA'd.

Hiura was given some time away from baseball before starting his stint in Nashville, which included spending time with his mother, who was diagnosed with cancer just as spring training was getting underway.

As anyone who has had a loved one fighting cancer can attest, that can take a lot of focus and energy away from the day job. But Hiura says it was a good visit to clear his head, and it seems to have worked out for him on the baseball field as well.

He was named the Sounds player of the week after going 10-for-16 with 3 home runs, 2 doubles, 2 steals and 7 RBI.

In nine games at Triple-A, Hiura still struck out in many of the outs he made, but he clearly started hitting the ball hard more consistently, something he couldn't seem to do in the majors to start the year, when he posted a .513 OPS.

Hiura is hitting 5th for the Brewers tonight against San Diego, as the Brewers did their best to stack the lineup with righties against Snell.


Michigan State Baseball: Bailey Peterson wins Big Ten Player of the Week – The Only Colors

Fresh off of Michigan State's two-games-to-one series win over the Rutgers Scarlet Knights over the weekend, the MSU baseball team received more good news on Tuesday, as senior first baseman Bailey Peterson was named the Big Ten Conference Baseball Co-Player of the Week along with Kellen Sarver of the University of Illinois.

Peterson, a native of Grandville, Michigan, was a machine on offense in Piscataway, New Jersey last weekend. Overall, he went 6-for-14 with eight RBIs, five runs and three home runs for the weekend. In Sunday's 14-8 win over Rutgers, Peterson got four hits in five at-bats, including a single, a double and two home runs.

This award is the first of Peterson's career, and the second Big Ten Player of the Week honor for the Spartans in 2021. Sophomore right fielder Zaid Walker won the award on March 10. In addition, freshman pitcher Nick Powers received the Co-Freshman of the Week award on March 30, and senior pitcher Sam Benschoter was bestowed the Co-Pitcher of the Week honor on April 27.

Peterson was also featured on the Collegiate Baseball National Players of the Week list, announced by the publication on Monday. He also becomes the second Spartan to receive this honor in 2021 as pitcher Sam Benschoter made the list in late April as well.

Peterson and the Spartans (17-24) return to action this weekend in the final series of the year against the Iowa Hawkeyes (23-18). The action starts on Friday evening and wraps up on Sunday, which will also be Senior Day. Peterson will participate in the Senior Day celebration as he plays his final game in the Green and White.


Anthropic is the new AI research outfit from OpenAI's Dario Amodei, and it has $124M to burn – TechCrunch

As AI has grown from a menagerie of research projects to include a handful of titanic, industry-powering models like GPT-3, there is a need for the sector to evolve, or so thinks Dario Amodei, former VP of research at OpenAI, who struck out on his own to create a new company a few months ago. Anthropic, as it's called, was founded with his sister Daniela, and its goal is to create large-scale AI systems that are "steerable, interpretable, and robust."

The challenge the siblings Amodei are tackling is simply that these AI models, while incredibly powerful, are not well understood. GPT-3, which they worked on, is an astonishingly versatile language system that can produce extremely convincing text in practically any style, and on any topic.

But say you had it generate rhyming couplets with Shakespeare and Pope as examples. How does it do it? What is it thinking? Which knob would you tweak, which dial would you turn, to make it more melancholy, less romantic, or limit its diction and lexicon in specific ways? Certainly there are parameters to change here and there, but really no one knows exactly how this extremely convincing language sausage is being made.

It's one thing to not know when an AI model is generating poetry, quite another when the model is watching a department store for suspicious behavior, or fetching legal precedents for a judge about to pass down a sentence. Today the general rule is: the more powerful the system, the harder it is to explain its actions. That's not exactly a good trend.

"Large, general systems of today can have significant benefits, but can also be unpredictable, unreliable, and opaque: our goal is to make progress on these issues," reads the company's self-description. "For now, we're primarily focused on research towards these goals; down the road, we foresee many opportunities for our work to create value commercially and for public benefit."

The goal seems to be to integrate safety principles into the existing priority system of AI development, which generally favors efficiency and power. As in any other industry, it's easier and more effective to incorporate something from the beginning than to bolt it on at the end. Attempting to make some of the biggest models out there able to be picked apart and understood may be more work than building them in the first place. Anthropic seems to be starting fresh.

"Anthropic's goal is to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people," said Dario Amodei, CEO of the new venture, in a short post announcing the company and its $124 million in funding.

That funding, by the way, is as star-studded as you might expect. It was led by Skype co-founder Jaan Tallinn, and included James McClave, Dustin Moskovitz, Eric Schmidt and the Center for Emerging Risk Research, among others.

The company is a public benefit corporation, and the plan for now, as the limited information on the site suggests, is to remain heads-down on researching these fundamental questions of how to make large models more tractable and interpretable. We can expect more information later this year, perhaps, as the mission and team coalesces and initial results pan out.

The name, incidentally, is adjacent to "anthropocentric," and concerns relevancy to human experience or existence. Perhaps it derives from the anthropic principle, the notion that intelligent life is possible in the universe because, well, we're here. If intelligence is inevitable under the right conditions, the company just has to create those conditions.


AI Is Part of Marketing. Are You Up to Speed? – CMSWire


Artificial intelligence (AI) is fast becoming as fundamental to customer experience (CX) as CX has become to the business.

According to IDC, the global AI market is poised to break the $500 billion mark by 2024. AI is surging as data size and diversity continue to grow and the cloud becomes a feasible option for quickly and economically scaling compute power and data storage.

AI and its subcomponents (machine learning, computer vision, natural language processing and even forecasting) are being woven into the analytics arsenal of marketing departments at organizations across industries. Marketers today use AI at different levels: AI-enhanced campaigns to build brand preference; AI-enabled smart agents to continuously engage consumers; and AI-powered marketing technologies to drive efficiency.

"Start by doing what's necessary; then do what's possible; and suddenly you are doing the impossible." This inspirational maxim is also an effective principle for marketing and CX pros to help build out AI capabilities.

To explore how AI can be used to enhance marketing, help marketers better understand their customers and deliver a great customer experience, start with high-friction areas:


The three high-friction areas on their own are solid starting points for improving customer experience. Prioritize use cases that check two or three of these areas to compound the benefits even more.

But how can you differentiate between gimmicks and actual transformative use cases that deliver both customer and business value?

AI marketing initiatives can fall into three interrelated layers:

Use video or image analytics to make product recommendations based on facial recognition. Or enable redemption of loyalty points based on voice recognition and natural language processing (NLP).

For example, Louis Vuitton uses facial recognition within the Baidu ecommerce platform to match consumers with fragrances.

Discount supermarket chain Lidl uses NLP in its conversational chatbot Margot on Facebook Messenger. Margot helps shoppers get the best out of its wine selection.


Conversational AI can provide shortcuts to content (e.g., how-to tutorials) and status updates to consumer accounts (e.g., points balance or orders). Pre-trained vertical AI agents can assist with product research (e.g., comparison tools for financial investments, apparel, etc.).

For example, the Bank of America chatbot Erica has served more than 10 million users and is able to understand close to 500,000 question variations.

1-800 Flowers has an AI-powered concierge named Gwyn (Gifts When You Need). Gwyn can successfully reply to customer questions, help customers find the best gifts and assist them through the entire shopping experience for individually tailored offers.

Use AI's optimization capabilities to improve marketing efficiency and continuously lift marketing performance over the long term.

Machine learning and optimization models can automate audience targeting and personalized product recommendations over multiple media channels. Forecasting and optimization techniques can tailor campaigns on-the-fly, and even discover new segments.

Coca-Cola installed AI-powered vending machines that use the Coca-Cola mobile app in tandem with facial recognition in some countries to deliver customized experiences. These new vending machines increased channel revenue by 6%, with 15% fewer restocking trips owing to personalization and better stock management and inventory optimization.


AI will be an essential part of modern marketing. Marketers must ramp up their AIQ (artificial intelligence quotient) to learn from, adapt to, collaborate with and generate business results from AI. AI will continue to replace mundane, repetitive marketing tasks. Human skills like creativity, communication, collaboration, empathy and judgement will become increasingly important. Already, new roles such as data artists and data storytellers are emerging, signaling the beginning of this transformation.

Marketers are under pressure to deliver ROI and often find it difficult to justify big AI investments. Take an experimental approach and increase the number of variables optimized simultaneously: web design, incentives, messages, timing and so on. To demonstrate ROI effectively, start with a small campaign or project with clear success metrics. Focus first on two areas that have clear goals, for example, increasing customer service response rates by a specified percentage.

First, use existing value-based metrics to see if AI improves marketing performance and delivers business outcomes. Common metrics today include cost per acquisition, sales conversion rates, customer lifetime value and return on marketing investment. Second, determine whether AI increases the efficiency of marketing measurement. Don't measure the success of AI; measure the success of your marketing initiatives.

AI promises to enhance every aspect of customer experience. To prevent disillusionment, marketing leaders must pursue AI in the context of brand differentiation, profitable growth and efficiency gains.

Wilson Raj is the Global Director of Customer Intelligence at SAS, responsible for the marketing of SAS AI-powered marketing solutions. Data-inspired and creatively-driven, Raj has built brand value, engagement and loyalty through expertise in strategy and analytical marketing.

Read more:

AI Is Part of Marketing. Are You Up to Speed? - CMSWire

AI is learning how to create itself – MIT Technology Review

But there's another crucial observation here. Intelligence was never an endpoint for evolution, something to aim for. Instead, it emerged in many different forms from countless tiny solutions to challenges that allowed living things to survive and take on future challenges. Intelligence is the current high point in an ongoing and open-ended process. In this sense, evolution is quite different from algorithms as people typically think of them: as a means to an end.

It's this open-endedness, glimpsed in the apparently aimless sequence of challenges generated by POET, that Clune and others believe could lead to new kinds of AI. For decades AI researchers have tried to build algorithms to mimic human intelligence, but the real breakthrough may come from building algorithms that try to mimic the open-ended problem-solving of evolution, and sitting back to watch what emerges.

Researchers are already using machine learning on itself, training it to find solutions to some of the field's hardest problems, such as how to make machines that can learn more than one task at a time or cope with situations they have not encountered before. Some now think that taking this approach and running with it might be the best path to artificial general intelligence. "We could start an algorithm that initially does not have much intelligence inside it, and watch it bootstrap itself all the way up potentially to AGI," Clune says.

The truth is that for now, AGI remains a fantasy. But that's largely because nobody knows how to make it. Advances in AI are piecemeal and carried out by humans, with progress typically involving tweaks to existing techniques or algorithms, yielding incremental leaps in performance or accuracy. Clune characterizes these efforts as attempts to discover the building blocks for artificial intelligence without knowing what you're looking for or how many blocks you'll need. And that's just the start. "At some point, we have to take on the Herculean task of putting them all together," he says.

Asking AI to find and assemble those building blocks for us is a paradigm shift. It's saying: we want to create an intelligent machine, but we don't care what it might look like; just give us whatever works.

Even if AGI is never achieved, the self-teaching approach may still change what sorts of AI are created. The world needs more than a very good Go player, says Clune. For him, creating a supersmart machine means building a system that invents its own challenges, solves them, and then invents new ones. POET is a tiny glimpse of this in action. Clune imagines a machine that teaches a bot to walk, then to play hopscotch, then maybe to play Go. "Then maybe it learns math puzzles and starts inventing its own challenges," he says. "The system continuously innovates, and the sky's the limit in terms of where it might go."

On Thinking Machines, Machine Learning, And How AI Took Over Statistics – Forbes

Sixty-five years ago, Arthur Samuel went on TV to show the world how the IBM 701 plays checkers. He was interviewed on a live morning news program, sitting remotely at the 701, with Will Rogers Jr. at the TV studio, together with a checkers expert who played with the computer for about an hour. Three years later, in 1959, Samuel published "Some Studies in Machine Learning Using the Game of Checkers" in the IBM Journal of Research and Development, coining the term "machine learning." He defined it as "the programming of a digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning."

On February 24, 1956, Arthur Samuel's checkers program, developed for play on the IBM 701, was demonstrated to the public on television.

A few months after Samuel's TV appearance, ten computer scientists convened at Dartmouth College in Hanover, NH, for the first-ever workshop on artificial intelligence, defined a year earlier by John McCarthy in the proposal for the workshop as "making a machine behave in ways that would be called intelligent if a human were so behaving."

In some circles of the emerging discipline of computer science, there was no doubt about the human-like nature of the machines they were creating. Already in 1949, computer pioneer Edmund Berkeley wrote in Giant Brains, or Machines That Think: "Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill... These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves... A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think."

Maurice Wilkes, a prominent developer of one of those giant brains, retorted in 1953: "Berkeley's definition of what is meant by a thinking machine appears to be so wide as to miss the essential point of interest in the question, 'Can machines think?'" Wilkes attributed this not-very-good human thinking to "a desire to believe that a machine can be something more than a machine." In the same issue of the Proceedings of the I.R.E. that included Wilkes' article, Samuel published "Computing Bit by Bit, or Digital Computers Made Easy." Reacting to what he called the "fuzzy sensationalism" of the popular press regarding the ability of existing digital computers to think, he wrote: "The digital computer can and does relieve man of much of the burdensome detail of numerical calculations and of related logical operations, but perhaps it is more a matter of definition than fact as to whether this constitutes thinking."

Samuel's polite but clear position led Marvin Minsky in 1961 to single him out, according to Eric Weiss, as one of the few leaders in the field of artificial intelligence who believed computers could not think and probably never would. Indeed, he pursued his lifelong hobby of developing checkers-playing computer programs and his professional interest in machine learning not out of a desire to play God but because of the specific trajectory and coincidences of his career. After working for 18 years at Bell Telephone Laboratories and becoming an internationally recognized authority on microwave tubes, he decided at age 45 to move on, as he was certain, says Weiss in his review of Samuel's life and work, that vacuum tubes would soon be replaced by something else.

The University of Illinois came calling, asking him to revitalize their EE graduate research program. In 1948, the project to build the university's first computer was running out of money. Samuel thought (as he recalled in an unpublished autobiography cited by Weiss) that it ought to be dead easy to program a computer to play checkers, and that if their program could beat a checkers world champion, the attention it generated would also generate the required funds.

The next year, Samuel started his 17-year tenure with IBM, working as a senior engineer on the team developing the IBM 701, IBM's first mass-produced scientific computer. The chief architect of the entire IBM 700 series was Nathaniel Rochester, later one of the participants in the Dartmouth AI workshop. Rochester was trying to decide the word length and order structure of the IBM 701, and Samuel decided to rewrite his checkers-playing program using the order structure that Rochester was proposing. In his autobiography, Samuel recalled: "I was a bit fearful that everyone in IBM would consider checker-playing program too trivial a matter, so I decided that I would concentrate on the learning aspects of the program. Thus, more or less by accident, I became one of the first people to do any serious programing for the IBM 701 and certainly one of the very first to work in the general field later to become known as artificial intelligence. In fact, I became so intrigued with this general problem of writing a program that would appear to exhibit intelligence that it was to occupy my thoughts almost every free moment during the entire duration of my employment by IBM and indeed for some years beyond."

But in the early days of computing, IBM did not want to fan the popular fears that man was losing out to machines, so the company did not talk about artificial intelligence publicly, observed Samuel later. Salesmen were not supposed to scare customers with speculation about future computer accomplishments. So IBM, among other activities aimed at dispelling the notion that computers were smarter than humans, sponsored the movie Desk Set, featuring a methods engineer (Spencer Tracy) who installs the fictional and ominous-looking electronic brain EMERAC, and a corporate librarian (Katharine Hepburn) telling her anxious colleagues in the research department: "They can't build a machine to do our job; there are too many cross-references in this place." By the end of the movie, she wins both a match with the computer and the engineer's heart.

In his 1959 paper, Samuel described his approach to machine learning as particularly suited for very specific tasks, in distinction to the neural-net approach, which he thought could lead to the development of general-purpose learning machines. Samuel's program searched the computer's memory to find examples of checkerboard positions and selected the moves that had previously been successful. The computer "plays by looking ahead a few moves and by evaluating the resulting board positions much as a human player might do," wrote Samuel.
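
The look-ahead idea Samuel describes can be sketched as a minimax search: look a few plies ahead, then score the leaf positions with a static evaluation function. The toy "game" below (players alternately add 1 or 2 to a running total, capped at 10, and the maximizer prefers larger totals) is invented purely so the sketch is runnable; Samuel's actual program evaluated weighted checkerboard features, not numbers.

```python
def minimax(state, depth, maximizing, moves_fn, eval_fn):
    """Score `state` by looking ahead `depth` plies, as a human player might."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state)  # static evaluation at the search horizon
    scores = [minimax(m, depth - 1, not maximizing, moves_fn, eval_fn)
              for m in moves]
    return max(scores) if maximizing else min(scores)

# Hypothetical toy game: state is an integer; each move adds 1 or 2, up to 10.
moves = lambda s: [s + 1, s + 2] if s < 10 else []
score = lambda s: s  # the maximizer prefers larger totals

# Pick the move whose resulting position scores best after a 3-ply look-ahead.
best = max(moves(0), key=lambda m: minimax(m, 3, False, moves, score))
```

The same skeleton applies to checkers directly: swap in a move generator over board states and an evaluation function over piece counts and positions.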

His approach to machine learning "still would work pretty well as a description of what's known as reinforcement learning, one of the basket of machine-learning techniques that has revitalized the field of artificial intelligence in recent years," wrote Alexis Madrigal in a 2017 survey of checkers-playing computer programs. One of the authors of the book Reinforcement Learning, Rich Sutton, called Samuel's research "the earliest work that's now viewed as directly relevant to the current AI enterprise."

The current AI enterprise is skewed more in favor of artificial neural networks (or deep learning) than reinforcement learning, although Google's DeepMind famously combined the two approaches in its Go-playing program, which beat Go master Lee Sedol in a five-game match in 2016.

Already popular among computer scientists in Samuel's time (in 1951, Marvin Minsky and Dean Edmonds built SNARC, the Stochastic Neural Analog Reinforcement Calculator, the first artificial neural network, using 3,000 vacuum tubes to simulate a network of 40 neurons), the neural networks approach was inspired by a 1943 paper by Warren S. McCulloch and Walter Pitts in which they described networks of idealized and simplified artificial neurons and how they might perform simple logical functions, leading to the popular (and very misleading) description of today's artificial neural network-based AI as "mimicking the brain."

Over the years, the popularity of neural networks has gone up and down a number of hype cycles, starting with the Perceptron, a two-layer artificial neural network that was considered by the U.S. Navy, according to a 1958 New York Times report, to be "the embryo of an electronic computer that ... will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." In addition to failing to meet these lofty expectations, neural networks faced fierce competition from a growing cohort of computer scientists (including Minsky) who preferred the manipulation of symbols, rather than computational statistics, as the better path to creating a human-like machine.

Inflated expectations meeting the trough of disillusionment, no matter what approach was taken, resulted in at least two gloomy AI winters. But with the invention and successful application of backpropagation as a way to overcome the limitations of simple neural networks, sophisticated statistical analysis was again in the ascendant, now cleverly labeled as deep learning. In 1988, R. Colin Johnson and Chappell Brown published Cognizers: Neural Networks and Machines That Think, proclaiming that neural networks "can actually learn to recognize objects and understand speech just like the human brain and, best of all, they won't need the rules, programming, or high-priced knowledge-engineering services that conventional artificial intelligence systems require... Cognizers could very well revolutionize our society and will inevitably lead to a new understanding of our own cognition."

Johnson and Brown predicted that as early as the next two years, neural networks would be the tool of choice for analyzing the contents of a large database. This prediction, and no doubt similar ones in the popular press and professional journals, must have sounded the alarm among those who did this type of analysis for a living in academia and in large corporations, having no clue what the computer scientists were talking about.

In Neural Networks and Statistical Models, Warren Sarle explained in 1994 to his worried and confused fellow statisticians that the ominous-sounding artificial neural networks "are nothing more than nonlinear regression and discriminant models that can be implemented with standard statistical software... Like many statistical methods, [artificial neural networks] are capable of processing vast amounts of data and making predictions that are sometimes surprisingly accurate; this does not make them intelligent in the usual sense of the word. Artificial neural networks 'learn' in much the same way that many statistical algorithms do estimation, but usually much more slowly than statistical algorithms. If artificial neural networks are intelligent, then many statistical methods must also be considered intelligent."
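
Sarle's point is easy to demonstrate: a single sigmoid "neuron" computes exactly the same function as a logistic-regression model; only the jargon differs. The weights and inputs below are made up for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, weights, bias):
    """The neural engineer's 'unit': weighted sum passed through a sigmoid."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

def logistic_regression(x, coefs, intercept):
    """The statistician's logistic model: the identical formula."""
    return sigmoid(sum(c * xi for c, xi in zip(coefs, x)) + intercept)

# With the same parameters, the two produce identical predictions.
x = [0.5, -1.2, 3.0]
w, b = [0.8, -0.3, 0.1], 0.2
assert abs(neuron(x, w, b) - logistic_regression(x, w, b)) < 1e-12
```

Stacking such units with nonlinear hidden layers gives nonlinear regression, which is exactly how Sarle's dictionary translates "multilayer perceptron" for statisticians.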

Sarle provided his colleagues with a handy dictionary translating the terms used by neural engineers into the language of statisticians (e.g., "features" are variables). In anticipation of today's data science (a more recent assault led by computer programmers) and predictions of algorithms replacing statisticians (and even scientists), Sarle reassured his fellow statisticians that no black box can substitute for human intelligence: "Neural engineers want their networks to be black boxes requiring no human intervention: data in, predictions out. The marketing hype claims that neural networks can be used with no experience and automatically learn whatever is required; this, of course, is nonsense. Doing a simple linear regression requires a nontrivial amount of statistical expertise."

In a footnote to his mention of neural networks in his 1959 paper, Samuel cited Warren S. McCulloch, who had compared the digital computer to the nervous system of a flatworm, and declared: "To extend this comparison to the situation under discussion would be unfair to the worm," since its nervous system is actually quite highly organized compared to [the most advanced artificial neural networks of the day]. In 2019, Facebook's top AI researcher and Turing Award winner Yann LeCun declared that "our best AI systems have less common sense than a house cat." In the sixty years since Samuel first published his seminal machine learning work, artificial intelligence has advanced from being not as smart as a flatworm to having less common sense than a house cat.

Supercharging AI to leave the productivity slump in the dust – Advanced Manufacturing

Artificial intelligence already helps individual factories improve production, safety, efficiency and other metrics while lowering costs. Marrying AI and cloud technology can supercharge those benefits and offer manufacturers faster time to value, better visibility into supply chains and dynamic design, proponents say.

AI in the cloud could put an end to the manufacturing labor productivity slump. But where to turn for lessons? Try hedge funds, Formula One racing and cucumber-sorting operations.

AI has long been attractive in manufacturing for solving complex problems and going beyond alarms to intelligent insights, said UptimeAI CEO Jagadish Gattu. Manufacturing operations and equipment are complex, he said.

Moving to the cloud (software and services that run on the internet instead of only on a company's own network) supercharges AI in several ways.

"We're living in a productivity slump," Baber Farooq, head of product strategy for procurement solutions at SAP, said in reference to often-cited statistics from the U.S. Department of Labor's Bureau of Labor Statistics, as well as an often-quoted Deloitte/MAPI study. Many factors are causing that. "For the last 15 years, the adoption of technology in manufacturing has not been happening at the pace that we need to reverse that [productivity slump] trend."

Over the last 15 years, cloud processes have come to maturity and helped drive productivity for IT.

"The promise of cloud computing has been the promise of scalability: how can I scale to multiple sites, multiple customers?" he said. "AI is the continuation of that growth." AI holds the key for manufacturing to capitalize on the gains cloud computing has brought to IT.

AI in the cloud helps manufacturers save money, adapt to change and become aware of emerging market trends, said Oliver Christie, head of Voltare Consulting. "Because we have access to AI tools, we can reduce costs, increase quality, speed up time to end result and optimize how products are built," he said. "We're not changing the product, just changing how it's made. We're able to adapt more quickly to new situations, such as changes in tariffs. Obviously, the pandemic has changed everything. Manufacturing needs to be aware of new market situations and new market trends."

Manufacturers are already using artificial intelligence, but combining AI with the cloud provides access to a wide variety of excellent algorithms that are being constantly updated, Christie said.

"The biggest benefit in the cloud is the marketplace of algorithms, with access to a huge number of different algorithms taking different approaches to your data," he said. "You can pick the best one out there from around the world."

Combining AI with the cloud allows manufacturers to open a fire hose of data and glean benefits, said Joe Gerstl, director of product management for manufacturing execution systems (MES) at GE Digital.

"When you're dealing with data on premises, you have limitations," he said. "You have only so much space. You don't have time to process it all. In the cloud, you can have so much data, not just big data but very rich, raw, and very thick. When I say rich, it's raw data. It is all the data. We have customers that have 10 years of data in their manufacturing data cloud. It's not summarized or aggregated unless you want it to be."

"Because you have so much space and it's so cheap to store this data, you can get all your data on the cloud. The whole point of AI is to learn. It can learn patterns. The more data you can feed it, in terms of richness and thickness, the better it's going to be at predicting things and providing the powerful analytics you need and can use," he added. "When you apply AI to that data, you can make results more accurate because the system has more history to look at. It can learn faster, easier and smarter."

Early adopters are able to be more accurate now in their predictions, Gerstl said. They can achieve improved operational equipment efficiency (OEE), better estimate when orders will be complete, and better predict and prevent problems, he said.

"They've had time to learn from the data, tweak the AI, and tweak their models," he said. "They can see trends and take action faster than before. People who are further along are just better at it."

Early adopters also are seeing benefits as non-technical workers within their companies are able to mine the data for insight, Gerstl said.

"These citizen data scientists are able to create some very powerful analytics that help them do their jobs better and faster, make products with higher quality, and result in less equipment downtime," he said.

One way to inspire factory managers, citizen data scientists and others to become early adopters is to show them the possibilities, Christie said.

One engaging example is using a $50 computer running AI to sort cucumbers (watch a related Mediacorp video).

"With very little technology and open-source software, you can set up something that was unheard of, or prohibitively expensive, 20 years ago and put people into the mindset of how you can train AI," he said.

Christie recommends his clients consider buying or building such a machine, or at least watching the video to get line management and workers thinking about the possibilities.

The telecom and retail industries are ahead in adopting AI in the cloud, Gattu said. In manufacturing, automotive, energy and food and beverage are standouts, he added.

The equipment life cycle in a particular sector is one factor in how quickly AI in the cloud is adopted, he said.

The switch to artificial intelligence, cloud or otherwise, is slower in sectors that keep equipment a long time, as well as in highly regulated sectors, such as energy, he added.

Another reason for delays in scaling in domain-specific industries, such as process manufacturing, is that AI solutions are data-science-centric and less domain- and application-oriented, Gattu said. "A plant engineer should not have to learn about neural networks to improve the operations, just as a driver using a self-driving car does not have to know about deep learning. That's why our plant-monitoring solution uses a purpose-built AI engine to solve the needs of plant engineers and manufacturers."

While many early adopters have been large manufacturers, small and mid-size companies are also gaining benefits, Gerstl said.

GE Digital is starting to create what it calls starter kits with out-of-the-box AI to sell to smaller companies, he said.

One small company in the UK was able to use GE Digital's tool to solve a quality issue that had baffled executives and shop-floor workers at the company for a long time, he said.

Another company using GE Digital's tool had a consumer complaint about one of its products, Gerstl said. Without the analytics provided by the tool, solving the problem would have taken six months. "We did it in two weeks, and it took that long because it was the first time," he said. "We had all the data accessible and in a proper format and we had the tools that allowed us to get to the bottom of it."

AI in the cloud can help manufacturers improve poor-performing plants by learning what better-performing plants are doing correctly, Gattu said.

"What we generally see is that one plant in the United States has 15 days of downtime every year and another plant in Algeria in the same enterprise has 20 days of downtime," he said. "The difference between 15 and 20 days can be millions of dollars."

For example, when a piece of equipment is leaking, three different actions might be available to repair the leak, Gattu said: With one, the leak could be fixed in five days. With another, seven days. And with the third, only two days.

"Our AI solution is learning which of these actions is solving these problems faster," he said. "When the issue comes up for the fourth time, the manufacturer knows he should go with the third recommendation because that's able to solve the problem in two days. You can take that knowledge and present it to the person operating the plant in Algeria; you can transfer learning from one use case and make it available to other operations."
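
The core of what Gattu describes can be sketched very simply: record how long each repair action took for a given fault, then recommend the action with the best observed average. The fault names and durations below are invented for illustration, and UptimeAI's actual system is of course far richer than this.

```python
from collections import defaultdict

class RepairRecommender:
    """Toy sketch: learn which repair action resolves a fault fastest."""

    def __init__(self):
        # fault -> action -> list of observed repair times (in days)
        self.history = defaultdict(lambda: defaultdict(list))

    def record(self, fault, action, days):
        """Log one observed outcome for (fault, action)."""
        self.history[fault][action].append(days)

    def recommend(self, fault):
        """Return the action with the lowest average observed repair time."""
        outcomes = self.history[fault]
        return min(outcomes, key=lambda a: sum(outcomes[a]) / len(outcomes[a]))

# Hypothetical observations matching the three-action example above.
rec = RepairRecommender()
rec.record("pump seal leak", "replace gasket", 5)
rec.record("pump seal leak", "full overhaul", 7)
rec.record("pump seal leak", "tighten flange", 2)
# rec.recommend("pump seal leak")  -> "tighten flange"
```

Because the history lives in one shared store, an outcome logged at the U.S. plant immediately informs the recommendation shown to the operator in Algeria, which is the transfer-learning point Gattu is making.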

Achieving greater efficiency is a key benefit, Gerstl said. For example, some food and beverage manufacturers achieve efficiencies in the 90s.

"They have to be efficient to remain competitive," he said. "With this data, we can predict relationships and trends. In one case, we discovered a direct relationship between performance speed and quality: After a certain point, the faster you went, the worse your quality got."

By combining the cloud with new or existing AI, manufacturers can compare results and the individual variables leading to those results (machine to machine, factory to factory and potentially among other manufacturers in the same sector) to achieve better productivity and efficiency, Christie and others said.

Google bought AI firm DeepMind in 2014 and was able to deploy AI in the cloud to reduce cooling costs by 40 percent in its large data centers, he said.

"If you have many machines doing the same job, you can see what's working for one machine and optimize it machine to machine and factory to factory," he said. "AI in the cloud is the fastest way of optimizing across the board. If you're connecting all those machines to the cloud, it's easier to ask the questions and get input. Once you connect machines to the cloud, you can connect to every machine globally. If you wanted to sell information to other manufacturers doing something similar, that's valuable data. If there's no direct competition, sharing data would help you both."

AI in the cloud offers flexible scaling, Gattu said.

"AWS, Google and others specialize in how to keep these systems running," he said. "They have automatic scaling: If you have more work, they automatically scale to more machines. If you don't have work, they automatically scale down and you save on costs."

Companies that only have AI on premises are stuck with the hardware and can't scale easily, Gattu said. "Cloud gives you a lot more flexibility and access to more powerful servers. You don't have to buy all the high-end compute power and then get stuck with those servers or have to upgrade every two years. With cloud, updating is easy because your vendor is getting new machines. You can scale enormously, to petabytes, in the cloud. On premises, you have to struggle to get that kind of scale without losing performance."

Manufacturers can take advantage of a variety of tools on hosted platforms, such as Azure, and start small. Once they get their applications running, they can then scale without worrying about buying the hardware or managing the software, Gerstl said.

"The hardware is very costly on premises," he said. "It cost $100,000 for one of our customers to set up the hardware they use on their site. The procurement process, especially at large corporations, is a real pain. It can take six weeks to get the hardware set up. In that six weeks, the company could already be set up [for AI in the cloud] on Azure or another platform."

Another appeal is to codify the domain knowledge of factory-level subject matter experts, many of whom are approaching retirement, Gattu said.

With AI, manufacturers can bank that expert knowledge before these experts retire and add it to their smart factory tools. "The next generation expects to get this kind of knowledge through tools," he said.

The best AI-in-the-cloud products have the ability to capture domain knowledge and are continuously learning, Gattu said.

"You need the subject matter expert who knows what it means when pressure changes in a pump," he said. "In our solution, we bridge the gap between the AI, the domain knowledge, and the self-learning workflows. If you really want to get the ability to learn, to explain, to understand what is really going on, you need to have a feedback loop. In our plant-monitoring system, the AI continually learns from what the user is doing, from data coming in, from maintenance actions being taken."

"Once you have that learning and you have an application that can learn on its own, the growth of that knowledge is exponential," he added. "Today, you might be two steps ahead. Tomorrow you might be 10 steps ahead because it's a machine that's learning. If there is a set of knowledge you want to build in five years, you could do the same in a year with AI. It can really increase your rate of efficiency and continuously improve the organization's operations."

In addition to sorting cucumbers, manufacturers can learn lessons from some other unlikely sources. For example, hedge funds continue to sponsor competitions to build the best algorithm based on a set of data, Christie said.

Or, consider Formula One racing.

"The weather impacts both racing and manufacturing production," he said. "Slight changes in temperature, wind and rain impact how a car performs, and those same changes can impact factory production in real time." Additionally, global weather events, such as a hurricane 1,000 miles away at a critical point in a supply chain, can also impact production.

"A car going the smallest fraction faster makes a difference," Christie said. "They have huge amounts of data being used in real time to make real-time decisions as to what to do next. It's a very good industry to look at: how to make fast decisions and how to change when new data is available. It becomes a mirror of what a factory should do."

For example, fiberglass work is very sensitive to changes in temperature and humidity, he said.

An AI system could learn from outcomes when temperatures in a factory are hotter or cooler, humid or less humid, and see what enables the best outcome.

"Let the AI set the temperature and humidity," Christie said. "Your staff, with their huge amount of knowledge, normally has a good idea. But to correct for something as large as a factory can be difficult. Manufacturers can put some sensors in and set up simple questions: What's ideal for your manufacturing performance?"

But algorithms are not foolproof or an end-all solution. "One thing that needs to be realized is when you build an algorithm, you're building off data and off a human's decision as to what's important and what isn't," he said. "We need to keep in mind that algorithms are not foolproof."

While humans will remain the drivers of creative design for a long time, AI with the cloud can automate and improve the process, Farooq said.

With AI in the cloud, product designers could immediately see the impact of choosing a particular component or other supply, looking at time to source, overall availability, cost, tariffs, natural disasters and multiple other factors.

Envision a product designer inputting different components and raw materials into a design program and seeing the impact in real timein terms of time needed to receive supplies, accessibility, price and the different factories where the part could be made.

"Data that exists around different supplies available could be fed into a system, and the system could make recommendations in real time affecting not only the design itself but also the type of supplies that might be used in the process," Farooq said. "Based on that, the system can tell me what kind of part can be available at what time and from where so I can design a particular part correctly."

There is no need even to run a simulation, because the system updates the end results as parameters change, he said.
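The workflow Farooq describes can be sketched as a simple lookup that re-rolls sourcing impact every time the designer swaps a part. All catalog entries and part names below are invented for illustration; a real system would pull live supplier data.

```python
# Hypothetical sketch: a design tool recomputing sourcing impact as the
# designer swaps components, instead of running a separate simulation.
CATALOG = {
    "aluminum_bracket": {"lead_time_days": 12, "cost": 4.10, "factories": ["MX", "VN"]},
    "steel_bracket":    {"lead_time_days": 30, "cost": 2.60, "factories": ["CN"]},
    "m4_bolt":          {"lead_time_days": 5,  "cost": 0.08, "factories": ["CN", "DE", "US"]},
}

def sourcing_impact(bill_of_materials):
    """Roll up lead time, cost, and supplier risk for a parts list."""
    parts = [CATALOG[p] for p in bill_of_materials]
    return {
        "lead_time_days": max(p["lead_time_days"] for p in parts),
        "total_cost": round(sum(p["cost"] for p in parts), 2),
        "sole_sourced": [b for b in bill_of_materials
                         if len(CATALOG[b]["factories"]) == 1],
    }

design_a = sourcing_impact(["aluminum_bracket", "m4_bolt"])
design_b = sourcing_impact(["steel_bracket", "m4_bolt"])
print(design_a)  # lead time 12 days, no sole-sourced parts
print(design_b)  # lead time 30 days, steel_bracket is sole-sourced
```

Swapping the aluminum bracket for the cheaper steel one immediately surfaces the trade-off: lower cost, but a much longer lead time and a single-source supply risk.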

"As the design work is happening, they are being given this information proactively by the system," Farooq said. "They don't have to pause their work, run a simulation and come back."

Link:

Supercharging AI to leave the productivity slump in the dust - Advanced Manufacturing

AI Weekly: GoodAI aims to fund research on fundamental AI challenge – VentureBeat

Elevate your enterprise data technology and strategy at Transform 2021.

There's a growing need for investment in foundational AI technologies. With deep learning potentially approaching computational limits and subfields like natural language processing running up against intractable technical barriers, novel AI and machine learning techniques have arguably never been in higher demand.

NYU psychologist Gary Marcus, Google software engineer Francois Chollet, and Facebook head of AI Jerome Pesenti, among others, have argued that the lack of progress isn't surprising, as researchers face challenges both algorithmic and scientific. Even the most sophisticated AI models can suffer from catastrophic forgetting, a tendency to abruptly forget previously learned information, in addition to a lack of reproducibility, explainability, stability, and reliability.
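Catastrophic forgetting is easy to show in miniature. The toy example below, with a single weight and made-up data, is not a real neural network, but it exhibits the same failure mode: fit task A by gradient descent, then train only on task B, and performance on task A collapses.

```python
# Toy illustration of catastrophic forgetting: a single weight fit to task A
# by gradient descent drifts away once training switches entirely to task B.
def train(w, data, lr=0.1, steps=200):
    for _ in range(steps):
        for x, y in data:
            w -= lr * (w * x - y) * x  # squared-error gradient step
    return w

task_a = [(1.0, 2.0)]   # optimal weight for task A is 2.0
task_b = [(1.0, -1.0)]  # optimal weight for task B is -1.0

w = train(0.0, task_a)
error_on_a_before = abs(w * 1.0 - 2.0)
w = train(w, task_b)            # sequential training, no rehearsal of task A
error_on_a_after = abs(w * 1.0 - 2.0)
print(error_on_a_before, error_on_a_after)  # near zero, then roughly 3
```

Techniques like rehearsal or regularizing toward old weights exist precisely to blunt this effect; the point here is only how readily it appears.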

That's why Marek Rosa, a Slovakian entrepreneur and computer programmer, founded GoodAI, a company dedicated to the research and development of artificial general intelligence (AGI). He's also the CEO and founder of Keen Software House, an independent video game design studio headquartered in Prague, the capital of the Czech Republic.

Rosa founded GoodAI in 2014 with a $10 million investment, then announced the company publicly and published its first research roadmap in 2015 and 2016, respectively. In 2017, he founded the General AI Challenge, pledging $5 million in prize money to tackle critical research problems in human-level AI development.

GoodAI now employs around 20 researchers and engineers. Its newest endeavor is the GoodAI Grants Initiative, which aims to fund efforts in areas like curiosity and continual learning. To date, the initiative has awarded over $650,000, all from Rosa, to nine projects that GoodAI considers part of its roadmap to general AI.

"What makes us different [from other grant organizations] is our openness and flexibility and our willingness to work with potential grantees in creating a fitting proposal," GoodAI PR manager Will Millership told VentureBeat in an email interview. "We really don't want to be limited in who we work with by bureaucracy, and therefore we work with individual scientists, groups of researchers, private companies, and even individual students. We do a lot of work to make sure that all the intellectual property from the projects is shared, but this doesn't necessarily mean completely open. Each agreement in place aims to respect the academic and business interests of both GoodAI and the receivers of the grants."

In December 2019, Rosa and the GoodAI team published Badger, a unifying AI architecture defined by a principle GoodAI calls modular lifelong learning. Badger, which outlines the direction of GoodAI's research, seeks to create a system of AI agents capable of adapting to a growing, open-ended range of tasks while remaining able to reuse knowledge acquired in previous tasks.

"Our aim is to develop safe general AI as fast as possible to help humanity and understand the universe," Millership said. "We see the creation of human-level AI as the biggest challenge to mankind and a task far beyond that of an individual researcher or research group. That's why we believe collaboration, and not competition, is the best way forward."

Among GoodAI's grant recipients is Deepak Pathak, an assistant professor at Carnegie Mellon University who's taking inspiration from developmental psychology, particularly how curiosity drives humans' early developmental learning. Another is Ferran Alet, a Ph.D. student at MIT's Computer Science and Artificial Intelligence Laboratory, who's aiming to build an AI model that generalizes to new tasks in new environments from small amounts of data and previous experience.

GoodAI's ambition, AGI (the hypothetical intelligence of a machine with the capacity to understand or learn any task), has its detractors. Facebook chief AI scientist Yann LeCun believes that it can't exist, because there's no such thing as general intelligence. He argues that even human intelligence is very specialized, requiring many different systems to accomplish different individual tasks.

In something of a rebuttal, GoodAI recently released its latest research roadmap, which spotlights some of the technical challenges involved in creating human-level or general AI. GoodAI asserts that AGI must learn to learn and engage in lifelong learning, both continuously and at a gradual cadence. It also believes that AGI should be able to engage in open-ended exploration and invent its own goals, as well as generalize out of distribution and extrapolate to new problems.

"Each of these features reflects the ways in which humans learn throughout their lifetime, and therefore we see them as key to creating AI that's able to generalize to new problems in different environments, much like humans do," Millership said. "We [plan to] work closely with the grantees during their projects, offering support if they need it, and [put] on a seminar in the summer, where all grantees can share their ideas and projects. We're trying to create an international community of researchers crossing the boundaries of academia and industry."

Despite recent breakthroughs in solving barriers to AGI, it's clear the road to more humanlike AI will be long and winding. However, efforts like GoodAI's, along with nonprofit organizations and open communities like ContinualAI and EleutherAI, look to accelerate progress by tapping into the broader pool of AI and machine learning expertise.

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Read the original here:

AI Weekly: GoodAI aims to fund research on fundamental AI challenge - VentureBeat

Graphs as a foundational technology stack: Analytics, AI, and hardware – VentureBeat

How would you feel if you saw demand for your favorite topic, which also happens to be your line of business, grow 1,000% in just two years' time? Vindicated, overjoyed, and a bit overstretched in trying to keep up with demand, probably.

Although Emil Eifrem never used those exact words when we discussed the past, present, and future of graphs, that's a reasonable projection to make. Eifrem is chief executive officer and cofounder of Neo4j, a graph database company that claims to have popularized the term "graph database" and to be the leader in the graph database category.

Eifrem's and Neo4j's story and insights are interesting because through them we can trace what is shaping up to be a foundational technology stack for the 2020s and beyond: graphs.

Eifrem cofounded Neo4j in 2007 after he stumbled upon the applicability of graphs to applications with highly interconnected data. His initiation came while working as a software architect on an enterprise content management solution. Trying to model and apply connections between items, actors, and groups using a relational database ended up taking half of the team's time. That was when Eifrem realized they were trying to fit a square peg in a round hole. He thought there had to be a better way, and set out to make it happen.

When we spoke for the first time in 2017, Eifrem had been singing the "graphs are foundational, graphs are everywhere" tune for a while. He still is, but things are different today.

What was then an early adopter game has snowballed into the mainstream today, and it's still growing. "Graph relates everything" is how Gartner put it when including graphs in its top 10 data and analytics technology trends for 2021. At Gartner's recent Data & Analytics Summit 2021, graph was also front and center.

Interest is expanding as graph data takes on a role in master data management, tracking laundered money, connecting Facebook friends, and powering the search page ranker in a dominant search engine. Panama Papers researchers, NASA engineers, and Fortune 500 leaders: They all use graphs.

According to Eifrem, Gartner analysts are seeing explosive growth in demand for graph. Back in 2018, about 5% of Gartners inquiries on AI and machine learning were about graphs. In 2019, that jumped to 20%. From 2020 until today, 50% of inquiries are about graphs.

AI and machine learning are in extremely high demand, and graph is among the hottest topics in this domain. But the concept dates back to the 18th century, when Leonhard Euler laid the foundation of graph theory.

Euler was a Swiss mathematician whose solution to the Seven Bridges of Königsberg problem essentially invented graph theory. What Euler did was model the land masses as nodes and the bridges connecting them as edges in a graph.

That formed the basis for many graph algorithms that can tackle real-world problems. Google's PageRank is probably the best-known graph algorithm, helping score web page authority. Other graph algorithms are applied to use cases including recommendations, fraud detection, network analysis, and natural language processing, constituting the domain of graph analytics.
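PageRank itself is compact enough to sketch. The power-iteration version below is an illustrative implementation, not Google's production system; the three-page "web" is invented, and the damping factor of 0.85 follows the original formulation.

```python
# A compact power-iteration PageRank, the best-known graph algorithm
# mentioned above: rank flows along links, damped at each step.
def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outgoing in links.items():
            for m in outgoing:
                new_rank[m] += damping * rank[n] / len(outgoing)
        rank = new_rank
    return rank

# Three pages: A and B link to C, C links back to A.
web = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # "C" accumulates the most authority
```

C ends up ranked highest because two pages link to it, while B, with no inbound links at all, ranks lowest: authority follows the link structure, not the page itself.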

Graph databases also serve a variety of use cases, both operational and analytical. A key advantage they have over other databases is their ability to intuitively model, and quickly execute, data models and queries for highly interconnected domains. That's pretty important in an increasingly interconnected world, Eifrem argues:

"When we first went to market, supply chain was not a use case for us. The average manufacturing company would have a supply chain two to three levels deep. You can store that in a relational database; it's doable with a few hops [or degrees of separation]. Fast-forward to today, and any company that ships stuff taps into this global fine-grained mesh, spanning continent to continent.

"All of a sudden, a ship blocks the Suez Canal, and then you have to figure out how that affects your business. The only way you can do that is by digitizing it; then you can reason about it and model the cascading effects. In 2021, you're no longer talking about two to three hops. You're talking about supply chains that are 20, 30 levels deep. That requires using a graph database. It's an example of this wind behind our back."
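The cascading-effects reasoning Eifrem describes is, at its core, graph traversal. The sketch below uses a made-up five-node chain and plain breadth-first search to show what "20 or 30 hops deep" means in practice; a graph database answers the same question declaratively over millions of nodes.

```python
# Hedged sketch: model a supply chain as a graph and walk it breadth-first
# to see what a disruption (say, a blocked shipping route) cascades into.
from collections import deque

# edges: each supplier -> the parties it ships to (hypothetical chain).
supply_chain = {
    "suez_route": ["chip_fab"],
    "chip_fab": ["board_assembler"],
    "board_assembler": ["device_maker"],
    "device_maker": ["retailer"],
    "retailer": [],
}

def affected(graph, disrupted):
    """Return {node: hops from the disruption} for everything downstream."""
    depths, queue = {disrupted: 0}, deque([disrupted])
    while queue:
        node = queue.popleft()
        for downstream in graph.get(node, []):
            if downstream not in depths:
                depths[downstream] = depths[node] + 1
                queue.append(downstream)
    return depths

impact = affected(supply_chain, "suez_route")
print(impact["retailer"])  # 4 hops away, yet still affected
```

In a relational store, each extra hop is another join; in a graph, depth is just how far the traversal walks, which is why deep supply chains favor the graph model.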

The graph database category is actually a fragmented one. Although they did not always go by that name, graph databases have existed for a long time. An early branch of graph databases is RDF databases, based on Semantic Web technology and dating back about 20 years.

Crawling and categorizing content on the web is a very hard problem to solve without semantics and metadata. This is why Google adopted the technology in 2010, by acquiring MetaWeb.

What we get by connecting data and adding semantics to information is an interconnected network that is more than the sum of its parts. This graph-shaped amalgamation of data points, relationships, metadata, and meaning is what we call a knowledge graph. Google introduced the term in 2012, and it's now used far and wide.

Knowledge graph use cases are booming. Having reached peak attention in Gartner's hype cycle for AI in 2020, applications are trickling down from the Googles and Facebooks of the world to mid-market companies and beyond. Typical use cases include data integration and virtualization, data mesh, catalogs, metadata, and knowledge management, as well as discovery and exploration.

But there's another use of graphs that is blossoming: graph data science and machine learning. "We have connected data, and we want to store it in a graph, so graph data science and graph analytics is the natural next step," said Alicia Frame, Neo4j's graph data science director.

"Once you've got your data in the database, you can start looking for what you know is there, so that's your knowledge graph use case," Frame said. "I can start writing queries to find what I know is in there, to find the patterns that I'm looking for. That's where data scientists get started: I've got connected data, I want to store it in the right shape.

"But then the natural progression from there is: I can't possibly write every query under the sun. I don't know what I don't know. I don't necessarily know what I'm looking for, and I can't manually sift through billions of nodes. So you want to start applying machine learning to find patterns, anomalies, and trends."
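The "find the patterns I know are there" step can be illustrated without a database at all. The toy edge list and co-purchase pattern below are invented stand-ins for what a graph query language like Cypher would express declaratively: two different customers connected through the same product.

```python
# Rough Python stand-in for a known-pattern graph query: scan a small
# edge list for co-purchases, i.e. one product bought by several people.
edges = [
    ("alice", "BOUGHT", "camera"),
    ("bob", "BOUGHT", "camera"),
    ("bob", "BOUGHT", "tripod"),
    ("carol", "BOUGHT", "tripod"),
]

def co_purchasers(edge_list):
    """Group buyers by product, keeping only products with multiple buyers."""
    buyers = {}
    for person, rel, product in edge_list:
        if rel == "BOUGHT":
            buyers.setdefault(product, []).append(person)
    return {product: people for product, people in buyers.items()
            if len(people) > 1}

print(co_purchasers(edges))
# {'camera': ['alice', 'bob'], 'tripod': ['bob', 'carol']}
```

This is the knowledge-graph half of Frame's progression: a pattern you can name and query. Machine learning takes over precisely where you can no longer write the pattern down.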

As Frame pointed out, graph machine learning is a booming subdomain of AI, with cutting-edge research and applications. Graph neural networks operate on graph structures, as opposed to other types of neural networks that operate on vectors. What this means in practice is that they can leverage additional information.
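What "leveraging additional information" means can be shown schematically. The snippet below is not a trained graph neural network, just one unweighted message-passing round over a made-up graph, but it captures the core move: each node's new representation mixes in its neighbors' features, so the graph structure itself shapes the result.

```python
# Minimal message-passing sketch: a node's new feature is the mean of its
# own feature and its neighbors'. A plain vector model would see only the
# per-node features; here, structure changes the outcome.
graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
features = {"a": 1.0, "b": 3.0, "c": 5.0}

def message_pass(graph, feats):
    return {
        node: (feats[node] + sum(feats[n] for n in neighbors)) / (1 + len(neighbors))
        for node, neighbors in graph.items()
    }

print(message_pass(graph, features))  # {'a': 3.0, 'b': 2.0, 'c': 3.0}
```

Real graph neural networks replace the plain mean with learned transformations and stack several such rounds, but the information flow along edges is the same.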

Neo4j was among the first graph databases to expand its offering to data scientists, and Eifrem went as far as to predict that by 2030, every machine learning model will use relationships as a signal. Google started doing this a few years ago, and it's proven that relationships are strong predictors of behavior.

What will naturally happen, Eifrem went on to add, is that machine learning models that use relationships via graphs will outcompete those that don't. And organizations that use better models will outcompete everyone else, a case of Adam Smith's invisible hand.

This confluence of graph analytics, graph databases, graph data science, machine learning, and knowledge graphs is what makes graph a foundational technology. It's what's driving use cases and adoption across the board, as well as the evolution from databases to platforms that Neo4j also exemplifies. Taking a decade-long view, Eifrem noted, there are four pillars on which this transition rests.

The first pillar is the move to the cloud. Though it's probably never going to be a cloud-only world, we are quickly going from on-premises first to cloud first to database-as-a-service (DBaaS). Neo4j was among the first graph databases to feature a DBaaS offering, being in the cohort of open source vendors Google partnered with in 2019. "It's going well, and AWS and Azure are next in line," Eifrem said. Other vendors are pursuing similar strategies.

The second pillar is the emphasis on developers. This is another well-established trend in the industry, and it goes hand in hand with open source and the cloud. It all comes down to removing friction in trying out and adopting software. Having a version of the software that is free to use means adoption can happen in a bottom-up way, with open source having the added benefit of community. DBaaS means going from test cases to production can happen organically.

The third pillar is graph data science. As Frame noted, graph fills the fundamental requirement of representing data in a faithful way. The real world isn't rows and columns; it's connected concepts, and it's really complex. There's an extended network topology that data scientists want to reason about, and graph can capture this complexity. So it's all about removing friction, and the rest will follow.

The fourth pillar is the evolution of the graph model itself. The commercial depth of adoption today, although rapidly growing, is not on par with the benefits that graph can bring in terms of performance and scalability, as well as intuitiveness, flexibility, and agility, Eifrem said. User experience for developers and data scientists alike needs to improve even further, and then graph can become the No. 1 choice for new applications going forward.

There are actually many steps being taken in that direction. Some of them come in the form of acronyms such as GraphQL and GQL. They may seem cryptic, but they're actually a big deal. GraphQL is a way for front-end and back-end developer teams to meet in the middle, unifying access to databases. GQL is a cross-industry effort to standardize graph query languages, and the first new database language the ISO has adopted in the 30-plus years since SQL was formally standardized.

But there's more: the graph effect actually goes beyond software. In another booming category, AI chips, graph plays an increasingly important role. This is a topic in and of itself, but it's worth noting how, from ambitious upstarts like Blaize, GraphCore, and NeuReality to incumbents like Intel, there is emphasis on leveraging graph structure and properties in hardware, too.

For Eifrem, this is a fascinating line of innovation, but like SSDs before it, one that Neo4j will not rush to support until it sees mainstream adoption in datacenters. This may happen sooner rather than later, but Eifrem sees the end game as a generational change in databases.

After a long period of stagnation in database innovation, NoSQL opened the gates around a decade ago. Today we have NewSQL and time-series databases. What's going to happen over the next three to five years, Eifrem predicts, is that a few generational database companies are going to be crowned. There may be two, or five, or seven per category, but not 20, so we're due for consolidation.

Whether you subscribe to that view, or which vendors you place your bets on, is open for discussion. What seems like a safe bet, however, is the emergence of graph as a foundational technology stack for the 2020s and beyond.

Read more here:

Graphs as a foundational technology stack: Analytics, AI, and hardware - VentureBeat

Baidu debuts updated AI framework and R&D initiative – VentureBeat

At Wave Summit, Baidu's biannual deep learning conference, the company announced version 2.1 of PaddlePaddle, its framework for AI and machine learning model development. Among the highlights are a large-scale graph query engine; four pretrained models; and PaddleFlow, a cloud-based suite of machine learning developer tools that includes APIs and a software development kit (SDK). Baidu also unveiled what it's calling the Age of Discovery, a 1.5 billion RMB (~$235 million) grant program that will invest over the next three years in AI education, research, and entrepreneurship.

At Wave Summit, Baidu CTO Haifeng Wang outlined the top AI trends from the company's perspective. Deep learning with knowledge graphs has significantly improved the performance and interpretability of models, he said, while multimodal semantic understanding across language, speech, and vision has become achievable through graphs and language semantics. Moreover, Wang noted, deep learning platforms are coordinating closely with hardware and software to meet various development needs, including computing power, power consumption, and latency.

To this end, PaddlePaddle 2.1 introduces automatic mixed-precision optimization, which can speed up the training of models, including Google's BERT, by up to three times. New APIs reduce memory usage and further improve training speeds, and add support for data preprocessing, GPU-based computation, mixed-precision training, and model sharing.

Also included in PaddlePaddle 2.1 are four new language models built from Baidu's ERNIE. ERNIE, which Baidu developed and open-sourced in 2019, learns pretrained natural language tasks through multitask learning, in which multiple learning tasks are solved at the same time by exploiting commonalities and differences between them. Beyond this, PaddlePaddle 2.1 brings an optimized pruning-compression technology called PaddleSlim, as well as LiteKit, a toolkit for mobile developers that aims to reduce the development costs of edge AI.

PaddlePaddle Enterprise, Baidu's business-oriented set of machine learning tools, gained a new service this month in PaddleFlow, a cloud platform that provides capabilities for developers to build AI systems, including resource management and scheduling, task execution, and service deployment, via developer APIs, a command-line client, and an SDK.

In related news, Baidu says that as part of its new Age of Discovery initiative, the company will invest RMB 500 million ($78 million) in capital and resources to support 500 academic institutions and train 5,000 AI tutors and 500,000 students with AI expertise by 2024. Baidu also plans to pour RMB 1 billion ($156 million) into 100,000 businesses for intelligent transformation and AI talent training.

Laments over the AI talent shortage have also become a familiar enterprise refrain. O'Reilly's 2021 AI Adoption in the Enterprise report found that a lack of skilled people and difficulty hiring topped the list of challenges in AI, with 19% of respondents citing it as a significant barrier. In 2018, Element AI estimated that of the 22,000 Ph.D.-educated researchers working on AI development and research globally, only 25% were well-versed enough in the technology to work with teams to take it from research to application.

"PaddlePaddle researchers and developers will collaborate with the open source community to build a deep learning open source ecosystem and break the boundaries of AI technology," Baidu said in a press release. "With the permeation of AI across various industries, it is critical for platforms to keep lowering their threshold to accelerate intelligent transformation."

Go here to see the original:

Baidu debuts updated AI framework and R&D initiative - VentureBeat

Samsung and Bayer invest in A.I. doctor app Ada Health – CNBC

Berlin-based Ada Health, which has developed a doctor-in-your-pocket style app that uses artificial intelligence to try to diagnose symptoms, has been backed by investment arms of South Korea's Samsung and German pharmaceutical giant Bayer.

Ada Health announced Thursday it has raised a $90 million funding round at an undisclosed valuation that brings total investment in the company up to around $150 million.

Bayer led the round through its Leaps by Bayer investment arm, while Samsung invested through the Samsung Catalyst Fund, a U.S.-based venture capital fund that Samsung Electronics uses to back companies worldwide. Samsung Electronics' former chief strategy officer and corporate president, Young Sohn, has joined the board of Ada Health.

Founded in 2011 by entrepreneurs Dr. Claire Novorol, Martin Hirsch and Daniel Nathrath, Ada Health says its app has been downloaded over 11 million times.

"The app basically works like a WhatsApp chat with your trusted family doctor, but 24/7," CEO Nathrath told CNBC.

The patient starts by entering their symptoms, and an AI chatbot asks a series of questions to try to determine the issue. After that, the app presents the patient with the conditions that are most likely to be the cause and offers some suggestions on what to do next to address the issue.
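The shape of that flow, collect reported symptoms, then rank candidate conditions, can be sketched in a few lines. To be clear, this is not Ada's actual logic or medical knowledge base; the conditions, symptoms, and scoring rule below are invented purely to illustrate the interaction pattern.

```python
# Illustrative only (not Ada's real model): a tiny rule-based triage step
# that ranks candidate conditions by how many of their characteristic
# symptoms the user reported.
CONDITIONS = {
    "common cold": {"runny nose", "sore throat", "cough"},
    "flu": {"fever", "cough", "aches"},
    "allergies": {"runny nose", "itchy eyes"},
}

def rank_conditions(reported_symptoms):
    reported = set(reported_symptoms)
    scored = [(len(symptoms & reported), name)
              for name, symptoms in CONDITIONS.items()]
    # Highest overlap first; drop conditions with no matching symptoms.
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(rank_conditions(["fever", "cough", "aches"]))
# flu ranks first; the common cold shares only "cough"
```

A production symptom checker would use probabilistic reasoning over a large medical knowledge base and ask follow-up questions to discriminate between candidates, rather than a one-shot overlap count.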

The iOS and Android apps give generic advice, such as seeing a GP within the next three days. But when patients interact with Ada Health through a health system that uses the app, they can go straight into booking an appointment and sharing the outcome of their pre-assessment with a real doctor, Nathrath said.

He said the company has signed deals with several health systems, health insurers and life sciences companies. Axa OneHealth, Novartis, Pfizer and SutterHealth are listed as partners on Ada Health's website.

While the app is free for patients to download, Ada Health charges partners for access to its software.

The company said the new funding will be used to help it expand deeper into the U.S., which is already its biggest market with 2 million users. Elsewhere, Ada Health has roughly 4 million users across the U.K., Germany, Brazil and India, with roughly 1 million in each.

The funding will also be used to improve the company's algorithms, add to the medical knowledge base and go beyond 10 languages, Nathrath said.

He also wants to feed the Ada Health app with more information beyond the symptom data provided by the patient. That could include lab data, genetic testing, and sensor data, Nathrath said.

"Smartwatches and other sensors have really made a big leap forward," Nathrath said. "Nowadays you can measure your blood pressure, you can do an ECG, measure heart rate variability and blood oxygen levels."

"Our ambition is really to build what we call a personal operating system for health, where you wouldn't just have a symptom check, but you would be able to integrate all relevant sources of health information, in a way where ideally Ada becomes this companion that can alert you before the 100 problem becomes a 100,000-a-year problem."

Ada Health has received less funding than other "doctor" apps like Babylon and Kry.

Unlike Babylon and Kry, Ada Health doesn't allow patients to hold a video call with a GP.

Ada briefly ran a service called Doctor Chat that allowed users to consult with a registered GP through an on-demand chat portal. However, it was deactivated in March 2018 after being live for around a year.

"We were expecting a lot more people to actually use this than they did," Nathrath said, adding that people prefer the automated chat experience to video calls with GPs.

"When you look at telehealth, you can't scale it as well as you can an AI solution because you still need to hire a lot of doctors in different countries," Nathrath said.

The investment in Ada Health comes just over two weeks after British health start-up Huma raised $130 million from the venture arms of Bayer, Samsung and Hitachi.

Other investors in Ada Health's latest round include Vitruvian Ventures, Inteligo Bank, F4 and Mutschler Ventures.

Go here to see the original:

Samsung and Bayer invest in A.I. doctor app Ada Health - CNBC

How soft law is used in AI governance – Brookings Institution

As an emerging technology, artificial intelligence is pushing regulatory and social boundaries in every corner of the globe. The pace of these changes will stress the ability of public governing institutions at all levels to respond effectively. Their traditional toolkit, the creation or modification of regulations (also known as hard law), requires ample time and bureaucratic procedure to function properly. As a result, governments are unable to swiftly address the issues created by AI. An alternative for managing these effects is soft law, defined as a program that creates substantial expectations that are not directly enforceable by the government. As soft law grows in popularity as a tool to govern AI systems, it is imperative that organizations gain a better understanding of current deployments and best practices, a goal we aim to facilitate with the launch of a new database documenting these tools.

The governance of emerging technologies has relied on soft law for decades. Entities such as governments, private sector firms, and non-governmental organizations have all attempted to address emerging technology issues through principles, guidelines, recommendations, private standards, and best practices, among other instruments. Compared to their hard law counterparts, soft law programs are more flexible and adaptable, and any organization can create or adopt one. Once programs are created, they can be adapted to reactively or proactively address new conditions. Moreover, they are not legally tied to specific jurisdictions, so they can easily apply internationally. Soft law can serve a variety of objectives: It can complement or substitute for hard law, and operate as a main governance tool or as a back-up option. For all these reasons, soft law has become the most common form of AI governance.

The main weakness of soft law governance tools is their lack of enforcement. In place of enforcement mechanisms, the proper implementation of soft law governance relies on aligning the incentives of a program's stakeholders. Unless these incentives are clearly defined and well understood, the effectiveness and credibility of soft law will be questioned. To prevent the creation of soft law programs incapable of managing the risks of AI, it is important that stakeholders consider the inclusion of implementation mechanisms and appropriate incentives.

As AI methods and applications have proliferated, so too have soft law governance mechanisms to oversee them. To build on efforts to document soft law AI governance, the Center for Law, Science and Innovation at Arizona State University is launching a database with the largest compilation, to date, of soft law programs governing this technology. The data, available here, offer organizations and individuals interested in the soft law governance of AI a reference library from which to compare and contrast existing initiatives or draw inspiration for the creation of new ones.

Using a scoping review, the project identified 634 AI soft law programs published between 2001 and 2019 and labeled them using up to 107 variables and themes. Our data revealed several interesting trends. Among them, we found that AI soft law is a relatively recent phenomenon, with about 90% of programs created between 2017 and 2019. In terms of origin, higher-income regions and countries, such as the United States, United Kingdom, and Europe, were most likely to serve as a host to the creation of these instruments.

In the process of identifying stakeholders responsible for generating AI soft law, we found that government institutions have a prominent role in employing these programs. Specifically, more than a third (36%) were created by the public sector, which is evidence that usage of these tools is not confined to the private sector and that they can behave as a complement to traditional hard law in guiding AI governance. Multi-stakeholder alliances involving government, private sector, and non-profits and non-profit/private sector alliances followed with a 21% and 12% share of the programs, respectively.

We also looked at soft law's reliance on the alignment of incentives for implementation. Because government cannot levy a fee or penalty through these programs, stakeholders have to voluntarily agree to participate. Even so, about 30% of programs in the database publicly mention enforcement or implementation mechanisms. We analyzed these measures and found that they can be divided into four quadrants along two dimensions: internal vs. external, and levers vs. roles. The first dimension represents the location of the resources necessary for a mechanism's operation: whether it uses resources located within an organization or externally, through third parties. Levers, meanwhile, are the toolkit of actions or mechanisms (e.g., committees, indicators, commitments, and internal procedures) that an organization can employ to implement or enforce a program. The counterpart, roles, describes how individuals, the most important resource of any organization, are arranged to execute that toolkit of levers.

Finally, in addition to identifying each program's characteristics, we labeled the text of the programs themselves. This was done by creating 15 thematic categories, divided into 78 sub-themes, that touch on a wide variety of issues and make it possible to scrutinize how organizations interpret different aspects of AI. The three most labeled themes are education and displacement of labor, transparency and explainability, and ethics. Similarly, the most prevalent sub-themes were general transparency, general mentions of discrimination and bias, and AI literacy.

As AI proliferates and its governance challenges grow, soft law will become an increasingly important part of this technology's governance toolkit. An empirical understanding of the strengths and weaknesses of AI soft law will therefore be crucial for policymakers, technology companies, and civil society as they grapple with how to govern AI in a way that best harnesses its benefits while managing its risks.

By creating the largest compilation of AI soft law programs, our aim is to provide a critical resource for policymakers in all sectors focused on responding to AI governance challenges. Its intent is to aid decision-makers in their pursuit of balancing the advantages and disadvantages of this tool and facilitate a deeper understanding of how and when these programs work best. To that end, we hope that the AI soft law database's initial findings can suggest mechanisms for improving the effectiveness and credibility of AI soft law, or even catalyze the creation of new kinds of soft law altogether. After all, the future of AI governance, and by extension AI soft law, is too important not to get right.

Carlos Ignacio Gutierrez is a governance of artificial intelligence fellow at Arizona State University. He completed his Ph.D. in Policy Analysis at the Pardee RAND Graduate School.

Gary Marchant is Regents Professor and Faculty Director of the Center for Law, Science & Innovation, Arizona State University.

Continue reading here:

How soft law is used in AI governance - Brookings Institution

The imaging AI field is exploding, but it carries unique challenges – Healthcare IT News

The use of machine learning and artificial intelligence to analyze medical imaging data has grown significantly in recent years, with 60 new products approved by the U.S. Food and Drug Administration in 2020 alone.

But AI scaling, particularly in the medical field, can face unique challenges, said Elad Benjamin, general manager of Radiology and AI Informatics at Philips, during an Amazon Web Services presentation this week.

"The AI field is experiencing an amazing resurgence of applications and tools and products and companies," said Benjamin.

The question many companies are seeking to answer is: "How do we use machine learning and deep learning to analyze medical imaging data and identify relevant clinical findings that should be highlighted to radiologists or other imaging providers to provide them with the decision support tools they need?" Benjamin asked.

He outlined three main business models being pursued:

Benjamin described common challenges and bottlenecks in the process of developing and marketing AI tools, noting that some were specifically hard to tackle in healthcare.

Gathering data at scale is one hurdle, he noted, and diversity of information is critical and sometimes difficult to achieve.

And labeling data, for instance, is the most expensive and time-consuming step, and in healthcare it requires a professional's judgment (unlike in other industries, where a layperson can label an image as a "car" or a "street" without much trouble).

Receiving feedback and monitoring are critical too.

"You need to understand how your AI tools are behaving in the real world," Benjamin said. "Are there certain subpopulations where they are less effective? Are they slowly reducing in their quality because of a new scanner or a different patient population that has suddenly come into the fold?"
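The kind of monitoring Benjamin describes can be sketched as a per-subgroup quality check. This is a minimal illustration, not Philips' actual pipeline; the group names ("scanner_A", "scanner_B"), the toy records, and the tolerance value are invented for the example:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy per subpopulation from (group, label, prediction) records."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, label, pred in records:
        totals[group] += 1
        hits[group] += int(label == pred)
    return {g: hits[g] / totals[g] for g in totals}

def flag_underperforming(per_group, overall, tolerance=0.05):
    """Flag groups whose accuracy trails the overall rate by more than `tolerance`."""
    return sorted(g for g, acc in per_group.items() if acc < overall - tolerance)

# Hypothetical monitoring data: the model does well on scanner_A but poorly on scanner_B,
# the sort of drift that a new scanner or shifted patient population can introduce.
records = [
    ("scanner_A", 1, 1), ("scanner_A", 0, 0), ("scanner_A", 1, 1), ("scanner_A", 0, 0),
    ("scanner_B", 1, 0), ("scanner_B", 0, 0), ("scanner_B", 1, 0), ("scanner_B", 0, 1),
]
per_group = accuracy_by_group(records)
overall = sum(int(l == p) for _, l, p in records) / len(records)
print(per_group, flag_underperforming(per_group, overall))
```

Run continuously over production predictions, a check like this surfaces exactly the question Benjamin poses: which subpopulations is the model quietly failing on?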

Benjamin said Philips, with the help of AWS tools such as HealthLake, SageMaker and Comprehend, is tackling these bottlenecks.

"Without solving these challenges it is difficult to scale AI in the healthcare domain," he said.

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.

Go here to see the original:

The imaging AI field is exploding, but it carries unique challenges - Healthcare IT News

When will AI be ready to really understand a conversation? – Fast Company

Imagine holding a meeting about a new product release, after which AI analyzes the discussion and creates a personalized list of action items for each participant. Or talking with your doctor about a diagnosis and then having an algorithm deliver a summary of your treatment plan based on the conversation. Tools like these can be a big boost given that people typically recall less than 20% of the ideas presented in a conversation just five minutes later. In healthcare, for instance, research shows that patients forget between 40% and 80% of what their doctors tell them very shortly after a visit.

You might think that AI is ready to step into the role of serving as secretary for your next important meeting. After all, Alexa, Siri, and other voice assistants can already schedule meetings, respond to requests, and set up reminders. Impressive as today's voice assistants and speech recognition software might be, however, developing AI that can track discussions between multiple people and understand their content and meaning presents a whole new level of challenge.

Free-flowing conversations involving multiple people are much messier than a command from a single person spoken directly to a voice assistant. In a conversation with Alexa, there is usually only one speaker for the AI to track and it receives instant feedback when it interprets something incorrectly. In natural human conversations, different accents, interruptions, overlapping speech, false starts, and filler words like "umm" and "okay" all make it harder for an algorithm to track the discussion correctly. These human speech habits and our tendency to bounce from topic to topic also make it significantly more difficult for an AI to understand the conversation and summarize it appropriately.

Say a meeting progresses from discussing a product launch to debating project roles, with an interlude about the meeting snacks provided by a restaurant that recently opened nearby. An AI must follow the wide-ranging conversation, accurately segment it into different topics, pick out the speech that's relevant to each of those topics, and understand what it all means. Otherwise, "Visit the restaurant next door" might be the first item in your post-meeting to-do list.
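One crude way to segment such a conversation, offered purely as an illustration of the problem rather than any product's actual method, is to split the transcript wherever lexical overlap between adjacent utterances drops, on the theory that a topic change brings new vocabulary. The threshold and the sample utterances here are invented:

```python
def jaccard(a, b):
    """Word-overlap (Jaccard) similarity between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def segment(utterances, threshold=0.1):
    """Split a transcript into topic segments where lexical cohesion drops."""
    segments, current = [], [utterances[0]]
    for prev, cur in zip(utterances, utterances[1:]):
        if jaccard(prev, cur) < threshold:  # cohesion broke: start a new segment
            segments.append(current)
            current = [cur]
        else:
            current.append(cur)
    segments.append(current)
    return segments

utterances = [
    "the product launch slips to june",
    "june launch means the product demo moves too",
    "these snacks from the new restaurant are great",
    "agreed the restaurant next door is great",
]
print(segment(utterances))  # two segments: the launch talk, then the snack talk
```

Real systems use far richer signals (semantic embeddings, speaker turns, discourse cues), precisely because surface word overlap fails on the accents, interruptions, and topic-bouncing described above.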

Another challenge is that even the best AI we currently have isn't particularly good at handling jargon, industry-speak, or context-specific terminology. At Abridge, a company I cofounded that uses AI to help patients follow through on conversations with their doctors, we've seen out-of-the-box speech-to-text algorithms make transcription mistakes such as substituting the word "tastemaker" for "pacemaker" or "Asian populations" for "atrial fibrillation." We found that providing the AI with information about a conversation's topic and context can help. In transcribing conversations with a cardiologist, for example, medical terms like "pacemaker" become the default interpretation.
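A toy sketch of the general idea, biasing a transcript toward a domain vocabulary, can be built with fuzzy string matching. This is an illustration only, not Abridge's implementation; the lexicon, cutoff, and sample sentence are invented for the example:

```python
import difflib

# Hypothetical cardiology vocabulary; a real system would use a much larger lexicon.
LEXICON = ["pacemaker", "stent", "atrial", "fibrillation", "echocardiogram"]

def correct(transcript, lexicon=LEXICON, cutoff=0.7):
    """Replace words that closely resemble a lexicon term with that term."""
    out = []
    for word in transcript.split():
        match = difflib.get_close_matches(word.lower(), lexicon, n=1, cutoff=cutoff)
        out.append(match[0] if match else word)
    return " ".join(out)

print(correct("the tastemaker battery needs checking"))
# -> the pacemaker battery needs checking
```

In practice this biasing happens inside the recognizer (e.g., via a domain language model) rather than as a post-hoc rewrite, but the principle is the same: knowing the conversation involves a cardiologist makes "pacemaker" the likelier reading.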

The structure of a conversation is also influenced by the relationship between participants. In a doctor-patient interaction, the discussion usually follows a specific template: the doctor asks questions, the patient shares their symptoms, then the doctor issues a diagnosis and treatment plan. Similarly, a customer service chat or a job interview follows a common structure and involves speakers with very different roles in the conversation. Weve found that providing an algorithm with information about the speakers roles and the typical trajectory of a conversation can help it better extract information from the discussion.

Finally, it's critical that any AI designed to understand human conversations represents the speakers fairly, especially given that the participants may have their own implicit biases. In the workplace, for instance, AI must account for the fact that there are often power imbalances between the speakers in a conversation that fall along lines of gender and race. At Abridge, we evaluated one of our AI systems across different sociodemographic groups and discovered that the system's performance depends heavily on the language used in the conversations, which varies across groups.

While today's AI is still learning to understand human conversations, there are several companies working on this problem. At Abridge, we are currently building AI that can transcribe, analyze, and summarize discussions between doctors and patients to help patients better manage their health and ultimately improve health outcomes. Microsoft recently made a big bet in this space by acquiring Nuance, a company that uses AI to help doctors transcribe medical notes, for $16 billion. Google and Amazon have also been building tools for medical conversation transcription and analysis, suggesting that this market is going to see more activity in the near future.

Giving AI a seat at the table in meetings and customer interactions could dramatically improve productivity at companies around the world. Otter.ai is using AI's language capabilities to transcribe and annotate meetings, something that will be increasingly valuable as remote work continues to grow. Chorus is building algorithms that can analyze how conversations with customers and clients drive companies' performance and make recommendations for improving interactions with customers.

Looking to the future, AI that can understand human conversations could lay the groundwork for applications with enormous societal benefits. Real-time, accurate transcription and summarization of ideas could make global companies more productive. At an individual level, having AI that can serve as your own personal secretary can help each of us focus on being present for the conversations we're having without worrying about note taking or something important slipping through the cracks. Down the line, AI that can not only document human conversations but also engage in them could revolutionize education, elder care, retail, and a host of other services.

The ability to fully understand human conversations lies just beyond the bounds of today's AI, even though most humans are able to more or less master it before middle school. However, the technology is progressing rapidly and algorithms are increasingly able to transcribe, analyze, and even summarize our discussions. It won't be long before you find a voice assistant at your next business meeting or doctor's appointment ready to share a summary of what was discussed and a list of next steps as soon as you walk out the door.

Sandeep Konam is a machine learning expert who trained in robotics at Carnegie Mellon University and has worked on numerous projects at the intersection of AI and healthcare. He is the cofounder and CTO of Abridge, a company that uses AI to help patients stay on top of their health.

The rest is here:

When will AI be ready to really understand a conversation? - Fast Company

WIMI Holographic Academy Invest More in AI Technology Research, Holographic Technology Achieves Breakthrough in Mobile Interaction – GlobeNewswire

HONG KONG, May 28, 2021 (GLOBE NEWSWIRE) -- MobiusTrend, the fintech market research organization, recently released a research report, "WIMI Holographic Academy Invest More in AI Technology Research, Holographic Technology Achieves Breakthrough in Mobile Interaction". As holographic technology continues to develop, its range of applications keeps widening. At the same time, to bring audiences a more immersive experience and dig deeper into holographic R&D resources, outstanding scholars at home and abroad have invested in dedicated research exploring cutting-edge technologies.

According to a new study by the holography research group at Brigham Young University, researchers have found a way to create lightsaber-like displays: luminous beams drawn in thin air (green for Yoda, red for Darth Vader). This discovery overcomes a long-standing challenge in the field and brings a new concept of interaction to holographic technology.

First, we should understand what interaction means. It is generally understood as an exchange of information, with people as the main subject, between people and other things, producing mutual effect and influence; the exchange may be unilateral or bilateral. Objectively speaking, interaction has multi-level, multi-dimensional attributes. With continuous innovation in information technology, computer technology, virtual technology, and imaging technology, the concept of interaction is also changing subtly, and its continual updating will expand people's thinking and promote a diversification of interaction methods.

In an interview, one of the researchers, Dan Smalley, a professor of electrical engineering at BYU, said: "What they see in the scenes they create is real; there is nothing computer-generated." Unlike in the movies, where lightsabers and photon torpedoes never really exist in physical space, these objects are real and can even form simple animations in thin air. The development paves the way for immersive experiences in which people interact with holographic-like virtual objects that coexist in their immediate space. To demonstrate the principle, the team also created virtual stick figures that can walk in the air.

It can be said that the emergence of this brand-new interactive concept has changed people's preconceptions to a certain extent and brought new insight. Such innovation is a key factor in the development of the holographic industry, especially in holographic imaging technology. Through the holographic image, the audience enters a brand-new situation: beyond watching, they can feel the scene and gain a new experience. This "spatial reconstruction" model lets the interactive subject occupy the active position in the context. Holographic images have made human-computer interaction more thorough, enhanced communication between product and audience, given the audience a deeper understanding of the product, and established a behavioral relationship between product and person. At the same time, with the help of holographic images, designers can convey the design concepts and ideas embodied in a product to the audience, so that the audience can resonate with them.

The characteristics of the new interactive concept under holographic imaging technology are as follows. (1) Diversity. In perceiving the surrounding environment, vision and hearing complement each other, and holographic imaging technology fully integrates voice and vision: building on voice interaction, it breaks the shackles of 2D and becomes three-dimensional. Some special interactive media can even add smell, touch, and taste to sight and hearing, and this diversified experience raises interaction efficiency to a higher level. (2) Visualization. A virtual environment delivers auditory and visual sensations, but because it lacks touch, it still falls short of real-world experience, which is one of the limitations of virtualized interaction. Holographic images can be paired with force feedback: when the user touches the image, a force-feedback device returns information to the user, producing a more intuitive experience. (3) A large amount of information. Compared with traditional media, holographic images carry far more information. Digital images are refined by computing software, and multiple holographic images can be superimposed to form a digital virtual space that carries a great deal of information. This large capacity is what enables a fuller interactive experience for the audience.

To promote in-depth cooperation with academia, explore cutting-edge holographic technologies with scholars at home and abroad, promote industrial implementation, and open up research results, the WIMI Holographic Academy of Sciences is continuing its research into frontier AI technology and establishing strategic partnerships with scholars from research institutions. WIMI aims to explore disruptive emerging technologies with them to accelerate the application and promotion of research results. In 2020, relying on its research teams in Shenzhen and Beijing, the WIMI Holographic Academy of Sciences opened four research themes: holographic computing science, holographic communication science, micro-integration science, and holographic cloud science. Drawing on the Academy's team strength, WIMI is actively promoting the research and development of holographic products. It looks forward to exploring frontier AI technology with outstanding scholars from universities and research institutions at home and abroad, and to creating a sustainable, win-win ecosystem for AI industry-university research cooperation. The fineness of image information obtained by WIMI's holographic computer-vision AI synthesis is about 10 times the industry level, and its processing capability is about 80% better than the industry average.

At present, artificial intelligence technology is widely used in Internet applications, enterprise applications, and emerging intelligent-hardware scenarios; AI has penetrated all walks of life. As of 2020, the scale of China's core artificial intelligence industry was close to 65 billion yuan, spanning fields such as security, finance, medical care, and education. Facing different application requirements, AI has spawned a variety of machine learning approaches, such as deep learning, active learning, and reinforcement learning, aiming to deliver a richer experience and greater capacity.

WIMI emphasizes both research and application development, and its basic research focuses on machine learning, computer vision, and other directions. Also, it has published many research papers, and its technology applications focus on social, gaming, education, medical AI, and other fields. In terms of cloud computing, artificial intelligence, the Internet of Things, and other frontier technology fields, WIMI Hologram Cloud also relies on global and Chinese technology and business networks, and it has successively invested in companies in related fields, attracting many innovative companies, partners, and innovative talents. In the future, it is believed that WIMI will play a more important role in the application space and bring humans a better AI interactive experience.

About MobiusTrend

MobiusTrend Group is a leading market research organization in Hong Kong. They have built one of the premier proprietary research platforms on the financial market, emphasizing on emerging growth companies and paradigm-shifting businesses. MobiusTrend team is professional in market research reports, industry insights, and financing trends analysis. For more information, please visit http://www.mobiustrend.com/

Media contact

Company: MobiusTrend Research

E-Mail: cs@mobiustrend.com

Website: http://www.mobiusTrend.com

YouTube: https://www.youtube.com/channel/UCOlz-sCOlPTJ_24rMgR6JLw

Excerpt from:

WIMI Holographic Academy Invest More in AI Technology Research, Holographic Technology Achieves Breakthrough in Mobile Interaction - GlobeNewswire

More than half of Europeans want to replace lawmakers with AI, study says – CNBC


LONDON – A study has found that most Europeans would like to see some of their members of parliament replaced by algorithms.

Researchers at IE University's Center for the Governance of Change asked 2,769 people from 11 countries worldwide how they would feel about reducing the number of national parliamentarians in their country and giving those seats to an AI that would have access to their data.

The results, published Thursday, showed that despite AI's clear and obvious limitations, 51% of Europeans said they were in favor of such a move.

Oscar Jonsson, academic director at IE University's Center for the Governance of Change and one of the report's main researchers, told CNBC that there's been a "decades long decline of belief in democracy as a form of governance."

The reasons are likely linked to increased political polarization, filter bubbles and information splintering, he said. "Everyone's perception is that politics is getting worse and obviously politicians are being blamed so I think it (the report) captures the general zeitgeist," Jonsson said. He added that the results aren't that surprising "given how many people know their MP, how many people have a relationship with their MP (and) how many people know what their MP is doing."

The study found the idea was particularly popular in Spain, where 66% of people surveyed supported it. Elsewhere, 59% of the respondents in Italy were in favor and 56% of people in Estonia.

Not all countries like the idea of handing over control to machines, which can be hacked or act in ways that humans don't want them to. In the U.K., 69% of people surveyed were against the idea, while 56% were against it in the Netherlands and 54% in Germany.

Outside Europe, some 75% of people surveyed in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it.

Opinions also varied dramatically by generation, with younger people significantly more open to the idea. Over 60% of Europeans aged 25-34 and 56% of those aged 34-44 supported the idea, whereas a majority of respondents over 55 did not see it as a good idea.

Read the original:

More than half of Europeans want to replace lawmakers with AI, study says - CNBC