Tesla Issues Software Patch So That Its Cars Don’t Lose Power Steering on Potholes

Tesla has recalled more than 40,000 of its vehicles due to an issue that could cause a loss of power steering in its 2017-2021 Model S and Model X cars.

Pesky Potholes

Tesla has "recalled" more than 40,000 of its vehicles due to a glitch that could cause a loss of power steering, according to a safety recall report from the National Highway Traffic Safety Administration (NHTSA) that was filed last week and made public on Tuesday.

Despite officially being labeled a recall, the fix is really just an over-the-air software update that owners can download remotely.

Nevertheless, the issue does sound consequential. It applies to rare cases in which the company's 2017-2021 Model S and Model X cars' electronic power assist steering systems erroneously identify abrupt bumps such as potholes as "unexpected steering assist torque," the NHTSA said. In such cases, drivers could still steer their Teslas, but with much greater effort required, especially at lower speeds.

Fortunately, it doesn't look like anyone was hurt or got into any accidents as a result of the oversight, which is estimated to affect only one percent of the cars in question. As of the NHTSA report's release, 314 vehicles had reportedly been affected by the bug.

Pile Driver

The Elon-Musk-led automaker can let out a sigh of relief that this issue didn't turn out worse, because it's already garnered unwanted scrutiny from the NHTSA and other government bodies that could have potentially ruinous implications.

On the NHTSA's part, the regulator has been investigating crashes involving Tesla's Autopilot driving assistance system since August 2021. In June, it stated that it was significantly widening the scope of its investigation.

In August, the DMV in Tesla's home state of California accused the automaker of lying to customers by calling its separate driver assistance systems Autopilot and Full Self-Driving, names that could fool a driver into thinking the systems can fully drive on their own — which they can't.

And now, it was revealed in October, even the Department of Justice has reportedly been furtively probing Autopilot's misleading marketing.

At the end of the day, it's a fairly minor slip-up from Tesla, but one that's amplified by all the scrutiny the company has drawn from government bodies, both state and federal.

More on Tesla: Elon Musk Pulling Engineers From Tesla Autopilot to Work on Twitter

The post Tesla Issues Software Patch So That Its Cars Don't Lose Power Steering on Potholes appeared first on Futurism.


Elon Musk Says That Under His Brilliant New Management, Twitter May Go Out of Business

In emails to his new employees, freshly-minted Twitter czar Elon Musk told them that if they don't make money fast, the site may not survive.

But His Emails

In emails to his new employees, freshly-minted Twitter czar Elon Musk painted a pretty doom-tastic portrait of the road ahead for the social network's remaining employees — and told them that soon, they may all be out of a job.

Emails Musk sent to Twitter staff that were reviewed by The New York Times show that, at the very least, he's repeating the same line internally as he is on his own account: Twitter needs to be monetized — or else.

"Without significant subscription revenue," the serial CEO wrote, "there is a good chance Twitter will not survive the upcoming economic downturn."

And at a company meeting today, Musk reportedly told employees that "bankruptcy isn't out of the question."

Elon Musk emails Twitter employees

November 9, 2022 pic.twitter.com/Qeg5CA979W

— Internal Tech Emails (@TechEmails) November 10, 2022

PO'd

It's not a great way to start a friendly CEO-staff relationship, to say the least. But it's nevertheless the posture Musk is taking as he makes sweeping changes to the social network, changes that are, unsurprisingly, very unpopular with some of the workers left at the company following his mass layoff of half of Twitter's staff.

"Elon has shown that he cares only about recouping the losses he’s incurring as a result of failing to get out of his binding obligation to buy Twitter," one disgruntled employee wrote in an email to coworkers, according to the NYT. "This will put huge amount of personal, professional and legal risk onto engineers: I anticipate that all of you will be pressured by management into pushing out changes that will likely lead to major incidents."

To be fair, Twitter is now in some seriously dire financial straits under its new ownership, and, per the Times, will be required to pay $1 billion annually in interest under Musk's deal. Paired with advertisers' increasing wariness about the site's trajectory, things aren't looking great in Twitterland.

Nevertheless, this whole mess is indeed shaping up to be as bad as many predicted, with the new CEO following through with his $8 verification plan and all.

It raises the question: was killing Twitter Musk's plan all along?

More Musk: Elon Musk Is Suddenly Selling Tesla Stock Like Crazy


Scientists Reproduce Fascinating, Powerful Material Found in Meteorite

In an unprecedented experiment, two teams of scientists have replicated a material that was, until recently, not produced anywhere on Earth.

Spaced Out

In an unprecedented experiment, two teams of scientists on either side of the Atlantic have replicated a material that was previously not produced anywhere on Earth.

As NPR reports, the replication of this powerful compound could have huge implications not just for the manufacturing of high-end machinery, but for international relations as well.

Called tetrataenite, the iron-and-nickel compound normally forms as it cools over millions of years while tumbling around in asteroids. As a press release from the University of Cambridge notes, the researchers, working in tandem with Boston's Northeastern University, found that by adding phosphorus to the mix, they were able to make synthetic tetrataenite.

Scientists made a material that doesn't exist on Earth: The compound is called tetrataenite. If synthetic tetrataenite works in industrial applications, it could make green energy technologies significantly cheaper. via @nprscience @planetmoney https://t.co/LclRNO5d6w pic.twitter.com/4yd2s4U8oj

— RealClearScience (@RCScience) November 9, 2022

Trader Gold

Beyond it being really awesome that scientists have synthesized a mineral from space, the discovery of synthetic tetrataenite is also huge because it could be used as an alternative to rare earth minerals, those valuable and difficult-to-extract materials used in the production of the heavy-duty "permanent magnets" that power tech ranging from electric vehicles to NASA experiments.

Over the past few decades, China has dominated the rare earths market because many of these minerals are found within its territory, and it has the inexpensive manufacturing and labor capacity to undertake the laborious process of extracting them from other compounds.

Ramp It Up

With the new synthesis of tetrataenite, however, a future beyond a China-dominated rare earths market could unfold because, as an expert who spoke to NPR noted, it can be used as a replacement for most of the components of permanent magnets.

Northeastern's Laura Lewis cautioned against premature optimism, saying that ample testing needs to be done to make sure the synthetic is as hearty as the one found in meteorites — and even then, it'll still be at least five years, and probably more like eight, before it's "pedal to the metal" on manufacturing with it.

That said, however, it does provide an exciting look at the ways space materials can help us here on Earth — and hopefully bring about some positive international developments, too.

More on space: China Approves Three Moon Missions After Discovering Mineral That Could Be Energy Source


NASA Disputes Calling Its Inflatable Heat Shield a "Bouncy Castle"

Martian Bouncy Castle

It was an impressive feat: NASA launched a massive inflatable heat shield all the way into space, then tested it by splashing it down in the Pacific Ocean near Hawaii.

The stunt, dubbed the Low-Earth Orbit Flight Test of an Inflatable Decelerator (LOFTID), was meant to lay the groundwork for a system capable of landing humans safely on the surface of Mars.

At 30 feet in diameter, the flying saucer-shaped device is meant to act like a giant crash pad for spacecraft as they make their way through the atmosphere of an alien planet.

In other words, it's not unlike a bouncy castle that can be packed away when not in use, as The New York Times' Kenneth Chang suggested.

But that kind of comparison didn't sit well with the people in charge of the project.

"I would say that would be inaccurate," Neil Cheatwood, principal investigator for LOFTID, told Chang.

Splashdown

Early Thursday morning, an Atlas V rocket blasted off into low-Earth orbit with LOFTID, in its packed-up state, in tow.

Just over two hours later, the massive inflatable device screamed through the Earth's atmosphere, harmlessly splashing down near Hawaii.

The heat shield can act as a huge brake during descent, slowing down large payloads. It's designed to survive a massive 18,000 mph fall, and ward off blistering temperatures of up to 3,000 degrees Fahrenheit.

During future missions to the Red Planet, it could be our ticket to getting to the surface in one piece, according to NASA, when used in tandem with other systems such as parachutes or rockets.

But before we plan our first crewed mission to Mars, where's the harm in investigating if LOFTID could serve double duty as a bouncy castle once we get there?

READ MORE: NASA Launched an Inflatable Flying Saucer, Then Landed It in the Ocean [The New York Times]

More on landing on Mars: NASA Testing Giant "Crumple Zone" Gadget That Would Let Rovers Crash Into Mars and Survive


Divers Discover Fragment of Challenger Space Shuttle Under Ocean

Divers looking for a World War II aircraft wreckage off the Florida Space Coast discovered the heat shield remains of NASA's space shuttle Challenger.

A Rare Find

A TV documentary crew of divers who were looking for the wreckage of a World War II aircraft off the Florida Space Coast made a startling and unexpected discovery: the heat shield remains of NASA's space shuttle Challenger.

It's an incredibly rare space artifact that acts as a somber reminder of the deadly 1986 disaster, a dark chapter in the history of space exploration.

"While it has been nearly 37 years since seven daring and brave explorers lost their lives aboard Challenger, this tragedy will forever be seared in the collective memory of our country," NASA Administrator Bill Nelson said in a statement. "This discovery gives us an opportunity to pause once again, to uplift the legacies of the seven pioneers we lost, and to reflect on how this tragedy changed us."

What they uncover off the coast of Florida, outside of the Triangle, marks the first discovery of wreckage from the 1986 Space Shuttle Challenger in more than 25 years. Don’t miss the premiere of The Bermuda Triangle: Into Cursed Waters on Tuesday, November 22 at 10/9C. pic.twitter.com/LWUoFXxEnK

— HISTORY (@HISTORY) November 10, 2022

Challenger Discovery

According to the TV network History, it's the first Challenger wreckage to have been discovered in more than 25 years. Footage shared by the network shows divers examining small eight-inch tiles making up a large mosaic.

NASA now has to decide whether it wants to recover the wreckage. Other pieces of the Challenger spacecraft were put on display to the public for the first time back in 2015 at NASA's Kennedy Space Center Visitor Complex.

The fateful 1986 launch was NASA's 25th Shuttle mission, but 73 seconds after liftoff, it disintegrated at 46,000 feet, a tragedy watched live by countless people around the world on TV.

"Challenger and her crew live on in the hearts and memories of both NASA and the nation," said Kennedy Space Center Director Janet Petro in the statement.

"Today, as we turn our sights again toward the Moon and Mars, we see that the same love of exploration that drove the Challenger crew is still inspiring the astronauts of today’s Artemis Generation," she added, "calling them to build on the legacy of knowledge and discovery for the benefit of all humanity."

The History Channel will air its documentary about the rare find on November 22.

READ MORE: NASA Views Images, Confirms Discovery of Shuttle Challenger Artifact [NASA]

More on NASA: NASA Inspecting Moon Rocket for Damage From Hurricane Nicole


Unexploded Shell Removed From Soldier’s Chest by Surgeons Wearing Body Armor

Surgeons had to quickly remove an unexploded shell lodged in a Russian soldier's chest, with no guarantee that it wouldn't detonate at any moment.

A Russian soldier was rushed to the ER. His diagnosis? An unexploded shell lodged so deep in his chest it was almost touching his spine.

The soldier, junior sergeant Nikolay Pasenko, probably should've been dead already from either the impact or the impending detonation. But instead, defying all expectations, he lived — thanks to surgeons at the Mandryk Central Military Clinical Hospital who successfully removed the shell in an operation that's been dubbed a "miracle" by TASS, a state-owned Russian news agency.

Given Russia's ongoing and near-universally condemned war in Ukraine, you might be inclined to doubt the veracity of the source — but miracles like this have happened before.

"The patient was admitted with a wound that had penetrated [his] chest," the Russian Defense Ministry said in a statement, as quoted by TASS. "The examination revealed that the miraculously unexploded ordnance had pierced [his] ribs and lungs and got lodged close to the spinal cord, between the aorta and the inferior vena cava near the heart."

There was no guarantee that the munition wouldn't explode mid-surgery. The doctors — some military, some civilian — decided to operate on the soldier anyway, wearing body armor under their medical gowns, the Ministry said.

And the surgery had to be done fast — Pasenko was bleeding so profusely that there was no time to dawdle on a decision, let alone relocate to a safer or better equipped location.

"The unexploded shell was stuck between the aorta and the inferior vena cava close to the heart, which could have caused fatal bleeding even without the ordnance's detonation," Medical Corps Lieutenant-Colonel Dmitry Kim, who led the operation, told TASS. "A decision was made to carry out the surgery locally."

That decision proved to be the right call. The shell was removed without detonation, and a recovering Pasenko was shipped off to a central hospital.

But post-surgery, Pasenko said that, at the time, he was opposed to the doctors risking their lives.

"The surgeon ventured to perform the operation, I was against it," he told the Russian news agency. "And now you see that I am sitting in front of you."

"My thanks to surgeon Dmitry Kim and I will be grateful to him for the rest of my life. He replied: 'So, we will explode together.' That's it. He is a very courageous man," Pasenko said.


Chinese Space Debris Crashes Down in the Philippines

ABC News reports that Chinese space debris from another one of the nation's Long March 5B rockets was just discovered at sea off the Philippines' coastline.

Not Again

It happened again. ABC News reports that Chinese space debris from another one of the nation's heavy lift Long March 5B rockets was just discovered at sea off the Philippines' coastline.

The rocket remains are believed to be those of the Long March 5B that launched from the Wenchang Space Launch Center on Hainan island last week, which was reportedly carrying a payload with laboratory materials to the Tiangong Chinese space station.

This isn't the first time that the Philippines has been threatened by Chinese space junk. Now, per ABC, officials from the Philippine Space Agency are pushing authorities in Manila to ratify UN treaties regarding space junk. If those treaties are ratified, citizens of the island nation would be able to seek restitution for any injury or damage caused by falling rocket debris.

Sky Fall

Considering that the Philippines is under China's direct spaceflight path, it's fair for officials to worry. In fact, back in August the nation was technically hit twice by Long March 5B junk — once at the beginning of the rocket's launch, and once at the end.

"This shows that the risk is higher for us," an official told the Philippine newspaper The Inquirer at the time, "because we are under the flight path of most Chinese rocket launches."

Though neither of the recent Long March 5B crashes near the island nation actually hit land, they very well could have. After all, debris has done so before. A defunct rocket core made landfall in West Africa in the spring of 2020, and more recently, a chunk of a Long March 2D — a different, but apparently equally chaotic, rocket — crashed into a Chinese field. And while no lives have been taken by falling space junk thus far, experts have warned that there's a ten percent chance that falling cosmic trash will cause human casualties in the next decade.

For its part, China has yet to express any legitimate concern over its extremely messy rockets. And as there's yet to be much in the way of international governance for ensuring that any and all spacefaring nations keep potentially dangerous debris in check, it appears to have little incentive to change its ways.

READ MORE: Suspected Chinese rocket debris found in Philippine waters [ABC News]

More on dangerous debris: Large Chunk of Chinese Rocket Comes Crashing down, Lodges in Field


Elon Musk Might Get Thrashed by Lawsuit From Heavy Metal Drummer

Richard Tornetta, a former metal drummer, sued CEO Elon Musk back in 2018 in a suit that is headed to court next week. Experts say Musk should be worried.

Tesla CEO Elon Musk just might get shredded this time.

Richard Tornetta, a former metal drummer who made a small investment in Tesla, sued CEO Elon Musk and the company's board in what is called a "shareholder derivative lawsuit" back in 2018, Reuters reports.

The case survived a 2019 motion to dismiss and is set to kick off in a Delaware court on Monday. The trial will feature Musk's own testimony and will be presided over by Kathaleen McCormick, the same judge who oversaw his initial bid to get out of his chaotic Twitter deal.

If Tornetta were to win, Musk's 2018 stock grant pay package, worth some $56 billion, would be rescinded: a potentially devastating blow, especially considering the fact that Musk has already been selling off appreciable amounts of Tesla stock to fund his acquisition of Twitter.

While these kinds of lawsuits are usually dismissed as "nuisance suits" by business groups, "this case looks different," as Jessica Erickson, a professor at University of Richmond School of Law, told Reuters.

Tornetta, who runs an aftermarket car parts company and used to drum for a now-defunct metal band called "Dawn of Correction," maintains that Tesla's board had undisclosed conflicts of interest.

His suit alleges that Musk came up with his own pay plan with the help of his former divorce attorney Todd Maron, who also happened to serve as Tesla's general counsel until late 2018, CNBC reported back in March.

Musk also allegedly set the bar too low for hitting the 12 performance targets laid out in the 2018 stock grant plan, which allows Musk to buy one percent of Tesla's stock at a significant discount for each target met.

So far, Tesla has hit 11 out of the 12 targets, according to Reuters, but Tornetta's lawyers argue that three of those goals had already been met when shareholders met to vote on the pay package, something they say wasn't properly disclosed.

Musk and his legal team maintain that the targets kept Musk on track during a difficult time, and eventually led to a massive rise in stock price.

"The plan designed and approved by the board was not a typical pay package intended to compensate the ordinary executive for overseeing the day-to-day operations of a mature company," Musk's attorney wrote in a pre-trial brief, as quoted by Bloomberg, arguing that the situation called for an extraordinary pay package.

For now, all we can do is wait and see whether the lawsuit will bang heads in court.

READ MORE: Elon Musk braces for $56 billion battle with heavy metal drummer [Reuters]

More on Tesla: Tesla Issues Software Patch So That Its Cars Don't Lose Power Steering on Potholes


A Tesla Executive Under Investigation Is Now Working at SpaceX for Some Reason

A ranking Tesla employee is taking a role as vice president of SpaceX's Starship production — even though he's under internal investigation.

Making Moves

Hiring an employee for a ranking position while they're under investigation at one of your other companies seems ill-advised, but then again, Elon Musk is far from an ordinary CEO.

That's on full display as SpaceX hires Tesla's Texas plant lieutenant Omead Afshar, who, according to sources close to the matter who spoke to Bloomberg, has been brought on as vice president of Starship production.

Over the summer, Afshar — reportedly a close confidante of Musk's — was, as the news site reported at the time, under internal investigation for a sketchy plan he allegedly had to buy difficult-to-source construction materials for Tesla. During the investigation, some of the executive's subordinates were fired. But Afshar himself seems to have had a golden, well, Starship.

And pickle ball! https://t.co/InqxFkip7y

— Omead Afshar (@omead) November 6, 2022

Shuffleboard

It remains unclear whether Afshar is still working at Tesla as well, or if he was shuffled over to SpaceX as a result of his investigation. Sources did, however, tell Bloomberg that he hasn't been seen at Tesla's Austin plant in weeks.

Whether he was moved from Tesla to SpaceX or is working at both companies, it wouldn't be the first time for either arrangement. Musk has shuffled Tesla employees over to SpaceX before, and has even sent some to Twitter in recent weeks. And as Bloomberg notes, another of his close consiglieres, Charles Kuehmann, is an executive at both companies.

To make this kind of hiring move would be weird enough in a regular context, but the fact that Musk is doing so while wreaking havoc over at his other new company makes it seem all the stranger.

More on Musk: MSN Ran a Story About Grimes and Elon Musk That's Completely Fake


Tesla Reportedly Canceling Solar Roof Installations Across the Country

According to reporting by Electrek, Tesla is pulling its solar roof program out of markets across the country, and solar employees are getting laid off.

The Sun Sets

Eager customers of Tesla's solar roof program have been left holding the bag as the EV automaker says it's nixing operations in numerous markets, Electrek reports.

The cancellations underscore the degree to which the program has never really taken off. By Electrek's estimates, Tesla only installed its solar roofs on around 300 houses during the second quarter of 2022 — an underwhelming figure, especially since CEO Elon Musk has claimed the company's energy division will become as large as its automotive one.

And now, some Tesla Solar customers have been receiving emails from the company telling them that their orders for solar panels are being canceled.

"Upon further review of your project, our team has determined that your home is in an area we no longer service," the emails read, as quoted by Electrek. "As we cannot complete your order, we have processed your cancellation."

Solar Scapegoat

Tesla tends to be opaque when it comes to its energy division, so it's unclear which specific markets got screwed over. Electrek says the reports it's received have come from customers "in major solar markets including the greater Los Angeles area, Northern California, Oregon, and Florida."

In addition, the outlet also reports that Tesla has laid off employees in the solar scheduling, planning, and design department, but just how many is unspecified.

Historically, Tesla's solar program — controversially acquired by buying the company SolarCity in 2016 — is the one that gets the short end of the stick when it comes to reining in the budget.

In 2019, Musk admitted in a pre-trial deposition: "If I did not take everyone off of solar and focus them on the Model 3 program to the detriment of solar, then Tesla would have gone bankrupt."

"So I took everyone from solar, and said: 'instead of working on solar, you need to work on the Model 3 program.' And as a result, solar suffered, as you would expect," he added.

Musk similarly admitted in 2022 that, for the year before, he had "shortchanged" Tesla's energy division in favor of pushing out more cars.

Considering that Musk allegedly bought SolarCity in part to bail out the cousins who owned it, maybe it's not too surprising that the CEO seems to have no qualms about gutting the division multiple times.

More on Tesla: Elon Musk Is Suddenly Selling Tesla Stock Like Crazy


AI Warning: Compassionless, world-changing A.I. already here - You WON'T see them coming – Express.co.uk

Fear surrounding artificial intelligence has remained prevalent as society has witnessed the massive leaps the technology sector has made in recent years. Shadow Robot Company director Rich Walker explained that it is not evil A.I. that people should necessarily be afraid of, but rather the companies it masquerades behind. During an interview with Express.co.uk, Mr Walker explained that advanced A.I. with nefarious intent for mankind would not openly show itself.

He noted that companies that actively do harm to society and the people within it would be more appealing to an A.I. that had goals of destroying humanity.

He said: "There is the kind of standard fear of A.I. that comes from science fiction.

"Which is either the humanoid robot, like from the Terminator, that takes over and tries to destroy humanity.

"Or it is the cold compassionless machine that changes the world around it in its own image, and there is no space for humans in there.


"There is actually quite a good argument that there are cold compassionless machines that change the world around us in their own image.

"They are called corporations.

"We shouldn't necessarily worry about A.I. as something that will come along and change everything.

"We already have these organisations that will do that.

"They operate outside of national rules of law and societal codes of conduct.

"So, A.I. is not the bit that makes that happen; the bits that make that happen are already in place."

He later added: "I guess you could say that a company that has known for 30 years that climate change was inevitable and has systematically defunded research into climate change and funded research that shows climate change isn't happening is the kind of organisation I am thinking of.

"That is the kind of behaviour you have to say: 'That is trying to destroy humanity.'"


"They would argue no, they are not trying to do that, but the fact would be the effect of what you are doing is trying to destroy humanity.

"If you wanted to have an Artificial Intelligence that was a bad guy, a large corporation that profits from fossil fuels and systematically hid the information that fossil fuels were bad for the planet, that would be an A.I. bad guy in my book."

The Shadow Robot Company has directed its focus toward creating complex, dexterous robot hands that mimic human hands.

The robotics company uses its tactile Telerobot technology to demonstrate how A.I. programmes can be used alongside human interaction to create complex robotic relationships.


Microsoft shares its vision to become AI industry-leader – TNW

Microsoft last week filed its annual report with the SEC, and with it a new vision that emphasizes AI. The documents state the company will no longer be focused on mobile, but instead on implementing AI solutions.

In the documents, under the heading "Our Vision," the company wrote:

Our strategy is to build best-in-class platforms and productivity services for an intelligent cloud and an intelligent edge infused with artificial intelligence (AI).

This should come as a surprise to no one: Microsoft has gobbled up AI companies like Pac-Man eating dots, and is using them to do things like teach computers to be amazing at Ms. Pac-Man.

Microsoft started off 2017 by purchasing an AI company, which was added to its already robust machine-learning research team.

The company's Microsoft Research AI (MSR AI) group has been doing some impressive work, including helping the blind better understand their surroundings. And Microsoft makes hardware now, in the form of an AI co-processor chip for HoloLens 2.0.

Microsoft isn't the only major legacy tech company fully prepared to shift to an AI-driven vision. IBM is famously flaunting its ride on the AI hype train with its cloud-based Watson, and Apple, of course, has Siri.

There's no need to ask if AI is the future or not, because it is. It's only a matter of time now before Cortana gets appointed CEO (sorry, Satya!).



What have you learned about machine learning and AI? – The Register

Reg Events Machine learning, AI, and robotics are escaping from the lab and popping up in businesses, and we want to know how you're putting them to work.

The call for papers for M3 is open now, and we want to hear how real-world organisations like yours are using artificial intelligence, machine-learning algorithms, deep learning, and predictive analytics to solve real-world business and technology problems.

Whether you're building systems to help researchers make sense of health data, or financial analysts make sense of markets, we want to know about it. We also want to know how you're using the technology to manage, support, and even help your customers - or at least help other parts of your company help them.

We want to hear how you're using the available algorithms, frameworks, cognitive systems and UX, and the workarounds you've put in place to make them work for you. Of course, we would also love to hear how you're putting together and managing the hardware and networks that make these systems possible.

Likewise, tell us if you've put neural networks to work, or done more than play with predictive analytics or parallel programming.

And while there are plenty of opinion leaders who will hold forth on the security, privacy and ethical implications of computers and robots, we'd like to hear how you deal with these challenges in practice.

So, send us your proposals for conference sessions and workshops that illustrate the rapid advances in this field - because you're the people who will ultimately decide whether it succeeds or fails.

The conference will take place from October 9 to 11, at 30 Euston Square, Central London. This is a stunningly comfortable venue in which to ponder some of the most intellectually, and ethically, challenging issues facing the tech community today, and we really want you to join us.

Full details here.

AI-guided ultrasound developer Caption Health raises $53M for further rollout – FierceBiotech

Caption Health, maker of an artificial-intelligence-guided ultrasound platform capable of instructing clinicians on obtaining a clearer picture of the heart in motion, has raised $53 million in new financing to expand the commercial reach of its system.

The proceeds will also help fund the development of its AI platform in additional care areas.

"Caption Health is working towards a future where looking inside the body becomes as routine as a blood pressure cuff measurement," said Armen Vidian, a partner at DCVC, which led the startup's series B funding round.

"Simplifying ultrasound is critical to providing fast, effective care," Vidian said. "By making ultrasound accessible to non-specialists with AI-guided, FDA-cleared products, Caption AI brings the benefits of medical imaging to more caregivers in more settings."

The round also drew funding from Caption Health's previous backer, Khosla Ventures, as well as new money from Atlantic Bridge and cardiovascular device maker Edwards Lifesciences.

Caption Health's ultrasound AI was first cleared by the FDA this past February to walk medical professionals through the steps of the common cardiac exam used to diagnose different heart diseases. The software also analyzes the 2D ultrasound image in real time and automatically records the best video clips for later analysis, while calculating measures of heart function.

An updated version of the company's algorithms and guidance later received an expedited clearance from the agency as a tool for front-line hospital staff to help evaluate COVID-19 patients for cardiovascular complications.

Now, Caption Health has accelerated its plans to bring its AI to market later this summer, and the company says it is already in use at 11 U.S. medical centers, integrated with the Terason uSmart 3200T Plus portable ultrasound system.

"We are truly grateful to our investors and to our early-adopter clinicians, who have believed in us from the beginning," said Caption Health CEO Charles Cadieu. "This capital will enable us to scale our collaborations with leading research institutions, regional health systems and other providers by making ultrasound available where and when it is needed - across departments, inside and outside the hospital."

Artificial Intelligence: Bias and the case for industry – Luxembourg Times

Over the past many decades, science fiction has shown us scenarios where AI has surpassed human intelligence and overpowered humanity. As we near a tipping point where AI could feature in every part of our lives - from logistics to healthcare, human resources to civil security - we take a look at opportunities and ethical questions in AI. In this article, we speak to AI expert Prof Dr Patrick Glauner about AI bias, as well as the impact good and bad AI could have on industry and workers.

What about our jobs? Can we trust AI to do what it is meant to, and without bias? What will society look like once we are surrounded by AI? Who will decide how far AI should go? These are some of the most frequently asked questions when it comes to AI. They were also among the questions participants were encouraged to delve into at the FNR's science-meets-science-fiction event House of Frankenstein - which also sparked the question of what it means to be human in the age of AI.

It's not who has the best algorithm that wins. It's who has the most data.

"For about the last decade, the Big Data paradigm that has dominated research in machine learning can be summarized as follows: it's not who has the best algorithm that wins. It's who has the most data," explains Dr Patrick Glauner, who in February 2020 starts a full professorship in AI at the Deggendorf Institute of Technology (Germany), at the young age of 30.

In machine learning and statistics, samples of the population are typically used to gain insights or derive generalisations about the population as a whole. Having a biased data set means that it is not representative of the population, and Glauner explains that biases appear in nearly every data set.

The machine learning models trained on those data sets subsequently tend to make biased decisions, too.
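A toy sketch can make the mechanism concrete (this example is entirely hypothetical and not from the article): a single decision rule fitted to a training set dominated by one group fits that group well while systematically misclassifying the underrepresented one, even though overall accuracy looks fine.

```python
import random

random.seed(0)

def sample(group, n):
    # Hypothetical setup: the "true" relationship differs slightly between
    # groups - the label is 1 iff the feature exceeds a group-specific threshold.
    thresh = 0.5 if group == "A" else 0.3
    return [(x, group, 1 if x > thresh else 0)
            for x in (random.random() for _ in range(n))]

# Biased training set: 95% of examples come from group A.
train = sample("A", 950) + sample("B", 50)

# "Train" a single global threshold by minimising training error.
best_t, best_err = 0.0, float("inf")
for t in [i / 100 for i in range(101)]:
    err = sum((x > t) != bool(y) for x, _, y in train)
    if err < best_err:
        best_t, best_err = t, err

# Evaluate on a balanced test set: the model inherits the skew.
test = sample("A", 1000) + sample("B", 1000)

def error_rate(group):
    pts = [(x, y) for x, g, y in test if g == group]
    return sum((x > best_t) != bool(y) for x, y in pts) / len(pts)

print(f"threshold={best_t:.2f}  err(A)={error_rate('A'):.3f}  err(B)={error_rate('B'):.3f}")
```

The learned threshold lands near group A's optimum, so group A's error is close to zero while group B's is substantially higher - the model is "accurate" on the data it saw, and biased against the group it barely saw.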

Cue facial recognition, which can for example unlock your phone by scanning your face. However, this technology has turned out to have ethnic bias, with personal stories and studies pointing to the technology failing to distinguish between faces of Asian ethnicity. Apps that are meant to predict criminality also tend to be biased against people with darker skin. Why? Because the software was developed based on, for example, Caucasian men, rather than a representative sample of populations.

Then there is the case of Tay, an AI chatbot which immediately turned racist when unleashed on and exposed to Twitter. This shows that AI currently does not understand what it computes - which is why the term "intelligence" is criticised by part of the AI research community itself. It is crucial to train AI on data sets, but the risk here is that AI makes decisions about something it does not understand at all - decisions which are then applied by humans without knowing how the AI reached them. This is referred to as the explainability problem, or the "black box" effect.

Other concerns are the power that comes with this technology and where to put the limits on how it is used. China, for example, has rolled out facial recognition technology that can be used to identify protesters. And not just that: a city in China is currently apologising for using facial recognition to shame citizens seen wearing their pyjamas in public.

While the EU has drafted ethics guidelines for trustworthy AI, and the CEO of Microsoft has called for global guidelines, ethical guidelines for government use of such technology are yet to be agreed on and implemented. The use of armed drones in warfare is also a concern.

Bias: an old problem on a larger scale

Prof Dr Glauner explains that bias in data is far from new, and that there is a risk that known issues will be carried over to AI if not properly addressed.

"Biases have always been present in the field of statistics. I am aware of statistics papers from 1976 and 1979 that started discussing biases. In my opinion, in the Big Data era, we tend to repeat the very same mistakes that have been made in statistics for a long time, but at a much larger scale."

Glauner explains that the machine learning research community has recently started to look more actively into the problem of biased data sets. However, he stresses that there needs to be greater awareness of this issue amongst students studying machine learning, as well as amongst professors.

"In my view, it will be almost impossible to entirely get rid of biases in data sets, but that approach would at least be a great start."

Glauner also explains that it is imperative to close the gap between AI in academia and industry, emphasising that he will ensure that the students he teaches under his professorship learn early on how to solve real-world problems.

AI and jobs

AI has both positive and negative implications for the working world. Some tasks will inevitably be handed over to AI, while others will continue to require humans; there will also be a mix. The Luxembourg Government's "Artificial Intelligence: a strategic vision for Luxembourg" puts the focus on how AI can improve our professional lives by automating time-consuming data-related tasks, helping us use our time more efficiently in the areas that require social relations, emotional intelligence and cultural sensitivity.

Prof Dr Glauner, whose AI background is rooted in industry, sees AI having a significant impact on the jobs market, both for businesses and workers. Not everyone who loses their job to AI will be able to transform into an AI developer. He also points out that the job market has always undergone change.

For example, look back 100 years: most of the jobs from that time do not exist anymore. However, those changes are now happening more frequently. As a consequence, employees will be forced to potentially undergo retraining multiple times in their career.

For instance, China has become a world-leading country in AI innovation. Chinese companies are using that advantage to rapidly advance their competitiveness in a large number of other industries. If Western companies do not adapt to that reality, they will probably be out of business in the foreseeable future.

AI is the next step of the industrial revolution

Even though those changes are dramatic, we cannot stop them. AI is the next step of the industrial revolution.

While the previous steps addressed the automation of repetitive physical tasks, AI allows us to automate manual decision-making - a discipline in which humans naturally excel. AI's ability to do so, too, will significantly impact nearly every industry. From a business perspective, this will result in more efficient business processes and new services and products that improve people's lives.

Prof Dr Glauner's PhD project is a concrete example of how AI can be used to improve output and customer experience. Funded by an Industrial Fellowship grant (AFR-PPP at the time) - a collaboration between public research and industry - Glauner developed AI algorithms that detect non-technical losses (NTL) in power grids, which are critical infrastructure assets.

NTLs include, but are not limited to, electricity theft, broken or malfunctioning meters and arranged false meter readings. In emerging markets, NTLs are a prime concern and often account for up to 40% of the total electricity distributed.

"The annual worldwide costs for utilities due to NTL are estimated to be around USD 100 billion. Reducing NTL in order to increase reliability, revenue, and profit of power grids is therefore of vital interest to utilities and authorities. My thesis has produced appreciable results on real-world big data sets of millions of customers."
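To give a feel for what NTL detection involves, here is a minimal first-pass heuristic (an assumed illustration only, not Glauner's published method, and the meter data is made up): flag meters whose recent consumption falls far below their own historical baseline, a pattern consistent with tampering or a broken meter.

```python
# Hypothetical first-pass NTL heuristic: a meter becomes a suspect when its
# recent average consumption drops below a fraction of its historical baseline.
def ntl_suspects(readings, drop_ratio=0.5, recent=3):
    """readings: {meter_id: [monthly kWh, oldest..newest]}"""
    suspects = []
    for meter, usage in readings.items():
        history, latest = usage[:-recent], usage[-recent:]
        baseline = sum(history) / len(history)
        if baseline > 0 and sum(latest) / len(latest) < drop_ratio * baseline:
            suspects.append(meter)
    return suspects

meters = {
    "A": [300, 310, 295, 305, 300, 298, 302],  # stable usage
    "B": [400, 390, 410, 405, 120, 110, 100],  # sudden sustained drop
}
print(ntl_suspects(meters))  # ['B']
```

In practice such a rule would only shortlist candidates for inspection; production systems combine many features (seasonality, neighbourhood comparisons, inspection history) in a learned model.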

AI and new industries

The opportunities AI presents for existing industries are manifold, if done right, and AI could pave the way for completely new industries as well: space exploration and space mining would hardly be developing so fast without AI. For example, there is a communication delay between the Earth and the Moon, which makes controlling an unmanned vehicle or a machine from Earth challenging, to say the least. However, if the machine were able to navigate on its own and make the most basic of decisions, this delay would no longer be much of an obstacle. Find out more about this FNR-funded project.

Improve, not replace

AI undoubtedly represents huge opportunities for industry in particular, and has the potential to improve performance and output, as well as worker and customer satisfaction, to name only a few. However, it is imperative that the bodies in charge put ethical considerations and the good of society at the heart of their strategies. A balance must be found: the goal has to be to improve society and the lives of the people within it, not to replace them. The same goes for bias in AI: after all, what good can come from algorithms that build their assumptions on non-representative data?

Save the bees, save the world: How ApisProtect uses AI and IoT to protect hives – The Next Web

Not to be an alarmist, but if bees go extinct it's likely that coffee would become a rare and expensive luxury commodity. And I don't want to live in that world.

Luckily, ApisProtect today announced its entry into the US market where it will provide its unique AI-powered hive monitoring system to beekeepers and farmers.

If you're unfamiliar, ApisProtect is a European startup that uses simple proprietary devices and a unique stack of AI and software to, essentially, give beekeepers a spy on the inside.

This means keepers can get ahead of health issues that otherwise could remain hidden. According to ApisProtect:

Beekeepers often rely on costly, time-consuming manual hive checks to understand their operation. However, ApisProtect research shows that 80% of manual hive inspections do not require any action on hives but disrupt the bees and risk the loss of a queen.

With ApisProtect, commercial beekeepers can now safely identify and respond to disease, pests, and other hive problems faster than ever before, thereby increasing colony size and preventing colony loss. ApisProtect lets beekeepers know immediately when specific hives need attention within their operation, as well as which hives are most productive.

Quick take: While specific projections may vary, it's safe to say that honeybees are endangered to the point where solutions like this should be considered environmental safety efforts. 2020 was a crappy year for beekeepers, coming off of a crappy decade for bees.

This is exactly the kind of thing AI is best suited for. Bees can adapt to just about anything but murder hornets and humans invading their spaces, and it turns out ApisProtect safeguards hives against both.

Pádraig Whelan, Co-Founder and Chief Science Officer of ApisProtect, said in a press release:

ApisProtect technology could be a useful tool for the detection of murder hornets as a potential new threat to bee hives this pollination season.

Murder hornets can wipe out a bee hive in a matter of hours. Our platform can pinpoint the date at which a hive dies and distinguish whether this has been gradual or sudden. If a hive is healthy one day and dead the next, the beekeeper is alerted rather than having to wait for the next scheduled manual inspection.

The beekeeper can then prioritize visiting the hive and identify the tell-tale signs of a hornet attack. Precision beekeeping ensures the beekeeper can quickly take preventative measures to ensure the safety of their hives and others in the area.
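The gradual-versus-sudden distinction Whelan describes can be sketched in a few lines. This is a purely hypothetical toy (the function, thresholds, and readings are all invented for illustration, not ApisProtect's actual system): estimate the day a hive died from daily activity readings, then classify the collapse by how healthy the hive looked just beforehand.

```python
# Hypothetical toy: estimate when a hive died from daily activity readings
# and classify the collapse as sudden (e.g. hornet attack) or gradual (e.g. disease).
def diagnose(activity, dead_level=5.0, window=7):
    """activity: list of (day, reading); readings near zero mean a dead hive."""
    death_day = next((day for day, a in activity if a < dead_level), None)
    if death_day is None:
        return "alive", None
    # A hive that was healthy right up to its death suggests a sudden event;
    # a long decline over the preceding week suggests disease or starvation.
    before = [a for day, a in activity if death_day - window <= day < death_day]
    if not before:
        return "unknown", death_day
    healthy = sum(before) / len(before) > 10 * dead_level
    return ("sudden" if healthy else "gradual"), death_day

sudden = [(d, 80.0) for d in range(1, 10)] + [(10, 1.0), (11, 0.5)]
gradual = [(d, max(80.0 - 10 * d, 0.0)) for d in range(1, 12)]
print(diagnose(sudden))   # ('sudden', 10)
print(diagnose(gradual))  # ('gradual', 8)
```

The point is the alerting logic, not the sensor model: once a death day is pinned down, the shape of the preceding readings tells the beekeeper which hives to visit first.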

The fact of the matter is that whether it's murder hornets, climate change, or disease that's causing the problem, we need to fix it. Bees are incredibly important to the future survival of humans.

While the threat may have been blown out of proportion - it's unlikely we'll go extinct just because there are no more bees - the loss of our honey-making friends would be a bona fide catastrophe.

According to the Natural Resources Defense Council:

If honeybees did disappear for good, humans would probably not go extinct (at least not solely for that reason). But our diets would still suffer tremendously. The variety of foods available would diminish, and the cost of certain products would surge.

The California Almond Board, for example, has been campaigning to save bees for years. Without bees and their ilk, the group says, almonds simply wouldn't exist.

We'd still have coffee without bees, but it would become expensive and rare. The coffee flower is only open for pollination for three or four days. If no insect happens by in that short window, the plant won't be pollinated.

For more information, check out ApisProtect's website here.

Published December 10, 2020 20:28 UTC

The Edge AI and Vision Alliance Announces the 2021 Vision Tank Start-Up Competition Winners at the Embedded Vision Summit – PRNewswire

SANTA CLARA, Calif., May 28, 2021 /PRNewswire/ -- The Edge AI and Vision Alliance today announced the two winners of this year's Vision Tank Start-Up Competition. The annual competition showcases the best new ventures developing visual AI and computer vision products. During the final round of the competition, five finalists pitched their companies and products to a panel of judges in front of a live audience. The judges picked the winner of the Judges' Award, while attendees chose the winner of the Audience Choice Award.

JUDGES' AWARD: Retrocausal, an industry leader in systems that help manufacturing workers avoid assembly mistakes, be more efficient at their daily jobs and improve the processes they drive: http://www.retrocausal.ai

AUDIENCE CHOICE AWARD: Opteran Technologies, a brain biomimicry spin-out from the University of Sheffield, leveraging over eight years of research and 600 million years of evolution to understand how insect brains navigate and enable a new dawn for autonomy in machines: opteran.com

"We are seeing an amazing number and variety of new ventures using computer vision and visual AI to power products and solutions across all industries," said Jeff Bier, Founder of the Edge AI and Vision Alliance and General Chair of the Embedded Vision Summit. "I'm delighted to congratulate Retrocausal and Opteran Technologies for their progress towards bringing truly innovative technologies and solutions to fruition."

As winner of the Vision Tank Judges' Award, Retrocausal receives a $5,000 cash prize, and both winners receive a one-year membership in the Edge AI and Vision Alliance. In addition, the companies get one-on-one advice from the judges, and introductions to potential investors, customers, employees and suppliers.

Now celebrating its tenth year, the Embedded Vision Summit was held online, May 25-28. The conference is focused exclusively on practical, deployable computer vision and AI and attracts a global audience of professionals developing vision-enabled products.

About the Edge AI and Vision Alliance

The Edge AI and Vision Alliance is a worldwide industry partnership bringing together technology providers and end product companies who are creating and enabling innovative and practical applications for computer vision and edge AI. Membership is open to any company that supplies or uses technology for edge AI and vision systems and applications. For more information, visit edge-ai-vision.com.

MEDIA CONTACT: Brianna Crowl, Mob: +1 (760) 687-5110

SOURCE Edge AI and Vision Alliance

The US, China and the AI arms race: Cutting through the hype – CNET

Artificial intelligence -- which encompasses everything from service robots to medical diagnostic tools to your Alexa speaker -- is a fast-growing field that is playing an increasingly critical role in many aspects of our lives. A country's AI prowess has major implications for how its citizens live and work -- and for its economic and military strength moving into the future.

With so much at stake, the narrative of an AI "arms race" between the US and China has been brewing for years. Dramatic headlines suggest that China is poised to take the lead in AI research and use, due to its national plan for AI domination and the billions of dollars the government has invested in the field, compared with the US' focus on private-sector development.

But the reality is that at least until the past year or so, the two nations have been largely interdependent when it comes to this technology. It's an area that has drawn attention and investment from major tech heavy hitters on both sides of the Pacific, including Apple, Google and Facebook in the US and SenseTime, Megvii and YITU Technology in China.

Generation China is a CNET series that looks at the areas of technology where the country is looking to take a leadership position.

"Narratives of an 'arms race' are overblown and poor analogies for what is actually going on in the AI space," said Jeffrey Ding, the China lead for the Center for the Governance of AI at the University of Oxford's Future of Humanity Institute. When you look at factors like research, talent and company alliances, you'll find that the US and Chinese AI ecosystems are still very entwined, Ding added.

But the combination of political tensions and the rapid spread of COVID-19 throughout both nations is fueling more of a separation, which will have implications for both advances in the technology and the world's power dynamics for years to come.

"These new technologies will be game-changers in the next three to five years," said Georg Stieler, managing director of Stieler Enterprise Management Consulting China. "The people who built them and control them will also control parts of the world. You cannot ignore it."

You can trace China's ramp-up in AI interest back to a few key moments, starting four years ago.

The first was in March 2016, when AlphaGo -- a machine-learning system built by Google's DeepMind that uses algorithms and reinforcement learning to train on massive datasets and predict outcomes -- beat the human Go world champion Lee Sedol. This was broadcast throughout China and sparked a lot of interest -- both highlighting how quickly the technology was advancing, and suggesting that because Go involves war-like strategies and tactics, AI could potentially be useful for decision-making around warfare.

The second moment came seven months later, when President Barack Obama's administration released three reports on preparing for a future with AI, laying out a national strategic plan and describing the potential economic impacts (all PDFs). Some Chinese policymakers took those reports as a sign that the US was further ahead in its AI strategy than expected.

This culminated in July 2017, when the Chinese government under President Xi Jinping released a development plan for the nation to become the world leader in AI by 2030, including investing billions of dollars in AI startups and research parks.

In 2016, professional Go player Lee Sedol lost a five-game match against Google's AI program AlphaGo.

"China has observed how the IT industry originates from the US and exerts soft influence across the world through various Silicon Valley innovations," said Lian Jye Su, principal analyst at global tech market advisory firm ABI Research. "As an economy built solely on its manufacturing capabilities, China is eager to find a way to diversify its economy and provide more innovative ways to showcase its strengths to the world. AI is a good way to do it."

Despite the competition, the two nations have long worked together. China has masses of data and far more lax regulations around using it, so it can often implement AI trials faster -- but the nation still largely relies on US semiconductors and open source software to power AI and machine learning algorithms.

And while the US has the edge when it comes to quality research, universities and engineering talent, top AI programs at schools like Stanford and MIT attract many Chinese students, who then often go on to work for Google, Microsoft, Apple and Facebook -- all of which have spent the last few years acquiring startups to bolster their AI work.

China's fears about a grand US AI plan didn't really come to fruition. In February 2019, US President Donald Trump released an American AI Initiative executive order, calling for heads of federal agencies to prioritize AI research and development in 2020 budgets. It didn't provide any new funding to support those measures, however, or many details on how to implement those plans. And not much else has happened at the federal level since then.

Meanwhile, China plowed on, with AI companies like SenseTime, Megvii and YITU Technology raising billions. But investments in AI in China dropped in 2019, as the US-China trade war escalated and hurt investor confidence in China, Su said. Then, in January, the Trump administration made it harder for US companies to export certain types of AI software in an effort to limit Chinese access to American technology.

Just a couple weeks later, Chinese state media reported the first known death from an illness that would become known as COVID-19.

In the midst of the coronavirus pandemic, China has turned to some of its AI and big data tools in attempts to ward off the virus, including contact tracing, diagnostic tools and drones to enforce social distancing. Not all of it, however, is as it seems.

"There was a lot of propaganda -- in February, I saw people sharing on Twitter and LinkedIn stories about drones flying along high rises, and measuring the temperature of people standing at the window, which was complete bollocks," Stieler said. "The reality is more like when you want to enter an office building in Shanghai, your temperature is taken."

A staff member introduces an AI digital infrared thermometer at a building in Beijing in March.

The US and other nations are grappling with the same technologies -- and the privacy, security and surveillance concerns that come along with them -- as they look to contain the global pandemic, said Elsa B. Kania, adjunct fellow with the Center for a New American Security's Technology and National Security Program, focused on Chinese defense innovation and emerging technologies.

"The ways in which China has been leveraging AI to fight the coronavirus are in various respects inspiring and alarming," Kania said. "It'll be important in the United States as we struggle with these challenges ourselves to look to and learn from that model, both in terms of what we want to emulate and what we want to avoid."

The pandemic may be a turning point in terms of the US recognizing the risks of interdependence with China, Kania said. The immediate impact may be in sectors like pharmaceuticals and medical equipment manufacturing. But it will eventually influence AI, as a technology that cuts across so many sectors and applications.

Despite the economic impacts of the virus, global AI investments are forecast to grow from $22.6 billion in 2019 to $25 billion in 2020, Su said. The bigger consequence may be on speeding the process of decoupling between the US and China, in terms of AI and everything else.

The US still has advantages in areas like semiconductors and AI chips. But in the midst of the trade war, the Chinese government is reducing its reliance on foreign technologies, developing domestic startups and adopting more open-source solutions, Su said. Cloud AI giants like Alibaba, for example, are using open-source computing models to develop their own data center chips. Chinese chipset startups like Cambricon Technologies, Horizon Robotics and Suiyuan Technology have also entered the market in recent years and garnered lots of funding.

But full separation isn't on the horizon anytime soon. One of the problems with referring to all of this as an AI arms race is that so many of the basic platforms, algorithms and even data sources are open-source, Kania said. The vast majority of the AI developers in China use Google TensorFlow or Facebook PyTorch, Stieler added -- and there's little incentive to join domestic options that lack the same networks.

The US remains the world's AI superpower for now, Su and Ding said. But ultimately, the trade war may do more harm to American AI-related companies than expected, Kania said.

"My main concern about some of these policy measures and restrictions has been that they don't necessarily consider the second-order effects, including the collateral damage to American companies, as well as the ways in which this may lessen US leverage or create much more separate or fragmented ecosystems," Kania said. "Imposing pain on Chinese companies can be disruptive, but in ways that can in the long term perhaps accelerate these investments and developments within China."

Still, "'arms race' is not the best metaphor," Kania added. "It's clear that there is geopolitical competition between the US and China, and our competition extends to these emerging technologies including artificial intelligence that are seen as highly consequential to the futures of our societies' economies and militaries."

See the original post here:

The US, China and the AI arms race: Cutting through the hype - CNET

Why EU will find it difficult to legislate on AI – EUobserver

Artificial Intelligence (AI) - especially machine learning - is a technology that is spreading rapidly around the world.

AI will become a standard tool to help steer cars, improve medical care and automate decision-making within public authorities. But although intelligent technologies are drivers of innovation and growth, their global proliferation is already causing serious harm in its wake.

Last month, a leaked white paper showed that the European Union is considering putting a temporary ban on facial recognition technologies in public spaces until the potential risks are better understood.

But many AI technologies in addition to facial recognition warrant more concern, especially from European policymakers.

More and more experts have scrutinised the threat that 'deep fake' technologies may pose to democracy by enabling artificial disinformation; or consider the Apple Card, which granted much higher credit limits to husbands than to their wives, even though they share assets.

Global companies, governments, and international organisations have reacted to these worrying trends by creating AI ethics boards, charters, committees, guidelines, etcetera, all to address the problems this technology presents - and Europe is no exception.

The European Commission set up a High Level Expert Group on AI to draft guidelines on ethical AI.

Unfortunately, an ethical debate alone will not help to remedy the destruction caused by the rapid spread of AI into diverse facets of life.

The latest example of this shortcoming is Microsoft, one of the largest producers of AI-driven services in the world.

Microsoft, which has often tried to set itself apart from its Big Tech counterparts as a moral leader, has recently taken heat for its substantial investment in facial recognition software that is used for surveillance purposes.

"AnyVision" is allegedly being used by Israel to track Palestinians in the West Bank. Although investing in this technology goes directly against Microsoft's own declared ethical principles on facial recognition, there is no redress.

It goes to show that governing AI - especially exported technologies or those deployed across borders - through ethical principles does not work.

The case with Microsoft is only a drop in the bucket.

Numerous cases will continue to pop up, or be uncovered, in the coming years in all corners of the globe (given a functioning and free press, of course).

This problem is especially prominent with facial recognition software, as the European debate reflects. Developed in Big Tech, facial recognition products have been procured by government agencies such as customs and migration officers, police officers, security forces, the military, and more.

This is true for many regions of the world: the US, the UK, and several states in Africa and Asia, among others.

Promising more effective and accurate methods to keep the peace, law enforcement agencies have adopted the use of AI to super-charge their capabilities.

This comes with specific dangers, though, as shown in numerous reports from advocacy groups and watchdogs finding that the technologies are flawed and deliver disproportionately more false matches for women and for people with darker skin tones.

If law enforcement agencies know that these technologies have the potential to be more harmful to subjects who are more often vulnerable and marginalised, then there should be adequate standards for implementing facial recognition in such sensitive areas.

Ethical guidelines, whether they come from Big Tech or from international stakeholders, are not sufficient to safeguard citizens from invasive, biased, or harmful practices of police or security forces.

Although these problems have surrounded AI technologies for years, this has not yet resulted in successful regulation to make AI "good" or "ethical" - terms that mean well but are incredibly hard to define, especially at an international level.

This is why, even though actors from the private sector, government, academia, and civil society have all been calling for ethical guidelines in AI development, these discussions remain vague, open to interpretation, non-universal, and, most importantly, unenforceable.

In order to stop the faster-is-better paradigm of AI development and remedy some of the societal harm already caused, we need to establish rules for the use of AI that are reliable and enforceable.

And arguments founded in ethics are not strong enough to do so; ethical principles fail to address these harms in a concrete way.

As long as we lack rules that work, we should at least use guidelines that already exist to protect vulnerable societies to the best of our abilities. This is where the international human rights legal framework could be instrumental.

We should be discussing these undue harms as violations of human rights, utilising international legal frameworks and language that have far-reaching consensus across different nations and cultural contexts, are grounded in consistent rhetoric, and are, in theory, enforceable.

AI development needs to promote and respect human rights of individuals everywhere, not continue to harm society at a growing pace and scale.

There should be baseline standards in AI technologies, which are compliant with human rights.

Documents like the Universal Declaration of Human Rights and the UN Guiding Principles, which steer private-sector behaviour in human-rights-compliant ways, need to set the bar internationally.

This is where the EU could lead by example.

By refocusing on these existing conventions and principles, Microsoft's investment in AnyVision, for example, would be seen as not only a direct violation of its internal principles, but also as a violation of the UN Guiding Principles, forcing the international community to scrutinise the company's business activities more deeply and systematically, ideally leading to redress.

Faster is not better. Fast development and dissemination of AI systems has led to unprecedented and irreversible harm to individuals all over the world. AI does, indeed, provide huge potential to revolutionise and enhance products and services, and this potential should be harnessed in a way that benefits everyone.

More:

Why EU will find it difficult to legislate on AI - EUobserver

Security Think Tank: Ignore AI overheads at your peril – ComputerWeekly.com

Artificial intelligence (AI) and machine learning (ML) have huge potential in many areas of business, particularly where there is a need to automate repetitive tasks.

This is of strategic importance for the IT security sector. Growing organisations don't always have the capability to scale up back-office compliance and security teams at a rate that is proportional to their expansion, leaving the existing function to do more with less; automating wherever possible reduces these pressures without compromising compliance.

Of course, AI and ML solutions are not new. We are already witnessing the success of adopting AI to automate everyday tasks such as identifying potential fraud, authenticating users and removing user access. It is ideal for repetitive tasks such as pattern analysis and source-data filtering (for example, determining whether something is an incident and, if so, whether it is critical), so tasks such as reviewing blocked emails, websites and images no longer have to be performed manually (i.e. by individuals).

AI's ability to simultaneously identify multiple data points that are indicators of fraud, rather than potential incidents having to be investigated line by line, also helps hugely with pinpointing malicious behaviour.

Predicting events before they occur is harder, but ML can help enterprises stay ahead of potential threats: existing datasets, past outcomes and insight from security breaches at similar organisations all contribute to a holistic overview of when the next attack may occur. Fraud-management solutions, security incident and event monitoring (SIEM), network traffic detection and endpoint detection all make use of learning algorithms to identify suspicious activity (based on previous usage data and shared pattern recognition), establishing normal patterns of use and flagging outliers as potentially posing a risk to the organisation.
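The baseline-and-outlier approach described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the data and the three-sigma threshold are hypothetical.

```python
# Minimal sketch of baseline learning and outlier flagging:
# learn "normal" activity from historical counts, then flag
# observations that deviate far from that baseline.
from statistics import mean, stdev

# Hourly login-failure counts observed during a normal week (hypothetical).
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 6, 4, 5]

mu, sigma = mean(baseline), stdev(baseline)

def is_suspicious(count, threshold=3.0):
    """Flag any hour whose failure count is more than `threshold`
    standard deviations above the learned baseline."""
    return (count - mu) / sigma > threshold

print(is_suspicious(5))    # within the normal range -> False
print(is_suspicious(40))   # far above the baseline -> True
```

Production systems use far richer features and models, but the principle is the same: the "normal" profile is learned from past data, and risk is expressed as distance from it.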

This capability is also critical in counteracting cyber attacks. Rather than manually trawling through a vast number of log files after an event has occurred, known intrusion methods can be identified in real time and mitigating action taken before much of the damage can occur.
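Matching known intrusion methods in real time, rather than after the fact, amounts to checking each incoming log line against a library of signatures. The sketch below is illustrative only; the signature names, patterns and log line are invented.

```python
# Illustrative real-time signature matching: each incoming log line is
# checked against known intrusion patterns as it arrives, instead of
# trawling through log files after an event has occurred.
import re

KNOWN_SIGNATURES = {
    "sql_injection": re.compile(r"(?i)union\s+select|or\s+1=1"),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def match_intrusions(log_line):
    """Return the names of any known intrusion methods seen in the line."""
    return [name for name, pattern in KNOWN_SIGNATURES.items()
            if pattern.search(log_line)]

print(match_intrusions("GET /index.php?id=1 UNION SELECT password FROM users"))
# -> ['sql_injection']
```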

To date, the main focus for the use of AI has been on the more technical security elements such as detection, incident management and other repeatable tasks. But these are early days, and there are many other areas that would benefit from its adoption. Governance, risk and compliance (GRC), for example, requires security professionals to crunch large amounts of data to spot risk trends and understand where non-compliance is causing incidents.

Early discussions around AI saw it promise to revolutionise information security operations and reduce the amount of work that would need to be performed manually.

As outlined above, it has undoubtedly enabled new areas to be explored, while detecting attacks faster than any human manually looking through data. However, it is not a silver bullet and it comes with overheads, which are often forgotten.

It used to be that organisations installed logging systems that captured critical audit trails; the challenge was in finding the time to look at the logs generated, a task that is now undertaken by AI scripts. However, while it's easy enough to connect an application to an AI tool so that it can scan for suspicious activity, the AI system must first be set up so that it understands the format of the logs and what qualifies as an event that needs flagging. In other words, to be effective, it needs training for the specific needs of each enterprise.
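The setup work described above - teaching the tool the log format and the flagging policy - can be pictured with a toy example. The log format, severity policy and sample line here are all assumptions for illustration.

```python
# Hypothetical example of the enterprise-specific setup an AI log scanner
# needs before it can do anything useful: a description of the log format,
# plus a policy for what counts as a flaggable event.
import re

# Assumed log format: "<timestamp> <severity> <component>: <message>"
LOG_FORMAT = re.compile(
    r"(?P<ts>\S+) (?P<severity>\w+) (?P<component>\w+): (?P<message>.*)")

# Per-enterprise policy (assumption): which severities warrant flagging.
FLAGGED_SEVERITIES = {"ERROR", "CRITICAL"}

def parse_and_flag(line):
    m = LOG_FORMAT.match(line)
    if not m:
        return None  # unknown format: the tool was never set up for it
    event = m.groupdict()
    event["flag"] = event["severity"] in FLAGGED_SEVERITIES
    return event

print(parse_and_flag("2020-02-01T10:00:00 ERROR auth: repeated login failures"))
```

A line in an unrecognised format yields nothing at all - which is exactly why the configuration effort cannot be skipped.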

It is important not to underestimate these setup costs, along with the resource requirements to monitor the analytics AI provides. Incident management processes still need to be manually detailed so that, once an event has been detected, it can be investigated to make sure it won't impact the organisation.

Once AI is up and running it is a transformative tool for the organisation, but training it to interpret what action needs to be undertaken as well as rule out false positives is a time-consuming exercise that needs to be factored in to planning and budgets.

AI and ML introduce unprecedented speed and efficiency into the process of maintaining a secure IT estate, making them ideal tools for a predictive IT security stance.

But AI and ML cannot eliminate risk, regardless of how advanced they are, especially when there is an over-reliance on the capabilities of the technology, while its complexities are under-appreciated. Ultimately, risks such as false positives, as well as failure to identify all the threats faced by an organisation, are ever-present within the IT landscape.

Organisations deploying any automated responses therefore need to maintain a balance between specialist human input and technological solutions, while appreciating that AI and ML are evolving technologies. Ongoing training enables the team to stay ahead of the threat curve, a critical consideration given that attackers also use AI and ML tools and techniques; defenders need to continually adapt in order to mitigate them.

Successful AI and ML will mean different things to different organisations. Metrics may revolve around the time saved by analysts, how many incidents are identified, the number of false positives removed, and so on. These should be weighed up against the resource required to configure, manage and review the performance of the tools. As with almost any IT security project, the overall value needs to be viewed through the eyes of the business and judged by its role in achieving corporate objectives and reducing risk.
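Weighing the metrics above against the configuration and review overhead can be reduced to simple arithmetic. All figures in this back-of-the-envelope sketch are hypothetical.

```python
# Back-of-the-envelope value calculation (all figures hypothetical):
# analyst time saved by the tooling, minus time wasted on false
# positives, minus the hours spent configuring and reviewing it.
def net_hours_saved(incidents_triaged, minutes_saved_per_incident,
                    false_positives, minutes_lost_per_fp,
                    setup_and_review_hours):
    saved = incidents_triaged * minutes_saved_per_incident / 60
    lost = false_positives * minutes_lost_per_fp / 60
    return saved - lost - setup_and_review_hours

# E.g. 1,200 incidents auto-triaged at 15 min each, 200 false positives
# costing 10 min each, and 80 hours of setup and review effort.
print(net_hours_saved(1200, 15, 200, 10, 80))
```

A negative result would mean the tooling currently costs more analyst time than it saves - the "overhead" the article warns about.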

View original post here:

Security Think Tank: Ignore AI overheads at your peril - ComputerWeekly.com