Something Wild Happens When You Try to Take a Video of a Car’s Sensors

A video shows how the lidar sensors mounted on self-driving cars can wreak havoc on your smartphone camera.

Public service announcement: don't point your phone camera directly at a lidar sensor.

A video recently shared on Reddit demonstrates why. As the camera zooms in on the sensor affixed to the top of a Volvo EX90, a whole galaxy of colorful dots is burned into the image, forming over the exact spot where the flashing light inside the lidar device can be seen.

What you're witnessing isn't lens flare or a digital glitch — it's real, physical damage to the camera. And it's permanent.

"Lidar lasers burn your camera," the Reddit user warned. 

Never film the new Ex90 because you will break your cell camera.Lidar lasers burn your camera.
byu/Jeguetelli inVolvo

Lidar is short for light detection and ranging, and it's become the go-to way for automakers to enable their self-driving cars to "see" their surroundings (unless you're Tesla, that is). The sensors work, essentially, by shooting a constant stream of infrared laser beams to measure the distance to nearby objects, which a computer uses to form a 3D reconstruction of everything in the vicinity of the vehicle.
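To make that distance math concrete, here's a minimal, illustrative Python sketch of the time-of-flight calculation described above. It shows the general principle only; the function name and the sample timing value are hypothetical and not drawn from any automaker's actual lidar software.

```python
# Illustrative time-of-flight math only; not any vendor's actual lidar code.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Estimate the distance to a target from a laser pulse's round-trip time.

    The pulse travels out to the object and back, so the one-way distance
    is half the total path length.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A pulse returning after 200 nanoseconds implies an object roughly 30 meters away.
print(distance_from_round_trip(200e-9))  # ~29.98 m
```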

We can't see the laser beams, since their wavelength falls outside the range of human vision. Cameras, on the other hand, are all too sensitive to the powerful beams. Their delicate little sensors can be damaged if they're brought too close to a lidar source, or if a long lens is used to look at one. As The Drive notes in its coverage, this is why backup cameras are usually unaffected: they use ultra-wide-angle lenses. In the video, you'll also notice that the burn-in damage disappears when the camera zooms out; that's the phone switching from its damaged long lens to an undamaged short one.

To its credit, Volvo explicitly warns about lidar damage on its support page and in its owner's manual, but that hasn't stopped a few surprised owners from learning about it the hard way.

And honestly, we can't really blame them. The phenomenon has even caught a self-driving car engineer off guard: they discovered that their $2,000 Sony camera's sensor had been permanently fried after they attended a CES show where lidar-equipped cars were being exhibited.

To be clear, this is a risk with potentially any car's lidar tech, not just Volvo's. After The Drive reached out, the Swedish automaker doubled down on its warning.

"It's generally advised to avoid pointing a camera directly at a lidar sensor," a Volvo representative told The Drive. "The laser light emitted by the lidar can potentially damage the camera's sensor or affect its performance."

"Using filters or protective covers on the camera lens can help reduce the impact of lidar exposure," the Volvo rep added. "Some cameras are designed with built-in protections against high-intensity light sources."

If reading all this has you worried about your eyeballs, fret not: according to experts, the lidar beams used in cars are harmless. Volvo's lidar system, for example, uses 1550-nanometer lasers, and at that wavelength, the light can't even reach the retina.

We still wouldn't recommend staring at them, though.

More on phones: Trump Believes Entire iPhones Can Be Manufactured in America


Family Uses AI To Revive Dead Brother For Impact Statement in Killer’s Trial

In Arizona, the family of a man killed during a road rage incident has used artificial intelligence to revive their dead loved one in court.

In Arizona, the family of a man killed during a road rage incident has used artificial intelligence to revive their dead loved one in court — and the video is just as unsettling as you think.

As Phoenix's ABC 15 reports, a video featuring an uncanny simulacrum of the late Christopher Pelkey, who died from a gunshot wound in 2021, was played in a courtroom at the end of his now-convicted killer's trial.

"In another life, we probably could have been friends," the AI version of Pelkey, who was 37 when he died, told his shooter, Gabriel Paul Horcasitas. "I believe in forgiveness."

Despite that moving missive, it doesn't seem that much forgiveness was in the cards for Horcasitas.

After viewing the video — which was created by the deceased man's sister, Stacey Wales, using an "aged-up" photo Pelkey made when he was still alive — the judge presiding over the case ended up giving Horcasitas a 10-and-a-half-year manslaughter sentence, a year more than state prosecutors had asked for.

In the caption on her video, Wales explained that she, her husband Tim, and their friend Scott Yenzer made the "digital AI likeness" of her brother using a script she'd written alongside images and audio files they had of him speaking in a "prerecorded interview" taken months before he died.

"These digital assets and script were fed into multiple AI tools to help create a digital version of Chris," Wales wrote, "polished by hours of painstaking editing and manual refinement."

In her interview with ABC 15, Pelkey's sister insisted that everyone who knew her late brother "agreed this capture was a true representation of the spirit and soul of how Chris would have thought about his own sentencing as a murder victim."

She added that creating the digital clone helped her and her family heal from his loss and left her with a sense of peace, though others felt differently.

"Can’t put into words how disturbing I find this," writer Eoin Higgins tweeted of the Pelkey clone. "The idea of hearing from my brother through this tech is grotesque. Using it in a courtroom even worse."

Referencing both the Pelkey video and news that NBC is planning to use late sports narrator Jim Fagan's voice to do new promos this coming NBA season, a Bluesky user insisted that "no one better do this to me once I'm dead."

"This AI necromancy bullshit is so creepy and wrong," that user put it — and we must say, it's hard to argue with that.

More on AI revivals: NBC Using AI to Bring Beloved NBA Narrator Jim Fagan Back From the Grave


Deranged Video Shows AI Job Recruiter Absolutely Losing It During an Interview

Looking for work is already arduous enough — but for one job-seeker, the process was made worse by an insane AI recruiter.

Looking for work is already arduous enough — but for one job-seeker, the process became something out of a deleted "Black Mirror" scene when the AI recruiter she was paired with went veritably insane.

In a buckwild TikTok video, the job-seeker is seen suffering for nearly 30 seconds as the AI recruiter barks the term "vertical bar pilates" at her no fewer than 14 times, often slurring its words or mixing up letters along the way.

"It was genuinely so creepy and weird," the TikToker, who posts as @its_ken04, wrote in the video's caption. "Please stop trying to be lazy and have AI try to do YOUR JOB!!! It gave me the creeps so bad."

The incident — and the way it affected the young woman who endured it — is a startling example not only of where America's abysmal labor market is at, but also of how ill-conceived this sort of AI "outsourcing" has become.

Though she looks unfazed on her interview screen, the TikToker who goes by Ken told 404 Media that she was pretty perturbed by the incident, which occurred during her first (and only) interview with a Stretch Lab fitness studio in Ohio.

"I thought it was really creepy and I was freaked out," the college-aged creator told the website. "I was very shocked, I didn’t do anything to make it glitch so this was very surprising."

As 404 discovered, the glitchy recruiter-bot was hosted by a Y Combinator-backed startup called Apriora, which claims to help companies "hire 87 percent faster" and "interview 93 percent cheaper" because multiple candidates can be interviewed simultaneously.

In a 2024 interview with Forbes, Apriora cofounder Aaron Wang attested that job-seekers "prefer interviewing with AI in many cases, since knowing the interviewer is AI helps to reduce interviewing anxiety, allowing job seekers to perform at their best."

That's definitely not the case for Ken, who said she would "never go through this process again."

"If another company wants me to talk to AI," she told 404, "I will just decline."

Commenters on her now-viral TikTok seem to agree as well.

"This is the rudest thing a company could ever do," one user wrote. "We need to start withdrawing applications folks."

Still others pointed out the elephant in the room: recruiting used to be a skilled trade done by human workers.

"Lazy, greedy and arrogant," another person commented. "AI interviews just show me they don't care about workers from the get go. This used to be an actual human's job."

Though Apriora didn't respond to 404's requests for comment, Ken, at least, has gotten the last word in the way only a Gen Z-er could.

"This was the first meeting [with the company] ever," she told 404. "I guess I was supposed to earn my right to speak to a human."

More on AI and labor: High Schools Training Students for Manual Labor as AI Looms Over College and Jobs


Called Out for AI Slop, Andrew Cuomo Blames One-Armed Man

Looking for a patsy, ex-New York governor and current NYC mayoral candidate Andrew Cuomo found one in his one-armed aide.

Looking for a patsy, ex-New York governor and current NYC mayoral candidate Andrew Cuomo found one: a one-armed man.

As the New York Times reports, the aide in question, longtime Cuomo adviser Paul Francis, admitted to using ChatGPT to craft the disgraced former governor's new, typo-riddled housing policy plan.

"It’s very hard to type with one hand," Francis, who had his left arm amputated in 2012 following a sudden illness, told the NYT. "So I dictate, and what happens when you dictate is that sometimes things get garbled. And try as I might to see them when I proofread, sometimes they get by me."

The "things" that "got by" the career wonk include, but are not limited to, a headline with the term "objectively" misspelled as "Bbjectively," the mask-off claim that rent control is "symbolic," and a link to a 2024 Gothamist article that cited ChatGPT as the sourced used to pull it up.

Though the 29-page policy document has since been updated to remove its more glaring mistakes, an archived version of the original thing shows the embarrassing errors in all their glory.

The excuse also doesn't quite add up. There's nothing wrong with using dictation software, but why would that cause blatant spelling errors or a reference to ChatGPT? And why wasn't someone else reviewing the document before pushing it out? All told, it sounds a lot like a fictional explanation crafted by a political campaign to deter critics by throwing a man with a disability under the bus.

Prior to Francis' admission in the NYT, the Cuomo campaign went back and forth with local news website Hell Gate as to whether ChatGPT was used to write the document.

An initial non-denial from spokesperson Rich Azzopardi thanked the outlet for "pointing out the grammar" errors; the campaign then followed up with a longer explanation about its use of voice recognition software, and finally claimed that the person who wrote the policy paper insisted they hadn't used AI to write it.

In a statement to Hell Gate, housing advocate Cea Weaver used the ChatGPT-generated errors to clown on the scandalous ex-governor.

"[Cuomo's] campaign is so out of touch that he is outsourcing housing policy to a robot," Weaver, the director of the New York State Tenant Bloc, told the site. "But New Yorkers don't need ChatGPT to tell us that we need a rent freeze — it's 'bbjective.'"

Obviously, this is far from the first time a politician has been caught using AI — and with the way things are going, it certainly won't be the last.

More on inappropriate AI uses: Judge Goes Ballistic When Man Shows AI-Generated Video in Court


Chinese Police Deploy Rolling BB-8-Style Robot to Patrol Streets, Chase Down Suspects

In China, police are now patrolling the streets with a rolling spherical robot that can chase down suspects and beat them in a fight.

Imperial Police

In Eastern China, police are now patrolling the streets with a rolling robot that can chase down suspects — and, they say, beat them in a fight.

As the South China Morning Post reports, cops in the city of Wenzhou in Zhejiang province have lately been flanked by a spherical robot that looks a bit like a militarized version of the cutesy BB-8 droid from "Star Wars."

Named the "Rotunbot" or "RT-G" for short, this spherical robot was created by researchers at Zhejiang University on behalf of a Shenzen-based outfit called Logon Technology. It reportedly weighs about 275 pounds and travels up to 22 miles-per-hour — and according to Wang You, an associate professor who worked on it, only takes a few seconds to reach that speed.

"This robot can cope with dangers such as falling or being beaten," Wang told SCMP, "and can perform tactical actions such as enemy identification, tracking, and capture after modular modification."

Equipped with net-guns, tear gas, and speakers, the robot is also reportedly pretty good at scaring off any would-be attackers.

"If you win the fight, you’ll end up in jail," the robot was heard saying in a recent fight simulation viewed by the SCMP. "If you lose the fight, you’ll end up in hospital."

Burning Rubber

While "Star Wars" aesthetics are very much present in RT-G's design, its autonomous operations are more akin to the 2010 sleeper horror hit "Rubber," which follows a sentient tire as it wreaks havoc across a desertscape.

Though there don't seem to have been any public demonstrations of the robot operating autonomously yet, a promotional video released by Logon ahead of RT-G's deployment in Wenzhou suggests it can navigate various types of situations by itself.

"Narrow terrain, extreme weather, dangerous work environments, violent conflicts and wars, all pose huge threats to human life and activities," reads a translation of the video on the r/Cyberpunk subreddit. "Thus an amphibious, intelligent robot emerged to replace humans in these style environments."

It's a far cry from the crappy police robots that have been repeatedly deployed and recalled by law enforcement in New York — and honestly, this one is a lot scarier.

More on cop-bots: Eric Adams Has Been Indicted, But His Crappy Subway Robot Will Be "Redeployed"


Former CEO Blames Working From Home for Google’s AI Struggles, Regrets It Immediately

Billionaire ex-Google CEO Eric Schmidt is walking back his questionable claim that remote work is to blame for Google's AI failures.

Eyes Will Roll

Ex-Google CEO Eric Schmidt is walking back his questionable claim that remote work is to blame for Google slipping behind OpenAI in Silicon Valley's ongoing AI race.

On Tuesday, Stanford University published a YouTube video of a recent talk that Schmidt gave at the university's School of Engineering. During that talk, when asked why Google was falling behind other AI firms, Schmidt declared that Google's AI failures stem from its decision to let its staffers enjoy remote work and, with it, a bit of "work-life balance."

"Google decided that work-life balance and going home early and working from home was more important than winning," the ex-Googler told the classroom. "And the reason startups work is because people work like hell."

The comment understandably sparked criticism. After all, work-life balance is important, and Google isn't a startup.

And it didn't take long for Schmidt to eat his words.

"I misspoke about Google and their work hours," Schmidt told The Wall Street Journal in an emailed statement. "I regret my error."

"In a Stanford talk posted today, Eric Schmidt says the reason why Google is losing to @OpenAI and other startups is because Google only has people coming in 1 day per week," X user Alex Kehr (@alexkehr) wrote on August 13, 2024.

Ctrl Alt Delete

In the year 2024, Google is one of the most influential tech giants on the planet, and a federal judge in Washington DC ruled just last week that Google has monopoly power over the online search market. Its pockets are insanely deep, meaning that it can compete in the industry talent war and devote a ridiculous amount of resources to its AI efforts.

What it didn't do, though, was publicly release a chatbot before OpenAI did. OpenAI, which arguably isn't exactly a startup anymore either, was the first to wrench open that Pandora's box — and Google has been playing catch-up ever since.

So in other words, not sleeping on the floors of Google's lavish facilities isn't exactly the problem here.

In a Wednesday statement on X-formerly-Twitter, the Alphabet Workers Union declared in response to Schmidt's comments that "flexible work arrangements don't slow down our work."

"Understaffing, shifting priorities, constant layoffs, stagnant wages and lack of follow-through from management on projects," the statement continued, "these factors slow Google workers down every day."

Later on Wednesday, as reported by The Verge, Stanford removed the video of Schmidt's talk from YouTube upon the billionaire's request.

More on Google AI: Google's Demo of Its Latest AI Tech Was an Absolute Train Wreck
