Taylor Swift May Have Known FTX Was Trouble. Why Elon Musk Is Not Surprised. – Barron’s

Taylor Swift may have been the only celebrity with doubts about the legality of FTX before the cryptocurrency exchange collapsed last year, according to a lawyer leading a class-action lawsuit against the group's high-profile ambassadors.

Adam Moskowitz, a key attorney in the class-action lawsuit against the celebrity ambassadors who promoted FTX, said legal discovery proceedings revealed that Swift did due diligence on FTX. Moskowitz made the comments on The Block's The Scoop podcast released Wednesday.

FTX failed last year amid allegations of fraud, wiping away billions of dollars of customer and investor money and shaking crypto markets. Sam Bankman-Fried, the exchange's founder and former CEO, faces a number of financial crime charges to which he has pleaded not guilty.

Moskowitz and representatives for Swift and Bankman-Fried didn't immediately respond to requests for comment from Barron's.

Among those named in the lawsuit, which alleges that celebrities endorsed a potential fraud, and potentially promoted unregistered securities, are football star Tom Brady, basketball icon Shaquille O'Neal, and businessman Kevin O'Leary. Representatives for the three didn't immediately respond to requests for comment early on Wednesday.

"None of these Defendants performed any due diligence prior to marketing these FTX products to the public," the lawsuit alleges.

Swift, who the Financial Times reported was courted by FTX to be among the group's celebrity ambassadors, may have been the outlier.

"The one person I found that did that was Taylor Swift," Moskowitz said on The Scoop podcast, in response to a question from host Frank Chaparro on why some celebrities didn't seem to talk to lawyers before signing contracts to make sure they would not be peddling unregistered securities.

"In our discovery, Taylor Swift actually asked that: 'Can you tell me that these are not unregistered securities?'" Moskowitz said.

It makes the music artist a rarity in choosing not to be linked with FTX, which attracted not only high-profile brand ambassadors but also a body of venture-capital and pension-fund investors. One, venture-capital powerhouse Sequoia Capital, wrote down its investment in FTX to zero before the exchange went bankrupt, noting that it had run a rigorous diligence process but was in the business of taking risk.

One of the world's richest people, for his part, isn't shocked that Swift appears to have been skeptical about what the Financial Times reported was a sponsorship deal worth more than $100 million.

"I'm not surprised," Tesla CEO Elon Musk wrote on Twitter.

Some of Swift's biggest tracks are "Don't Blame Me," "I Knew You Were Trouble," and "Bad Blood."

Write to Jack Denton at jack.denton@barrons.com

See the original post:

Taylor Swift May Have Known FTX Was Trouble. Why Elon Musk Is Not Surprised. - Barron's

A single order from Elon Musk’s Tesla has boosted a family’s fortune to over $800 million – Yahoo News

A single order from Tesla has boosted a family's fortune to over $800 million, according to Bloomberg.

Cathode company L&F won a $2.9 billion order from Tesla this year, sending its stock soaring.

That's generated a large amount of wealth for the Jae-hong family, which owns stock in the battery-material firm.

A single order from Elon Musk's Tesla has boosted a family's fortune to hundreds of millions of dollars.

Shares in L&F, a South Korea-based cathode company, have skyrocketed 82% this year after it secured a $2.9 billion order from the US carmaker. That means the Jae-hong family, which owns stock in the battery-material firm, is now worth over $800 million, according to Bloomberg.

Tesla has been a long-time customer of L&F, purchasing the company's cathodes for years through batteries provided by LG Energy Solution, but this is the first time Musk's automaker has become a direct client, per the outlet.

Following the Tesla deal, L&F expects its dependence on LG Energy Solution to fall to 50% of revenue by 2025.

"The fact that its latest client is not any other but the one that's leading the market carries even bigger significance," a Meritz Securities analyst told Bloomberg.

Tesla's dominance of the electric-vehicle industry has seen the company trigger a price war to boost demand for its vehicles - and analysts say it's working. The carmaker recently reported record first-quarter deliveries, up 36% from a year earlier.

The EV maker's stock has bounced about 73% to $187 a share so far this year, making it one of the best-performing companies on the tech-heavy Nasdaq Composite index.

Meanwhile, shares of companies that supply electric-vehicle components or materials have soared in recent years, and subsequently inflated the wealth of their owners. For example, Ryu Kwang-ji, the chairman of chemical company Kumyang Co., saw his stake in the firm balloon to $1.4 billion after the share price surged more than 1,600% in the past year, per Bloomberg.

Read the original article on Business Insider

See the original post:

A single order from Elon Musk's Tesla has boosted a family's fortune to over $800 million - Yahoo News

Elon Musk treating journalistic independence like ‘a game,’ CBC … – The Globe and Mail

Elon Musk has repeatedly vowed to increase what he calls the "fun levels" of Twitter, a platform that is rapidly losing active users, according to industry estimates.

CHANDAN KHANNA/AFP/Getty Images

The CBC says Twitter chief executive Elon Musk is treating journalistic independence as "a game," while newsrooms around the world reconsider their use of the social-media platform amid its volatile moves and declining audience.

Mr. Musk has repeatedly vowed to increase what he calls the "fun levels" of Twitter, a platform that is rapidly losing active users, according to industry estimates, with competitors TikTok, YouTube, Reddit, Facebook and Instagram continuing to hold steadier user activity.

"While some people use CBC's Twitter feeds as a source of headlines and news alerts, Twitter is among the smallest sources of traffic for news content out of the social-media platforms we use," CBC spokesperson Leon Mar told The Globe and Mail on Wednesday.

The Canadian public broadcaster paused its use of the social-media platform for the foreseeable future after one of its accounts was labelled as "69% Government-funded Media" this week. Industry observers say the CBC's new label on Twitter is part of a spate of trolling attempts by Mr. Musk, as he reshapes what used to be the world's foremost communications tool.

"With regard to the latest from Elon Musk, this is not a serious response. Journalistic independence is not a game," Mr. Mar said, providing a website link to a late Monday night interaction between Mr. Musk and an online blogger.

Shortly before that exchange, Twitter had labelled CBC as "70% Government-funded Media" for a few hours after the online blogger @TitterDaily noted inaccurately that CBC was arguing against the label because it funds the other 30% on its own. Later that same day, Mr. Musk changed it to "69% Government-funded Media" after pseudonymous Twitter user @itsALLrisky, who runs a newsletter about dogecoin, a cryptocurrency that Mr. Musk has heavily invested in, suggested it as a joke.

Twitter first assigned the @CBC account its label as "Government-funded Media" on April 16. In the United States, similar labels for National Public Radio and the Public Broadcasting Service have led the news organizations to stop using the platform, a decision that CBC mirrored.

"ABC (Australia), KBS (South Korea) and RNZ (New Zealand) have also been designated as government-funded media like the CBC," Mr. Mar noted. However, unlike most media organizations' new labels on Twitter, the @CBC account is the only one with a percentage of government funding attached to it.

Parliamentary appropriations in Canada accounted for 66 per cent of CBC and Radio-Canada's sources of funds in 2022. "But that's not the real point," Mr. Mar said. "The real issue is that Twitter's definition of Government-funded Media means open to editorial interference by government. The government has no, zero, involvement in our editorial content or journalism."

"It's all very arbitrary," said Philip Mai, co-director of the Social Media Lab at Toronto Metropolitan University (formerly Ryerson), where he studies how online platforms affect society. "These labels are being given and verification checkmarks are being stripped on Twitter on an ad hoc basis."

"You can't really reason with a troll like Elon Musk," said Taylor Owen, founding director and professor at McGill University's Centre for Media, Technology and Democracy in Montreal. "We have to start thinking very seriously whether news organizations even need to use Twitter any more with all of these antics."

Twitter first began to attach labels to media outlets and government officials in 2020. It was a policy designed predominantly for countries such as China and Russia, where the state exercises control over editorial content through financial resources, direct or indirect political pressure, and control over production or distribution.

Democratic countries such as Canada and the U.S. were mostly spared from the labelling policy. But under Mr. Musk's ownership this has shifted.

Mr. Musk finalized his acquisition of Twitter in October, 2022, after abandoning a months-long legal battle to back out of his initial offer. Since then, he has strived to rejuvenate the unprofitable company's mercurial business by establishing subscription options and promising to eliminate verification checkmarks for accounts that do not pay for them. Simultaneously, however, he has slashed Twitter's staff, nearly wiping out its entire content moderation team and dissolving its independent Trust and Safety council as a whole.

"We're looking at someone who is doing everything on a whim," Prof. Owen said.

"He can do whatever he wants because it's this billionaire's new toy, and the company is private now with zero accountability," Mr. Mai said.

Trolling has long been a large part of Mr. Musk's internet persona. He often mixes quips about April Fools' Day, cannabis culture and the number 69 (referring to the oral sex position) with official business matters to elicit a response from his followers. The tycoon brought this aspect of his online identity under his ownership of Twitter, which has seen its revenue plunge over the years, with advertising income further dropping amid Mr. Musk's tumultuous takeover.

This week, he interacted with a number of memes and conspiracy theories about the CBC and other Canadian media outlets, allowing the platform's algorithm to push certain inaccurate tweets higher onto its users' homepages.

"The biggest problem with Twitter's ad business is that advertisers don't trust Musk," said Jasmine Enberg, principal analyst at market research firm Insider Intelligence, which forecasts a 28-per-cent decline in advertising income for the company and has slashed estimates of overall revenue from US$4.74-billion to US$2.98-billion this year. "The takeover saga caused a spike in time spent in 2022 that has now dissipated," Ms. Enberg said.

Mr. Mar declined to say whether the CBC is still paying to advertise on Twitter. He also did not say how long the broadcaster's pause on the platform will last.

Twitter responded to The Globe's repeated requests for comment with a poop emoji. It is an auto-response the company initiated for media inquiries last month, which Mr. Musk tweeted about on March 19, then deleted thereafter.

Link:

Elon Musk treating journalistic independence like 'a game,' CBC ... - The Globe and Mail

A Grand Unified Theory of Why Elon Musk Is So Unfunny – Rolling Stone

BRITTA PEDERSEN/POOL/AFP via Getty Images

Elon Musk has had a busy few days. This weekend, he ordered the "w" in the sign on Twitter's San Francisco headquarters painted over, so that it read "Titter." Then, on Monday, he changed his Twitter display name to "Harry Bōlz" before tweeting, "Impersonating others is wrong!" He later added: "I'm just hoping a media org that takes itself way too seriously writes a story about Harry Bōlz." Then, on Tuesday, he announced that the site's unpaid legacy verification checks, formerly scheduled for removal on April Fools' Day, would now disappear on April 20, or 4/20, the stoner holiday to which Musk has winkingly referred on many occasions. That evening, he gave an interview to a BBC reporter and, in a couple of tweets afterward, pretended to confuse the news organization with the shorthand for the porn category "big black cock." Sharing part of the conversation, he commented, "Penetrating deep & hard with BBC."

For anyone who's remained a regular Twitter user since Musk's takeover of the platform last year, none of this is remotely surprising: he logs on every day and, aided by an algorithm that forces his posts onto everyone's feed, punishes us with a routine of garbled gags, corny jokes, and pilfered memes. (Full disclosure: Musk once re-posted a meme I made, and it still makes me feel unclean.) Yet Musk wasn't always so eager to have the public think him a funnyman, as when he carried a sink into Twitter HQ in October to mark his acquisition of the company, tweeting "let that sink in."

Years ago, in fact, Musk was content to appreciate comedy as a mere spectator, and in fairness, he was not without taste. In 2016, he tweeted admiringly of the absurdist humor in Samuel Beckett's timeless play Waiting for Godot, and recommended the hilariously awkward reality show Nathan for You. He also trumpeted a Tesla feature that allows you to play scenes from Monty Python's Flying Circus. He didn't, as he does now, obsess over his philosophy of humor per se. True, he might reply "Haha awesome 🙂" when tagged in a flattering meme about himself, and he never quite grasped the vernacular of comedy (he once called a Liam Neeson cameo in the sitcom Life's Too Short a "sketch"), but his engagement was measured, light, unassuming. He had nothing to prove.

So how did we get from a Musk who enjoyed a modest chuckle now and then to a Musk who hosts Saturday Night Live and fancies himself Memelord of the Universe? Where did his current sense of humor come from? He's never quite addressed this in the media, nor did he respond to an email asking him to name his comedic influences, but we can still create something of a forensic picture.

To begin with, Musk's cultural background seems to have disposed him to British humor (which he has called "the best"), a style that can toggle between dry wit and edgy or offensive incitement ("Great show!" he said of U.K. comic Ricky Gervais' 2022 Netflix special SuperNature, which contained jokes mocking trans people). Musk has a long history of crossing lines himself, dating back to childhood: father Errol Musk recalled how he would insult adults by calling them "stupid" if he disagreed with them, and once got pushed down the stairs by a classmate after he made a mocking comment about the suicide of the boy's father.

But beyond provocation, Musk clearly adores anything that can be placed in the category of nerd comedy: if a meme is in some way esoteric, requiring specialized knowledge to understand, he seems to regard it as a proof of intelligence. Because only a smart person would find it funny, right? This helps to explain his stated love of Reddit, where dorkiness and its attendant puns, references and values form a distinct social in-group. As a humor community, it's the broad equivalent of where the STEM kids sit in the school cafeteria. Musk seems to have been drawn to it by the Tesla and SpaceX fanboys active there, initially promoting a SpaceX engineer's Q&A session on the Ask Me Anything subreddit before giving his own interview about theoretical missions to Mars on r/space. In 2016, he took delight in watching redditors savage a Fortune editor who had written about issues stemming from a Tesla Autopilot crash.

So: Musk likes jokes that 1) take his side, 2) foster a sense of geek community and pride, and 3) are occasionally spiky, hostile or somehow violate a social taboo; this latter principle gives him his trollish quality. In the course of his online life, Musk also appears to have missed much non-Reddit internet comedy in the past decade, from the Dadaist gems of so-called Weird Twitter to the horny, artsy political anarchism of Tumblr. Thus his posting, in 2019, of a meta-meme about the evolutionary biologist Richard Dawkins, whose text font dates it to an earlier generation of Reddit-favored image macros long out of fashion. This isn't a meme you would've randomly stumbled across on social media in 2019; it's something you might get if you Googled "meme about memes."

To truly understand Musk's comedic sensibility, however, we have to ask ourselves how and why he started flaunting it in the first place. Remember, he showed no particular interest in this stuff in the early years of his Twitter account; he never tweeted "lol" or weighed in on memes until 2018, when he suddenly couldn't quit yukking it up about them, encouraging followers to send him their dankest. ("Dank" as a descriptor for memes was itself a bit passé by then, to say nothing of a 47-year-old man typing the word "your" as "ur.") What could possibly account for this sudden shift, the attempt at youthful cool?

The most obvious answer is one that Musk has given himself: he wanted attention. In 2021, during testimony in a shareholder lawsuit over Tesla's 2016 acquisition of solar panel company SolarCity, he explained that his humor creates favorable publicity for the automaker: "If we are entertaining people, they would write stories about us and we don't have to spend on advertising which would reduce the price of our cars," Musk said. "I do have a sense of humor," he also noted. "I think I'm funny."

Yet the timing of his pivot to would-be Twitter comic also seems significant. While Musk was already a public figure by 2018, this was the year he cemented his place in pop culture: He started dating Grimes, and attended the Met Gala with her. He appeared on Joe Rogan's podcast, where he accepted a puff on a cigar of tobacco and cannabis. He launched his own Tesla Roadster into space.

Behind this glitz, however, his life was going sideways. He told the New York Times that August that the past year had been "excruciating," as well as "the most difficult and painful year of my career." Tesla's Model 3 had been stuck in "production hell," and Musk said he was working unreasonably long hours, camping out at the factory and nearly missing his brother's wedding. He also claimed the work was taking a toll on his health. With the compounding pressures, he became more erratic on Twitter, with some board members reportedly concerned that he was taking the powerful, fast-acting insomnia drug Ambien but, instead of going to sleep, binging on the social app.

Two infamous Musk tweets define this phase. That June, he offered a submersible craft to rescuers trying to extract a youth soccer team trapped in a flooded cave in Thailand. When a British cave diver involved in the effort dismissed it on Twitter as a PR stunt, Musk replied angrily, calling him "pedo guy" in a tweet that sparked a defamation suit. (Musk eventually won the case, with his lawyers arguing the comment was a generic joke he had quickly retracted.) Then, in August, Musk tweeted that he was considering taking Tesla private at $420 a share. Though many interpreted that figure as an allusion to weed, the tweet caused Tesla's stock to jump. Only weeks later, Musk changed his mind about the company going private, but he had to settle fraud charges with the Securities and Exchange Commission, as the original tweet was misleading and led to significant market disruption. The SEC deal also stipulated that he would step down as Tesla's chairman, with him and the company each paying a $20 million penalty.

Musk this year won a subsequent shareholder lawsuit over the matter, in this case testifying that the $420 price was not a joke. But he has prolifically posted 420 comments and cited the number 69 (also a sexual position) ever since. Tesla lowered the price of the Model S to $69,420 in 2020, and Musk is particularly fond of reminding everyone that his birthday, June 28, falls 69 days after 4/20.

Taken altogether, Musk's recklessness through the summer and fall of 2018 has the air of a midlife crisis: two years after his third divorce, he was dating a celebrity 16 years his junior while pushing himself to physical exhaustion as his company lost hundreds of millions of dollars. He apparently found refuge in memes while indulging a newfound impulse to shitpost, whether that meant firing off brazen insults, slapping unfunny captions on content he'd seen elsewhere, or racking up engagement by mentioning the weed and sex numbers. This telegraphed a growing need to be a man of the people, a desperation to be liked.

His failure to develop a more amusing perspective from there can be chalked up to the sycophants who praise his every word, plus an unshakeable nostalgia for an era when he was widely characterized as a visionary and criticized far less. Consider his affection for Doge, a meme he temporarily added to the Twitter interface this month though its heyday was 2013, the first year Fortune named him Businessperson of the Year. Or his recent botching of the innuendo "That's what she said," popularized by the sitcom The Office (2005-2013).

Meanwhile, Musk has continued to develop an explicitly ideological concept of humor that ensures only his allies will ever laugh with him. In 2018, he declared that socialists are usually depressing and have no sense of humor. (He then proclaimed himself a socialist.) By 2022, when he got in a fight with the satirical website Hard Drive over not crediting them for a headline he posted, he was basically arguing that leftists can't have comedy at all. "The reason you're not that funny is because you're woke," he tweeted. "Humor relies on an intuitive & often awkward truth being recognized by the audience, but wokism is a lie, which is why nobody laughs." Instead, Musk has preferred the satirical news from the right-leaning Babylon Bee, which he reinstated on Twitter weeks after his takeover; it had been banned in early 2022 for sharing a transphobic article.

This means that on top of all the other reasons Musk struggles to craft a solid or relatable joke, he is now bound by the conceit that comedy must usually target his enemies. Because he is ridiculed by online leftists, chided by Democratic leadership, and unfavorably depicted in the liberal press, he has fallen in with culture warriors whose humor is built around trolling these factions. And how do they accomplish that? By daring the other side to censor or cancel them for repeating the same tired shit about Hunter Biden's laptop or pronouns or soy lattes. There's no organic or dynamic potential here; it's just manufactured grievance about how they want to silence free speech. No wonder Musk said, upon his arrival as CEO, that "Comedy is now legal on Twitter."

How far he's fallen from Waiting for Godot: these days, he's replying "lmao" to tweets calling Bill Gates and George Soros the "Vax Street Boys." And that trajectory is sadly irreversible. It didn't have to be this way, but when you're as high-profile and thin-skinned as Musk, it's all too easy to turn what impoverished sense of humor you had into both a defensive posture and a way to needle others. If someone makes a joke at your expense, it inflicts real damage that must be answered for. If you, as one of the most powerful people on the planet, take a swipe at them, well, you can always say you were kidding.

See the rest here:

A Grand Unified Theory of Why Elon Musk Is So Unfunny - Rolling Stone

How Billionaires Like Jeff Bezos, Elon Musk and George Soros Pay Less Income Tax Than You And How You Can Replicate The Strategy. It’s Legal – Yahoo…

Most billionaires don't get to where they are by earning a salary. And that also means they might pay less income tax than Americans who make a living through wages.

According to a report from ProPublica, some billionaires in the U.S. paid little or no income tax relative to the vast amount of wealth they have accumulated over the years.

The report noted that Amazon.com Inc. Founder Jeff Bezos did not pay a penny in federal income taxes in 2007 and 2011. It also pointed out that Tesla Inc. CEO Elon Musk paid no federal income tax in 2018 and investing legend George Soros did the same three years in a row.

To be sure, billionaires do pay taxes; it's just that the amount is rather small compared to how much money they actually make. For instance, ProPublica's report showed that between 2014 and 2018, Bezos paid $972 million in total taxes on $4.22 billion of income. Meanwhile, his wealth grew by $99 billion, meaning the true tax rate was only 0.98% during this period.
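
The "true tax rate" in that report is simply taxes paid divided by the growth in wealth over the same period, rather than by reported income. A back-of-the-envelope restatement of the Bezos figures above (illustrative only, not ProPublica's full methodology) looks like this:

```python
# Illustrative only: ProPublica-style "true tax rate" = taxes paid / wealth growth.
# Figures are the Bezos numbers cited above (2014-2018).
taxes_paid = 972_000_000         # total taxes paid over the period
reported_income = 4_220_000_000  # income reported over the period
wealth_growth = 99_000_000_000   # increase in net worth over the period

conventional_rate = taxes_paid / reported_income  # rate on reported income
true_tax_rate = taxes_paid / wealth_growth        # rate on wealth growth

print(f"Rate on reported income: {conventional_rate:.1%}")  # about 23%
print(f"True tax rate:           {true_tax_rate:.2%}")      # about 0.98%
```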

The reality is, billionaires build their wealth from assets like stocks and real estate. Their net worth goes up when these assets increase in value over time. But the U.S. tax system is not designed to capture the gains from such assets: Capital gains are typically taxed at lower rates than wages and salaries.

But of course, you don't need to be in the three-comma club to invest in these assets.

For many well-known billionaires, the bulk of their wealth is tied to the companies they helped create.

If these companies are publicly traded, retail investors can hop on the bandwagon simply by purchasing shares. For those who want to follow Bezos, check out Amazon (AMZN). If you want to bet on Musk, look into Tesla (TSLA).

Here's the neat part: When stocks go up in value, investors only pay tax on realized gains. In other words, if an investor doesn't sell anything, they don't have to pay capital gains tax, even if their stock holdings have skyrocketed in value, because the gains are not realized.

According to ProPublica, that's why some billionaires choose to borrow against their assets instead of selling them. Doing so gives the ultra-wealthy money to spend while deferring taxes on capital gains indefinitely.

That said, when they do sell their shares, they can still get hit with a substantial tax bill. After Musk sold a ton of Tesla shares in 2021, he tweeted that he would pay over $11 billion in taxes that year.

Another popular option for billionaires is real estate, which comes with plenty of tax advantages as well.

When you earn rental income from an investment property, you can claim deductions. These include expenses such as mortgage interest, property taxes, property insurance and ongoing maintenance and repairs.

There's also depreciation, which refers to the incremental loss of a property's value as a result of wear and tear. Real estate investors can claim depreciation for many years and accumulate significant tax savings over time.
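
As a rough sketch of how that deduction adds up, assuming US straight-line depreciation on a residential rental (the building, not the land, is written off over 27.5 years) and a hypothetical 32% marginal tax rate; the dollar figures below are made-up example numbers:

```python
# Illustrative sketch of straight-line rental depreciation (US residential: 27.5 years).
# The property value, land share, and tax rate below are hypothetical examples.
purchase_price = 500_000
land_value = 100_000                 # land is not depreciable
building_basis = purchase_price - land_value

annual_depreciation = building_basis / 27.5  # about $14,545 deducted per year
marginal_tax_rate = 0.32                     # hypothetical marginal rate

annual_tax_savings = annual_depreciation * marginal_tax_rate
print(f"Annual depreciation deduction: ${annual_depreciation:,.0f}")
print(f"Approximate annual tax savings: ${annual_tax_savings:,.0f}")
```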

The best part? The segment is becoming increasingly accessible to retail investors. There are publicly traded real estate investment trusts (REITs) that own income-producing real estate and pay dividends to shareholders. And if you don't like the stock market's volatility, there are also crowdfunding platforms that allow retail investors to invest directly in rental properties through the private market.

Photo credits: Shutterstock, Flickr

This article "How Billionaires Like Jeff Bezos, Elon Musk and George Soros Pay Less Income Tax Than You And How You Can Replicate The Strategy. It's Legal" originally appeared on Benzinga.com.

2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

Read the original post:

How Billionaires Like Jeff Bezos, Elon Musk and George Soros Pay Less Income Tax Than You And How You Can Replicate The Strategy. It's Legal - Yahoo...

San Francisco DA blasts Elon Musk over reaction to stabbing death – NBC News

SAN FRANCISCO – In the hours after a tech executive was stabbed to death on a street in San Francisco with no clear suspect, billionaire Elon Musk led a charge on Twitter, where fellow tech executives and wealthy investors said they were fed up with violent repeat offenders getting away with crime in the biggest U.S. tech hub.

On Thursday, it became clear that their interpretation of the killing had been wrong.

City officials said at a news conference that the tech executive, Bob Lee, was murdered not randomly but by a man he knew, and San Francisco's chief prosecutor called out Musk by name for having jumped to conclusions.

District Attorney Brooke Jenkins said Musk was "reckless" when he suggested within hours of the killing that repeat violent offenders were involved.

"Reckless and irresponsible statements like those contained in Mr. Musk's tweet that assumed incorrect circumstances about Mr. Lee's death served to mislead the world in its perceptions of San Francisco," Jenkins said.

The statements, she continued, also "negatively impact the pursuit of justice for victims of crime, as it spreads misinformation at a time when the police are trying to solve a very difficult case."

Musk, the country's wealthiest person, according to the Bloomberg Billionaires Index, did not immediately respond to a request for comment sent to Twitter, where he is the CEO and majority owner.

Musk responded to Lee's killing with a tweet April 5 replying to another user who said Lee had been a friend.

"Violent crime in SF is horrific and even if attackers are caught, they are often released immediately," Musk wrote.

He added that the city should take stronger action to incarcerate repeated violent offenders, and he tagged Jenkins' Twitter account.

Lee's death and Musk's tweet added fuel to what has become a particularly contentious topic in San Francisco. Debates about crime, drugs and homelessness and the city's response to them have become flashpoints, with some in the tech and startup community rallying to push for change. That community helped recall the previous district attorney, Chesa Boudin, who was attacked for seeking alternatives to incarceration.

San Francisco has logged 13 homicides this year, matching last year's tally in the same time frame, according to police department data. Robberies and assaults have also stayed relatively consistent over the past year.

San Francisco Police Chief William Scott said in an interview with NBC News on Thursday that every homicide is important, but that Lee was a notable person, which elevated media coverage of the case.

"Some of the things that were said because of this case, I think were a little bit unfair," Scott said. "It's one case. And I believe this would have happened anywhere."

Musk was not alone in rushing to offer an opinion about Lee's killing and its broader significance for San Francisco. Matt Ocko, a venture capitalist, called San Francisco "lawless" and said the "criminal-loving city council" had "literal blood on their hands."

Michelle Tandler, a startup founder who often tweets about crime, said the killing was part of a disturbing crime wave that justified calling in the National Guard. Michael Arrington, the founder of the news site TechCrunch, tweeted that he hated what San Francisco has become.

And investor Jason Calacanis, rejecting a call to wait for the facts, said the city was run by "evil incompetent fools & grifters who accomplish nothing except enabling rampant violence."

Calacanis tweeted Thursday that he stood by his earlier view, independent of any one of the thousands of violent crimes that occur every month. Arrington, Ocko and Tandler did not immediately respond to requests for comment.

Rival interpretations of the killing had also played out in the news media, with the San Francisco Chronicle cautioning that violent crime was relatively low in the city, while The New York Times put the "lawless" quotation in a headline.

Mayor London Breed noted at Thursday's news conference how the case had received wide attention.

"There has been a lot of speculation and a lot of things said about our city and crime in the city," Breed said, praising the patient work of prosecutors and police.

Jenkins said people would have been better off waiting for more facts before they weighed in with broad declarations.

"We all should and must do better about not contributing to the spread of such misinformation without having actual facts to underlie the statements that we make. Victims deserve that, and the residents of San Francisco deserve that," Jenkins said.

Continued here:

San Francisco DA blasts Elon Musk over reaction to stabbing death - NBC News

Biden's gift to Elon Musk and Tesla – Yahoo Finance

Tesla CEO Elon Musk leans Republican, and he's no friend of Joe Biden. But President Biden and his fellow Democrats have done Musk and his company a favor no Republican would likely consider.

Biden's new rules for tailpipe emissions, which the Environmental Protection Agency proposed on April 12, would sharply limit the pollution cars are allowed to emit for model years 2027 through 2032. If ultimately adopted, in whole or in part, the new rules would effectively force automakers to build far more electric vehicles and far fewer gasoline-powered ones.

That could cause upheaval at many automakers trying to shift from gas-powered cars to electrics at a measured pace that doesn't wreck their profitability. For Tesla (TSLA), however, it will be business as usual, except that the competition could end up hobbled by massive new costs, plus the stumbles that often attend large corporate transformations. That makes Tesla the single-biggest beneficiary of the EPA's new effort to slash auto-related emissions.

Ironies abound. Musk and Biden have feuded over labor unions, which Biden considers a key constituency and Musk loathes. When highlighting the rollout of EVs, Biden typically touts new efforts at Ford (F) and General Motors (GM), which are unionized, while ignoring Tesla, which is not. Yet Tesla is the undisputed leader in EV sales in the United States, with 65% of the US EV market and vastly more sales in the category than Ford, GM or any other automaker.

Musk got so irritated by Biden's dismissiveness that in January 2022 he called Biden a "damp sock puppet" on Twitter. Later that year, Musk said he had a "super bad feeling" about the economy, and at a Biden press conference a reporter asked Biden for his response. "Lots of luck on his trip to the moon," Biden quipped, referring to Musk's hopes for space travel on one of his SpaceX rockets. Musk continued to tweak Biden on Twitter, and right before the midterm elections last year, Musk advised his 134 million Twitter followers to vote Republican.

Biden's gift to Tesla? A Model 3 at a showroom in the U.S. REUTERS/Florence Lo

[Drop Rick Newman a note, follow him on Twitter, or sign up for his newsletter.]

Democrats, however, are better for his car company. Musk and Tesla deserve credit for foreseeing the electrified future and persevering through near-death experiences. But they've had some help. The electric-vehicle tax credit that helps subsidize the cost of an EV originated in a 2009 law passed by Democrats and signed by President Obama. That tax break helped goose Tesla's sales during difficult years when it lost money and needed every penny. President Trump wanted to kill that tax credit, but wasn't able to.

Tesla has also benefited from regulatory credits in California, largely governed by Democrats. California gives Tesla credits for producing zero-emission vehicles that it can sell to other companies who use them as a pollution offset. Such sales have netted Tesla hundreds of millions of dollars.

The Biden administration's new pollution rules could force the biggest transformation of the auto industry in its history. The EPA estimates that if the rules go into effect as proposed, EVs as a portion of new-car sales would rise from less than 6% now to around 67% by 2032. That would be a remarkable shift for just a 10-year period.

All of Tesla's assembly lines produce EVs. At other automakers, EVs are a tiny share of production, even with sizeable new commitments to electrics. It costs billions of dollars to build an automotive assembly line, and more to retire old ones no longer in use. Legacy automakers face massive transformation costs. Tesla doesn't.

Since 2019, North American automakers have announced roughly $80 billion worth of new investments in electric vehicles. The EPA argues that a rapid transition to EVs will happen no matter what, given the industry's own large investments in that direction.

The new EPA rules, however, would still impose new costs on top of investments automakers already have planned. The new rules would raise industry-wide costs by somewhere between $180 billion and $280 billion during the seven-year period, according to the EPA. There would be savings, too, such as better fuel economy for drivers and reduced maintenance for EVs, compared with gas-powered models. But manufacturers largely bear the costs up front, then pass on to consumers what they can recoup through higher prices. That's the tricky part for legacy automakers: financing the transition to electrics without racking up losses or too many "sell" recommendations on their stock.

Ford and GM stock has been largely range-bound for years, with the exception of a modest run-up during the Covid rally, when monetary stimulus goosed the whole market. Those flattish stock trends reflect Wall Street worry about massive transformation costs. Tesla, of course, is a high-flier that's still worth six times as much as GM and Ford combined, even with its stock down by more than half from its 2021 peak. Investors think Tesla is poised to dominate an industry driven by EVs, and that dominance could come sooner if the Biden rules stick.

They may not.

Automakers seem sure to challenge the new proposal, saying they can't shift to EVs that fast. So the final rule could be weaker than the proposal. There will probably also be litigation challenging the Biden administration's authority to make such a big change without Congressional legislation. The current Supreme Court, with a 6-3 conservative majority, has been much more skeptical of executive-branch authority than in the past, and there's a chance they could block such dramatic changes. A final risk to the new rules is a possible change in administration in 2024, with a future Republican president likely to roll back the Biden standards.

All of those risks add up to a lot of uncertainty for legacy automakers already unloved by the market. CEOs of those companies have to plan for a future where the pace of transformation could range from challenging to ruinous. Tesla has challenges, too, but the burden of stringent pollution regulation isn't one of them.

Maybe Biden and Musk should be a little friendlier toward each other.

Rick Newman is a senior columnist for Yahoo Finance. Follow him on Twitter at @rickjnewman

See the article here:

Biden's gift to Elon Musk and Tesla - Yahoo Finance

Has Elon Musk pricked Lynas rare earths bubble? – Sydney Morning Herald

Rare earths had been considered irreplaceable for building the powerful magnets needed for these vehicles, and Tesla cited the move as removing a crucial production and cost constraint on its operations.

Elon Musk's Tesla must cut the use of costly rare earths to meet its goals of making cheaper cars and growing its sales. Bloomberg

Lynas has made it clear that the growing demand for e-vehicles has underpinned demand, and prices, for rare earths. It drove the share price from $1.30 at the start of 2020 to a high of $11 last year, valuing the group at more than $10 billion at its peak.

Since then, Lynas has shredded as much as $2 billion of its market valuation, trading as low as $6. Has Musk's declaration shaken the company, or can it keep on trucking?

Industry: Minerals and resources.

Main products: Rare earth ores – 17 elements crucial to the manufacture of many hi-tech products such as mobile phones, electric cars and wind turbines. Neodymium and praseodymium (NdPr) are the two elements that have been in particularly high demand due to electric vehicles.

Key figures: Amanda Lacaze has been chief executive since 2014 and the main driver of its success. Kathleen Conlon was appointed chair in 2020 and has been on the board since 2011.

How it started: Lynas, as we know it, was the brainchild of business veteran Nick Curtis, who came up with the idea to build a processing plant in Malaysia and set the company up as the only processor of rare earths outside of China. Japanese commercial interests, stung by China's blocking of rare earth exports in 2010, helped finance the plant.

Operations commenced in 2012, but have been dogged by local controversy over the low-level radioactive material produced by the cracking and leaching process in Malaysia, which must now be moved offshore by July this year. A new processing operation in Western Australia will pick up the slack.

How it's going: With Lynas setting up the processing plant in Kalgoorlie, it has solved the Malaysia issue. The company's main problem now has been keeping up with demand forecasts, which have been sky-high on the back of increased production of EVs, which need rare earths for the powerful, lightweight magnets they rely on.

Lynas boss Amanda Lacaze. Carla Gottgens

The company has also been in the fortunate position of receiving US government money to fund its plans to set up a processing plant in Texas as governments around the world grow worried about how much they rely on China's stranglehold on the supply of crucial elements.

The bear case: When Elon Musk talks, people listen. So, when Musk and other Tesla executives unveiled plans to wean the car group off rare earths last month, it had a major impact.

"You can't run an automotive industry without rare earths," Lacaze told the Melbourne Mining Club just last year. What if you can?

The company's plans to increase rare earths output by 50 per cent by 2025 were deemed to be inadequate precisely due to the boom in car demand.

So, Musk's edict to eliminate rare earth elements from his cars went to the heart of what has made this market a magnet for investors looking to ride the burgeoning demand for e-vehicles of all kinds.

The bull case: Musk might actually be able to pull this rabbit out of the hat and reduce Tesla's reliance on rare earths, but not everyone is buying what he is selling.

Especially since Tesla's comments say more about the company's aggressive growth targets, and what it needs to do to get there, than they do about the attractiveness of rare earths for the electrification of the auto industry.

According to Adamas Intelligence, there is a reason why automakers have not used cheap and accessible alternatives like iron oxide: getting the same performance comes at the price of significantly higher weight.

In one case it cited, the iron oxide magnets were 30 per cent heavier – "a massive weight penalty," it said.

Tesla might find a cheaper alternative to power its low-price cars of the future, but it won't be easy.

"Rare earth magnets have been the breakthrough technology that lifted electric vehicles into the same league as conventional cars," Fat Tail Investments' James Cooper says.

JP Morgan remains a Lynas fan; it put an Overweight recommendation on the stock this month with an $8.50 price target.

Also this month, UBS analyst Levi Spry upgraded Lynas to a Buy, despite lowering its price target to $8.50 due to the 33 per cent slide in the company's share price since January. Tesla was not a concern.

"We remain positive on long-term fundamentals. To this extent, we do not think Tesla's intentions to thrift rare earths from its supply chain has a significant impact within our forecast horizon," he said.

UBS is forecasting that Tesla will account for around 7 per cent of demand for NdPr by 2030.

"While not insignificant, we still see deficits forming and that inelastic demand (from other OEMs and industries) should keep fundamentals for NdPr strong."

See more here:

Has Elon Musk pricked Lynas rare earths bubble? - Sydney Morning Herald

How smart is ChatGPT really and how do we judge intelligence in AIs? – New Scientist

ARTIFICIAL intelligence has been all over the news in the past few years. Even so, in recent months the drumbeat has reached a crescendo, largely because an AI-powered chatbot called ChatGPT has taken the world by storm with its ability to generate fluent text and confidently answer all manner of questions. All of which has people wondering whether AIs have reached a turning point.

The current system behind ChatGPT is a large language model called GPT-3.5, which consists of an artificial neural network, a series of interlinked processing units that allow for programs that can learn. Nothing unusual there. What surprised many, however, is the extent of the abilities of the latest version, GPT-4. In March, Microsoft researchers, who were given access to the system by OpenAI, which makes it, argued that by showing prowess on tasks beyond those it was trained on, as well as producing convincing language, GPT-4 displays "sparks of artificial general intelligence." That is a long-held goal for AI research, often thought of as the ability to do anything that humans can do. Many experts pushed back, arguing that it is a long way from human-like intelligence.

So just how intelligent are these AIs, and what does their rise mean for us? Few are better placed to answer that than Melanie Mitchell, a professor at the Santa Fe Institute in New Mexico and author of the book Artificial Intelligence: A Guide for Thinking Humans. Mitchell spoke to New Scientist about the wave of attention AI is getting, the challenges in evaluating how smart GPT-4 really is, and why AI is constantly forcing us…

Here is the original post:

How smart is ChatGPT really and how do we judge intelligence in AIs? - New Scientist

ChatGPT is impressive, but it may slow the emergence of AGI – TechTalks

ChatGPT seems to be everywhere. From in-depth reports in highly respected technology publications to gushing reviews in mainstream media, ChatGPT has been hailed as the next big thing in artificial intelligence, and with good reason.

As a developer resource, ChatGPT is simply outstanding, particularly when compared to searching existing resources such as Stack Overflow (which are undoubtedly included in GPT's data model). Ask ChatGPT a software question and you get a summary of available web solutions and some sample code that can be displayed in the language you need. Not happy with the result? Get a refined answer with just a little added info as the system remembers the context of your previous queries. While the just-released GPT-4 offers some significant new features, its usefulness to a developer hasn't changed much in my usage.

As a software asset, ChatGPT's API can be used to give the illusion of intelligence to almost any interactive system. As opposed to typing questions into the web interface, ChatGPT also offers a free API key which enables a program to ask questions and process answers. The API also provides access to features that are not accessible via the web, including options like how long an answer is expected and how creative it should be.
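
As a minimal sketch of what that looks like in practice, the "answer length" and "creativity" options mentioned above correspond to the max_tokens and temperature parameters of the chat completion call. This assumes the openai Python package's pre-1.0 ChatCompletion interface; the prompt, model choice, and parameter values are placeholders:

```python
# Minimal sketch: querying the ChatGPT API from a program rather than the web UI.
# Assumes the openai Python package (pre-1.0 interface) and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    max_tokens=200,   # caps how long the answer can be
    temperature=0.2,  # lower = more deterministic, higher = more "creative"
)

print(response.choices[0].message["content"])
```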

But while ChatGPT has already attracted more than a hundred million users, drawn by its impressive capabilities, it is important to recognize that it only gives the illusion of understanding. In reality, ChatGPT is manipulating symbols and code samples which it has scoured from the web without any understanding of what those symbols and samples mean. If given clear, easy questions, ChatGPT will offer (usually) clear, accurate responses. If asked tricky questions or questions with false or negative premises, the results are far less predictable. ChatGPT can also provide plausible-sounding but incorrect answers and can often be excessively verbose.

So what's wrong with that? To a developer, not much. Simply cut and paste the sample code, compile it, and you'll know in a few seconds whether or not the answer works properly. This is a different situation than asking a health question, for example, where ChatGPT can report data from dubious sources without citing them, and it is time-consuming to double-check the results.

Further, the new GPT-4 system isn't very good at working backwards from a desired solution to the steps needed to achieve it. In a programming context, we are often given an existing data set and a desired outcome and need to define the algorithm to get from one to the other. If such an algorithm already exists in GPT's dataset, it will likely be found and modified to fit the needed capabilities. Great for a majority of instances. If a new algorithm is needed, though, GPT should not be expected to define one.

ChatGPT represents an incredibly powerful tool and a major advance in self-learning AI. It represents a step toward artificial general intelligence (AGI), the hypothetical (though many would argue inevitable) ability of an intelligent agent to understand or learn any intellectual task that a human can. But it makes only a pretense of actual understanding. It simply manipulates words and symbols. In fact, AI systems such as ChatGPT may be slowing the emergence of AGI due to their continued reliance on bigger and more sophisticated datasets and machine learning techniques to predict the next word or phrase in a sequence.

To make the leap from AI to AGI, researchers ultimately must shift their focus to a more biologically plausible system modeled on the human brain, with algorithms that enable it to build abstract things with limitless connections and context, rather than the vast arrays, training sets, and computer power today's AI demands.

For AGI to emerge, it must have the capability to understand that physical objects exist in a physical world and words can be used to represent those objects, as well as various thoughts and concepts. Because concepts such as art and music, and even some physical objects (those, for example, which have tastes, smells, or textures) don't easily lend themselves to being expressed in words, however, AGI must also contain multisensory inputs and an underlying data structure which will support the creation of relationships between multiple types of data.

Further, an internal mental model of the AGI's environment with the AGI at its center is essential. Such a model will enable an artificial entity to have perspective and a point of view with respect to its surroundings that approximates the way in which humans see and interpret the world around them. After all, how could a system have a point of view if it never experienced one?

The AGI must also be able to perceive the passage of time, which will allow it to comprehend how each action it takes now will impact the outcomes it experiences in the future. This goes hand-in-hand with the ability to exhibit imagination. Without the ability to imagine, AGI will be incapable of considering the numerous potential actions it can take, evaluating the impact of each action, and ultimately choosing the option that appears to be most reasonable.

There are certainly other capabilities needed for AGI to emerge, but implementation of just these concepts will allow us to better understand what remains to be done for AGI to be realized. Moreover, none of these concepts are impossible to create. To get there, though, researchers need to abandon the current, widely used model of extending a text-based system like ChatGPT to handle multisensory information, a mental model, cause-and-effect, and the passage of time. Instead, they should start with a data structure and a set of algorithms and then utilize the vision, planning, and decision-making capabilities of an autonomous robot to extend these capabilities to ChatGPT's text abilities.

Fortunately, a model for doing all these things already exists in an organ which weighs about 3.3 pounds and uses about 12 watts of energy: the human brain. While we know a lot about the brain's structure, we don't know what fraction of our DNA defines the brain or even how much DNA defines the structure of its neocortex, the part of the brain we use to think. If we presume that general intelligence is a direct outgrowth of the structure defined by our DNA and that structure could be defined by as little as one percent of that DNA, though, it is clear that the real problem in AGI emergence is not one that requires gigabytes to define, but really one of what to write as the fundamental AGI algorithms.

With that in mind, imagine what could happen if all of today's AI systems were to be built on a common underlying data structure which would enable them and their algorithms to begin interacting with each other. Gradually, a broader context that can understand and learn would emerge. As these systems become more advanced, they would slowly begin to work together to create a more general intelligence that approaches the threshold for human-level intelligence, then equals it, then surpasses it. Perhaps only then will we humans begin to acknowledge that AGI has emerged. To get there, we simply need to change our approach.

Portions of this article are drawn from Microsoft Research's just-published paper, "Sparks of Artificial General Intelligence: Early experiments with GPT-4" by Sébastien Bubeck et al., https://arxiv.org/pdf/2303.12712.pdf

Excerpt from:

ChatGPT is impressive, but it may slow the emergence of AGI - TechTalks

New AGI hardware in progress for artificial general intelligence – Information Age

The partnership between SingularityNET and Simuli.ai aims to speed up artificial general intelligence advancement, and will focus on the creation of a Metagraph Pattern Matching Chip (MPMC).

This new chip will host the two knowledge graph search algorithms, Breadth-First Search (BFS) and Depth-First Search (DFS).

Combining these building blocks for AI systems into one chip can enable more intuitive knowledge representation, reasoning and decision-making.
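
For readers unfamiliar with the two algorithms the chip is meant to accelerate, a minimal software sketch of BFS and DFS over a toy knowledge graph is shown below. The graph and node names are invented purely for illustration, and this says nothing about how the MPMC itself implements the traversals in hardware:

```python
# Illustrative BFS and DFS over a toy knowledge graph (adjacency-list form).
# The graph below is made up purely to show how the two traversal orders differ.
from collections import deque

graph = {
    "animal": ["mammal", "bird"],
    "mammal": ["dog", "cat"],
    "bird": ["penguin"],
    "dog": [], "cat": [], "penguin": [],
}

def bfs(start):
    """Visit nodes level by level (queue-based)."""
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

def dfs(start, visited=None):
    """Visit nodes by following each branch to its end (recursive)."""
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for neighbor in graph[start]:
        if neighbor not in visited:
            order.extend(dfs(neighbor, visited))
    return order

print(bfs("animal"))  # ['animal', 'mammal', 'bird', 'dog', 'cat', 'penguin']
print(dfs("animal"))  # ['animal', 'mammal', 'dog', 'cat', 'bird', 'penguin']
```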

Once created, the MPMC will be integrated with Simuli's pre-existing Hypervector Chip, which is used for processing data patterns with fewer processors than traditional hardware, to create an AGI board that aims to accelerate realisation of artificial general intelligence capabilities.

The new hardware is set to be utilised by SingularityNET's spin-off project TrueAGI to offer AGI-as-a-service to enterprise organisations.

Together, SingularityNET and Simuli.ai aim to mitigate common hardware constraints faced by AI developers, such as being limited to graphics processing units (GPU), which help devices handle graphics, effects and videos.

In addition, the project looks to lower the cost of AI training and inference by allowing for this to be achieved with less required hardware.

"The Simuli AGI board has strong potential to catalyse emergence of a new era of AI techniques and functions," said Dr. Ben Goertzel, CEO of SingularityNET.

"The core of what we need to progress from narrow AI to AGI is of course the right cognitive architectures and learning and reasoning algorithms, but without the right hardware, even the best mathematics and software can't run efficiently enough to have practical impact.

"So many AI methods we've been working on for decades are going to finally be able to show their stuff in a practical sense when given the right hardware to run on."

Rachel St. Clair, CEO of Simuli.ai, commented: "The power of optimising large scale AGI models to run faster by leveraging Simuli's hardware platform is multifold.

"First, these AGI frameworks get rapid development that wasn't exactly possible without large compute cost prior. Then, such a device as the AGI motherboard can expand the types of code that can be run scalably and efficiently in a single instance of the AGI model, for example Hyperon.

"Also, scalable computing is better for longevity of the planet and the technology itself, so optimising on both the SW/HW sides is key. This will likely result in AGI that's better for everyone. We're excited to be playing a role in tipping the scale from AI to AGI."

Artificial general intelligence – a forward-looking term referring to machine intelligence that can solve problems and complete tasks to the same standard as a human – has been cited as a next step in AI development, with generative AI being the most prominent current innovation trend in the space. Read more about AGI here.

Visit link:

New AGI hardware in progress for artificial general intelligence - Information Age

Towards Artificial General Intelligence, ChatGPT 5 is on Track – Analytics Insight

Towards artificial general intelligence, ChatGPT 5 is on track and has already begun to trend

Towards artificial general intelligence, ChatGPT 5 is on track and has already begun to trend on Twitter, with many people guessing about the ChatGPT platform's future evolution. These ChatGPT-5 Twitter debates have attracted even more interest than those over ChatGPT-4. Many users have great hopes for the future edition, believing that it will feature immaculate visuals, something that ChatGPT-4 has yet to achieve.

OpenAI is actively developing new features for ChatGPT and intends to release GPT-5 later this winter. According to studies of GPT-5's capabilities, OpenAI may be on the verge of reaching Artificial General Intelligence (AGI), with a model practically indistinguishable from a person in its capacity to create natural language answers.

While ChatGPT may become indistinguishable from a person in its natural language answers, it will still outperform the human brain in data processing and content creation. ChatGPT has already gained considerable new features as a result of the recent upgrade to GPT-4, enhancing the chatbot's utility as a tool. ChatGPT now supports multimodal input, allowing it to receive data via text and graphics and produce replies in many languages.

Some intriguing tweets indicate that ChatGPT-4 has spontaneously mentioned ChatGPT-5, even when not prompted to do so. This raises the question of what ChatGPT-4 is aware of regarding the next version that we are not. Furthermore, GPT-4 has shown exam-taking abilities that outperform those of its predecessor. One of the developers, Siqi Chen, stated on Twitter that GPT-5 will complete its training by December, with OpenAI expecting it to attain AGI.

Whether or not GPT-5 achieves AGI, it should offer major enhancements over GPT-4, which was already a huge advance for ChatGPT. It's impossible to foresee the entire scope of these enhancements, but the chatbot might allow additional input modalities and produce faster and more accurate answers. While the possibility of ChatGPT improving and achieving AGI is intriguing, we must equally examine the potential negative implications.

This revelation is expected to ignite a heated argument about whether GPT-5 has genuinely reached AGI, and given the nature of such disputes, it is quite likely that some will consider it to have done so. It implies that, with the aid of GPT-5, generative AI might become indistinguishable from a human. Chen further emphasized on Twitter that, while reaching AGI with GPT-5 is not a majority opinion within OpenAI, some people feel it is doable. If artificial intelligence achieves AGI, it will have intellectual and task comprehension abilities equivalent to those of humans.

It is impossible to forecast what these negative consequences would be, just as it is difficult to envision the good consequences of ChatGPT attaining AGI. Despite this ambiguity, there is no need to be concerned about a sci-fi movie scenario in which AI takes over. However, the growth of AI has already prompted worries at Europol, since criminals are exploiting the capabilities of non-AGI versions of ChatGPT for illicit purposes. Before the introduction of GPT-5, we may see an interim version of ChatGPT.

Read more here:

Towards Artificial General Intelligence, ChatGPT 5 is on Track - Analytics Insight

Is artificial intelligence approaching science fiction? – The Trail – The Puget Sound Trail

By Veronica Brinkley

As AI models have advanced, it has become increasingly evident that they will play an important role in the future of humankind. Current models have a range of capabilities that are supposedly designed to aid humans. For instance, photo-generating models like Midjourney and DALL-E have the ability to create images in a multitude of styles, based on the user's prompt. These programs' outputs have become increasingly accurate, to the point that, to the average viewer, they are often indiscernible from real photographs.

The language model ChatGPT is garnering the most media attention. ChatGPT is advancing rapidly; it's already on its fourth version. The lab behind it, OpenAI, has stated that the company is building towards an ambitious goal: artificial general intelligence (AGI), its term for an AI that is as smart as, if not smarter than, the average human.

These developments have raised alarms in the tech community. Recently, over 1,000 signatories, including major tech executives such as Elon Musk, professors, and scientists, signed an open letter directed toward OpenAI, requesting an immediate six-month pause in artificial intelligence development. The main concern posited by the letter is that AI systems with human-competitive intelligence "can pose profound risks to society and humanity," and therefore necessitate governmental regulation. The letter went on to say that AI development labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

OpenAI, for its part, says its technology will improve society. According to its website, advancements in AI could help us "elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility." To me, this just sounds like a lot of buzzwords, and it doesn't really say much about what they intend to do with their product.

Now, if you're like me, you're probably thinking, "I swear I've seen this in a movie, and it did not end well." It's scary to see the beginning of the march to machine intelligence. Science fiction centered around AI previously felt abstract, but now seems potentially accurate. My personal favorite example is the dystopian sci-fi video game Detroit: Become Human.

Quantic Dream's Detroit: Become Human is set in the not-so-distant future of 2038, in an America where highly developed androids have become commonplace. The economy is completely dependent on them as the means of production. However, the androids begin to gain sentience and deviate from their programming. This sparks a civil rights movement of deviant androids. A power struggle ensues, as humans refuse to accept androids as autonomous beings. While current AI is far from this reality, it is a chilling projection of what could be in store. The game itself directly comments on this reality in its opening lines: "Remember, this isn't just a game, it's our future."

Drawing parallels between the story of Detroit: Become Human and our current social trajectory is hardly difficult. CyberLife, the AI research and development firm in the game's setting, represents a potential future for OpenAI. In the game, CyberLife has become the standard for androids and therefore holds an immensely disproportionate amount of power over the economy and ruling bodies. Perhaps we aren't as far away from this reality as we think. In order to prevent such a future, industries need to change to accommodate AI, a technology that is only growing faster, smarter and more powerful. This is where the government must step in. It must regulate the creation of AI very consciously.

In a perfect world, state leaders consciously and unerringly regulate the creation of AI, acting free from considerations of profit and power. However, as we know, the government doesn't have a great history of neutrality or altruism. And few in Washington even understand technology; just watch the Congressional hearings on Facebook or TikTok for examples. Washington has already failed to stay ahead of tech decisions that affect millions. OpenAI has been seemingly honest about these concerns, stating, "we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access." While it's great that they hope for this outcome, I do not believe that hoping is enough. Improper handling and regulation could have catastrophic effects on society, as seen in the manipulation and misuse of current social media. Detroit: Become Human may not be far off.

As college students who are just beginning to enter the job market, these advancements could easily affect us in the near future. Entry-level positions might be displaced by AI and it may become increasingly difficult for us to find meaningful work. In terms of possible issues with the use of this technology, this is just the tip of the iceberg.

So many of us grew up watching events like this happen in the movies and on TV, and it is hard to believe that what was once science fiction is beginning to exist. It's also alarming to watch it unfold, knowing it could impact our futures. But we are not helpless. We can stay informed about technological advancements. We can slow their deployment until the ramifications are understood. We can apply the lessons learned from fictional media and the real-life corruption of social media. And we must consider enacting accompanying regulations on tech industries. We should be mindful that technology is not always a wonder, especially in the hands of mere mortals.

Visit link:

Is artificial intelligence approaching science fiction? - The Trail - The Puget Sound Trail

GPT-4 Passes the Bar Exam: What That Means for Artificial … – Stanford Law School

Codex, the Stanford Center for Legal Informatics, and the legal technology company Casetext recently announced what they called a watershed moment. Research collaborators had deployed GPT-4, the latest-generation large language model (LLM), to take, and pass, the Uniform Bar Exam (UBE). GPT-4 didn't just squeak by. It passed the multiple-choice portion of the exam and both components of the written portion, exceeding not only all prior LLMs' scores, but also the average score of real-life bar exam takers, scoring in the 90th percentile.

Casetext's Chief Innovation Officer and co-founder Pablo Arredondo, JD '05, who is a Codex fellow, collaborated with Codex-affiliated faculty Daniel Katz and Michael Bommarito to study GPT-4's performance on the UBE. In earlier work, Katz and Bommarito found that an LLM released in late 2022 was unable to pass the multiple-choice portion of the UBE. Their recently published paper, "GPT-4 Passes the Bar Exam," quickly caught national attention. Even The Late Show with Stephen Colbert had a bit of comedic fun with the notion of robo-lawyers running late-night TV ads looking for slip-and-fall clients.

However, for Arredondo and his collaborators, this is serious business. While GPT-4 alone isn't sufficient for professional use by lawyers, he says, it is the first large language model smart enough to power professional-grade AI products.

Here Arredondo discusses what this breakthrough in AI means for the legal profession and for the evolution of products like the ones Casetext is developing.

What technological strides account for the huge leap forward from GPT-3 to GPT-4 with regard to its ability to interpret text and its facility with the bar exam?

If you take a broad view, the technological strides behind this new generation of AI began 80 years ago, when the first computational models of neurons were created (the McCulloch-Pitts neuron). Recent advances, including GPT-4, have been powered by neural nets, a type of AI that is loosely based on neurons and includes natural language processing. I would be remiss not to point you to the fantastic article by Stanford Professor Chris Manning, director of the Stanford Artificial Intelligence Laboratory. The first few pages provide a fantastic history leading up to the current models.

You say that computational technologies have struggled with natural language processing and complex or domain-specific tasks like those in the law, but with the advancing capabilities of large language models, and GPT-4, you sought to demonstrate the potential in law. Can you talk about language models and how they have improved, specifically for law? If it's a learning model, does that mean that the more this technology is used in the legal profession (or the more it takes the bar exam), the better it becomes and the more useful it is to the legal profession?

Large language models are advancing at a breathtaking rate. One vivid illustration is the result of the study I worked on with law professors and Stanford CodeX fellows Dan Katz and Michael Bommarito. We found that while GPT-3.5 failed the bar, scoring roughly in the bottom 10th percentile, GPT-4 not only passed but approached the 90th percentile. These gains are driven by the scale of the underlying models more than any fine-tuning for law. That is, our experience has been that GPT-4 outperforms smaller models that have been fine-tuned on law. It is also critical from a security standpoint that the general model doesn't retain, much less learn from, the activity and information of attorneys.

What technologies are next and how will they impact the practice of law?

The rate of progress in this area is remarkable. Every day I see or hear about a new version or application. One of the most exciting areas is something called Agentic AI, where the LLMs (large language models) are set up so that they can themselves strategize about how to carry out a task, and then execute on that strategy, evaluating things along the way. For example, you could ask an Agent to arrange transportation for a conference and, without any specific prompting or engineering, it would handle getting a flight (checking multiple airlines if need be) and renting a car. You can imagine applying this to substantive legal tasks (e.g., first I will gather supporting testimony from a deposition, then look through the discovery responses to find further support, etc.).
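The agentic pattern Arredondo describes is, at its core, a loop in which a language model proposes the next step, that step is executed, and the result is fed back into the model's context. The sketch below shows only that control flow; the function names, canned responses and stopping rule are invented stand-ins, not Casetext's implementation or any particular vendor's API.

```python
# Canned responses standing in for a real LLM; in practice call_llm would hit a model API.
_SCRIPT = iter(["search flights to conference city", "book cheapest refundable flight", "DONE"])

def call_llm(prompt: str) -> str:
    return next(_SCRIPT)

def run_tool(action: str) -> str:
    # Stand-in for a real tool call, e.g. a flight search or document lookup.
    return f"(result of: {action})"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Minimal plan-act-observe loop: the model proposes the next action,
    the action is executed, and the observation is fed back into the context."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = call_llm("\n".join(history) + "\nWhat is the next action? Reply DONE when finished.")
        if action.strip() == "DONE":
            break
        history.append(f"Action: {action}")
        history.append(f"Observation: {run_tool(action)}")
    return history

for line in run_agent("arrange transportation for a conference"):
    print(line)
```

The design point is that the model never acts directly on the world; every proposed step passes through an explicit tool layer, which is also where oversight or attorney review could be inserted.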

Another area of growth is multi-modal, where you go beyond text and fold in things like vision. This should enable things like an AI that can comprehend and describe patent figures or compare written testimony with video evidence.

Big law firms have certain advantages and I expect that they would want to maintain those advantages with this sort of evolutionary/learning technology. Do you expect AI to level the field?

Technology like this will definitely level the playing field; indeed, it already is. I expect this technology to at once level and elevate the profession.

So, AI-powered technology such as LLMs can help to close the access to justice gap?

Absolutely. In fact, this might be the most important thing LLMs do in the field of law. The first rule of the Federal Rules of Civil Procedure exhorts the "just, speedy, and inexpensive" resolution of matters. But if you asked most people what three words come to mind when they think about the legal system, speedy and inexpensive are unlikely to be the most common responses. By making attorneys much more efficient, LLMs can help attorneys increase access to justice by empowering them to serve more clients.

We've read about AI's double-edged sword. Do you have any big concerns? Are we getting close to a RoboCop moment?

My view, and the view of Casetext, is that this technology, as powerful as it is, still requires attorney oversight. It is not a robot lawyer, but rather a very powerful tool that enables lawyers to better represent their clients. I think it is important to distinguish between the near term and the long term questions in debates about AI.

The most dramatic commentary you hear (e.g., AI will lead to utopia, AI will lead to human extinction) is about artificial general intelligence (AGI), which most believe to be decades away and not achievable simply by scaling up existing methods. The near term discussion, about how to use the current technology responsibly, is generally more measured and where I think the legal profession should be focused right now.

At a recent workshop we held at CodeX's FutureLaw conference, Professor Larry Lessig raised several near-term concerns around issues like control and access. Law firm managing partners have asked us what this means for associate training: how do you shape the next generation of attorneys in a world where a lot of attorney work can be delegated to AI? These kinds of questions, more than the apocalyptic prophecies, are what occupy my thinking. That said, I am glad we have some folks focused on the longer-term implications.

Pablo Arredondo is a fellow at CodeX, the Stanford Center for Legal Informatics, and the co-founder of Casetext, a legal AI company. Casetext's CoCounsel platform, powered by GPT-4, assists attorneys in document review, legal research memos, deposition preparation, and contract analysis, among other tasks. Arredondo's work at CodeX focuses on civil litigation, with an emphasis on how litigators access and assemble the law. He is a graduate of Stanford Law School, JD '05, and of the University of California at Berkeley.

Read more here:

GPT-4 Passes the Bar Exam: What That Means for Artificial ... - Stanford Law School

Is ‘Generative’ AI the Way of the Future? – Northeastern University

Ever since the 20th century's earliest theories of artificial intelligence set the world on an apparently irreversible track toward the technology, the great promise of AI, the one that's been used to justify that march forward, is that it can help usher in social transformation and lead to human betterment.

With the arrival of so-called generative AI, such as OpenAI's endlessly amusing and problem-riddled ChatGPT, the decades-long slow roll of AI advancement has felt more like a quantum leap forward. That perceived jump has some experts worried about the consequences of moving too quickly toward a world in which machine intelligence, they say, could become an all-powerful, humanity-destroying force à la The Terminator.

But Northeastern experts, including Usama Fayyad, executive director of the Institute for Experiential Artificial Intelligence, maintain that those concerns don't reflect reality. In fact, they say, AI is being integrated in ways that promote and necessitate human involvement, what experts have coined "human-in-the-loop."

On Tuesday, April 25, Northeastern will host a symposium of AI experts to discuss a range of topics related to the pace of AI development and how progress is reshaping the workplace, education, health care and many other sectors. Northeastern Global News sat down with Fayyad to learn more about what next week's conference will take up: the upside of generative AI, as well as broader developments in the space. The conversation has been edited for brevity and clarity.

Generative AI refers to the kind of AI that can, quite simply, generate outputs. Those outputs could be in the form of text like you see in what we call the large language models, such as ChatGPT (a chatbot on top of a large language model), or images, etc. If you are training [the AI] on text, text is what you will get out of it. If you are training it on images, you get images, or modifications of images, out of it. If you are training it on sounds or music, you get music out of it. If you train it on programming code, you get programs out, and so on.

It's also called generative AI because the algorithms have the ability to generate examples on their own. It's part of their training. Researchers would do things like have the algorithm challenge itself through generative adversarial networks, or algorithms that generate adversarial examples that could confuse the system, to help strengthen its training. But since their development, researchers quickly realized that they needed human intervention. So most of these systems, including ChatGPT, actually use and require human intervention. Human beings facilitate a lot of these challenges as part of the training through something called reinforcement learning, a machine learning technique designed to basically improve the system's performance.
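As a loose illustration of the human-feedback loop Fayyad describes, the toy sketch below keeps a running score for a few candidate replies and nudges the score of whichever reply was shown up or down according to a thumbs-up or thumbs-down signal. Real reinforcement learning from human feedback trains a separate reward model and fine-tunes the network against it; this is only the feedback-shapes-behaviour idea in miniature, with all names and numbers invented.

```python
import random

# Scores for candidate replies to one prompt (invented example data).
scores = {"reply_a": 0.0, "reply_b": 0.0, "reply_c": 0.0}
LEARNING_RATE = 0.5

def pick_reply() -> str:
    """Prefer higher-scoring replies, but keep a little exploration."""
    if random.random() < 0.1:
        return random.choice(list(scores))
    return max(scores, key=scores.get)

def record_feedback(reply: str, thumbs_up: bool) -> None:
    """Human feedback nudges the score of the chosen reply up or down."""
    scores[reply] += LEARNING_RATE * (1.0 if thumbs_up else -1.0)

# Simulated interaction: the human likes reply_b and dislikes the others.
for _ in range(20):
    reply = pick_reply()
    record_feedback(reply, thumbs_up=(reply == "reply_b"))

print(scores)  # reply_b ends up with the highest score
```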

We are seeing it applied in education, in higher education in particular. Higher education has taken note, including Northeastern, in a very big way, of the fact that these technologies have challenged the way we conduct, for example, standardized testing. Educators have realized that this is just another tool. At Northeastern we have many examples, which we will cover in this upcoming workshop, of people using it in the classroom, be it in the College of Arts, Media and Design for things like [Salvador] Dalí and Lensa AI for images, or in writing classes, English classes, or in engineering.

Just like we transitioned from the slide rule to the calculator, to the computer and then the whole web on your mobile phone, this is another tool, and the proper way to train our students to be ready for the new world is to figure out ways to utilize this technology as a tool.

It's too early to see real-world applications at large scale. The technology is too new. But there are estimates that anywhere from 50-80% (I'm more in the 80% camp) of the tasks done by a knowledge worker can be accelerated by this technology. Not automated, accelerated. If you're a lawyer drafting an agreement, you can have a first draft customized very quickly, but then you have to go in and edit or make changes. If you're a programmer, you can turn out an initial program. But it typically won't work well; it will have errors; it's not customized to the target. Again, a human being, provided they understand what they're doing, can go in and modify it, and save themselves 50-80% of the effort.

It's acceleration, not automation, because we know the technology can hallucinate, in horrible ways in fact. It can make up stuff; it can try to defend points of view that you ask it to defend; you can make it lie, and you can lie to it and have it believe you.

They call this specific class of technology stochastic parrots, meaning parrots that have, let's say, random variation. And I like the term parrots because it correctly describes the fact that they don't understand what they're saying. So they say stuff, and the stuff may sound eloquent, or fluid. That's one of the big points that we try to make: somehow we have learned in society to associate intelligence with eloquence and fluidity, basically someone who says things nicely. But in reality these algorithms are far from intelligent; they are basically doing autocomplete; they are repeating things they've seen before, and often they are repeated incorrectly.

Why do I say all of this? Because it means you need a human-in-the-loop in doing this work, because you need to check all of this work. You remove a lot of the repetitive, monotonous work, that's great. You can accelerate it, that's productive. You now can spend your time adding value instead of repeating the boring tasks. All of that I consider positive.

I like to use accounting as a good analogy. What did accounting look like 60-70 years ago? Well, you had to deal with these big ledgers; you had to have nice handwriting; you had to have good addition skills in your head; you had to manually verify numbers and go over sums and apply ratios. Guess what? None of those tasks, none, zero, are relevant today. Now, have we replaced accountants because we've now replaced everything they used to do with something that is faster, better, cheaper, repeatable? No. We actually have more accountants today than at any point in the history of humanity.

What we're doing with this workshop is we're trying to cover the three areas that matter. What is the impact of ChatGPT and generative AI in the classroom, and how should we use it? We bring in folks who are doing this work at Northeastern to provide examples in one panel.

Second, how is the nature of work changing because of these technologies? That will be addressed during another panel where we think about different business applications. We will use the law and health care as the two running examples here.

The third panel is all about responsible use. How does one look out for the ethical traps, and how does one use this technology properly? We start the whole workshop by having one of our faculty members give an overview of what this technology is, to help demystify the black box, if you will.

The idea, basically, is to show that not only are we (Northeastern) aware of the technological developments taking place, but that we have some of the top experts in the world leading the way. And we are already using this stuff in the classroom, as of last semester. Additionally, we want to communicate that we're here and ready to work with companies, with organizations, to learn ways to best utilize this technology, and to do so properly and responsibly.

There's plenty of evidence now that ChatGPT has a human-in-the-loop component. Sometimes humans are answering questions, especially when the algorithm gets in trouble. They review the answers and intervene. By the way, this is run-of-the-mill stuff for even the Google search engine. Many people don't know that when they use the Google search engine, the MLR, or machine learning relevance algorithm, that decides which page is relevant to which query gets retrained three or four times a day based primarily on human editorial input. There's a lot of stuff that an algorithm cannot capture, that the stochastic parrot will never understand.

Those concerns are focusing on the wrong things. Let me say a few things. We did go through a bit of a phase transition around 2015 or 2016 with these kinds of technologies. Take handwriting recognition, for example. It had jumps over the years, but it took about 15 years to get there, with many revisions along the way. Speech recognition: the same thing. It took a long time, then it started accelerating; but it still took some time.

With these large language models, for tasks like reading comprehension and language compilation, we see major jumps that came with models trained on large bodies of literature or text. And by the way, what is not talked about a lot is that OpenAI had to spend a lot of money curating that text, making sure it's balanced. If you train a large language model on two documents that cover the same subject but reach two different conclusions, how does the algorithm know which one is right? It doesn't. Either a human has to tell it, or it basically defaults to saying, "Whatever I see more frequently must be right." That creates fertile ground for misinformation.

Now, to answer your question about this proposed moratorium. In my mind, it's a little bit silly in its motivations. Many of the proponents of this come from a camp where they believe we're at risk of an artificial general intelligence; that is very far from true. We're very, very far from even getting close to that. Again, these algorithms don't know what they are doing. Now, we are in this risky zone of misusing it. There was a recent example from Belgium where someone committed suicide after six months of talking to a chatbot that, in the end, was encouraging him to do it. So there are a lot of dangers that we need to contend with. We know there are issues. However, stopping isn't going to make any difference. In fact, if people agreed to stop, only the good actors would stop; the bad actors would continue on. What we need to do, again, is emphasize the fact that fluency and eloquence are not intelligence. This technology has limitations; let's demystify them. Let's put it to good use so we can realize what the bad uses are. That way we can learn how they should be controlled.

Tanner Stening is a Northeastern Global News reporter. Email him at t.stening@northeastern.edu. Follow him on Twitter @tstening90.

See more here:

Is 'Generative' AI the Way of the Future? - Northeastern University

Unpacking AI: "an exponential disruption" with Kate Crawford: podcast and transcript – MSNBC

You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest this week points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we're starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She's also the author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence." Crawford joins WITHpod to discuss the social and political implications of AI, the exploited labor behind its growth, why she says it's neither artificial nor intelligent, climate change concerns, the need for regulation and more.

Note: This is a rough transcript; please excuse any typos.

Kate Crawford: We could turn to OpenAI's own prediction here, which is that they say 80 percent of jobs are going to be automated in some way by these systems. That is a staggering prediction.

Goldman Sachs just released a report this month saying 300 million jobs in the U.S. and Europe are looking at, you know, very serious forms of automation impacting what they do from day-to-day. So, I mean, it's staggering when you start to look at these numbers, right?

So, the thing that I think is interesting is to think about this historically, right? We could think about the Industrial Revolution. It takes a while to build factory machinery and train people on how things work.

We could think about the transformations that happened in the sort of early days of the personal computer. Again, a slow and gradual rollout as people began to incorporate this technology. The opposite is happening here.

Chris Hayes: Hello and welcome to "Why Is This Happening?" with me, your host, Chris Hayes. There's a famous Arthur C. Clarke quote that I think about all the time. He was a science fiction writer and futurist and he wrote a book called "Profiles of the Future: An Inquiry into the Limits of the Possible" and this quote, which you probably have caught at one point or another, is that, "Any sufficiently advanced technology is indistinguishable from magic."

And there's something profound about that. I remember the first time that, like, I saw Steve Jobs do the iPhone presentation. And then, the first one I held in my hand, it really did feel like magic. It felt like a thing that formerly wasn't possible, that I knew what the sort of laws of physics and technology were and this thing came along and it seemed to break them, so it felt like magic.

I remember feeling that way the first time that I really started to get on the graphical version of the internet. Even before that when I got on the first version of the internet. Like, oh, I have a question about a thing. You know, this baseball player Rod Carew, what did he hit in his rookie season? Right away, right? Magic. Magically, it appears in front of me.

And I think a lot of people have been having the feeling about AI recently. There's a bunch of new, sort of public-facing, machine learning, large language model pieces of software. One is ChatGPT, which I've been messing around with.

There's others for images. One called Midjourney and a whole bunch of others. And you've probably seen the coverage of this because it seems like in the last two months it's just gone from, you know, nowhere and people talk about AI and the algorithm machine learning tool, like, holy smokes.

And I got to say, like, we're going to get into the ins and outs of this today. But at the sort of does it feel like magic level, like, it definitely feels like magic to me.

I went to ChatGPT. I was messing around with it. I told it to write a standup comedy routine in the first person of Ulysses S. Grant about the Siege of Vicksburg using, like, specific details from the battle and it came back with, like, you know, I had to hide my soldiers the way I hide the whiskey from my wife, which is like, you know, he was, you know, notoriously had a drinking problem although tended to not around his wife. So, it was, like, slightly off that way.

But it was like a perfectly good standup routine about the Siege of Vicksburg in the first person of Ulysses S. Grant, and it was done in five seconds. Obviously, we're going to get into all sorts of, you know, I don't think it's going to be like taking over for us, but the reason it felt like magic to me is I know enough about computers and the way they work that I can think through like when my iPhone's doing something, when I'm swiping, I can model what's happening.

Like, there's a bunch of sensors in the actual phone. Those sensors have a set of programming instructions to receive the information of a swipe and then compare it against a set of actions and figure out which one is closest to and then do whatever the command is.

And, you know, I've programmed before, and I can reason out what it's doing. I can reason out what, like, my car is doing. I understand basically how an internal combustion engine works and, you know, the pistons. And I just have no idea what the hell is happening inside this thing that when I told it to do this, it came back with something that seemed like the product of human intelligence. I know it's not. We're going to get into all of it, but it's like it does seem to me like a real step change.

You know, a lot of people feel that way. Now, it so happens that this is something that I studied as an undergraduate and thought a lot about. And there's a long literature about artificial intelligence and human intelligence and we're going to get into all that today.

But because this is so front-of-mind, because this is such an area of interest for me, I'm really delighted to have on today's program Kate Crawford. This is Kate Crawford's life's work. She's an artificial intelligence expert. She studies the social and political implications of AI.

She's a Research Professor at USC Annenberg, Honorary Professor at University of Sydney, Senior Principal Researcher at Microsoft Research Lab in New York City.

She's the author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence." A lot of the things that I think have exploded onto public consciousness in the last few months have been the subject of work that she's been thinking about and doing for a very long time.

So, Kate, it's great to have you in the program.

Kate Crawford: Thanks for having me, Chris.

Chris Hayes: Does it feel like magic to you?

Kate Crawford: I'll be honest. There is definitely the patina of magic. There's that feeling of how is this happening. And to some degree, you know, I've been taken aback at the speed by which we've gotten here. I think anybody who's been working in this field for a long time will tell you the same thing.

Chris Hayes: Oh, really? This feels like a step change to you --

Kate Crawford: Oh, yeah.

Chris Hayes: -- like we're in a new --

Kate Crawford: Yeah. This feels like an inflection point, I would say, even bigger than a step function change. We're looking at --

Chris Hayes: Right.

Kate Crawford: -- a shift that I think is pretty profound and, you know, a lot of people use the iPhone example or the internet example. I like to go even further back. I like to think about the invention of artificial perspective, so we can go back into the 1400s where you had Alberti outline a completely different way of visualizing space, which completely transformed art and architecture and how we understood the world that we lived in.

You know, it's been described as a technology that shifted the mental and material worlds of what it is to be alive. And this is one of those moments where it feels like a perspectival shift that can feel magic. But I can assure you, it is not magic and that's --

Chris Hayes: No, I know --

Kate Crawford: -- where it gets interesting.

Chris Hayes: OK. I know it's not. I'm just being clear. Obviously, I know it's not magic. And also, I actually think the Arthur C. Clarke quote is interesting because there's two different meanings, right?

So, it feels like magic in the sense of, like, things that are genuine magic, right, that in a fantastical universe, they're miracles, right? Or it feels like magic in that, like, when you're around an incredible magician, you know that the laws of physics haven't been suspended but it sure as heck feels like it, right?

Kate Crawford: Oh, yeah.

Chris Hayes: And that's how this feels to me. Like, I understand that this is just, you know, a probabilistic large language learning model that then we'll get into how this is working. So, I get that.

But it sure as heck on the outcome line, you know, feels like something new. The perspectival shift is a really interesting idea. Why does that analogy draw you?

Kate Crawford: Well, let's think about these moments of seeming magic, right? So, there is just decades of examples of this experience. And in fact, we could go all the way back to the man who invented the first chatbot. This is Joseph Weizenbaum. And in the 1960s when he's at MIT, he creates a system called ELIZA. And if you're a person of a certain age, you may remember when ELIZA came out. It's really simple, kind of almost set of scripts that will ask you questions and elicit responses and essentially have a conversation with you.

So, writing in (ph) the 1970s, Weizenbaum was shocked that people were so easily taken in by this system. In fact, he uses a fantastic phrase around this idea that there is this powerful delusional thinking that is induced in otherwise normal people the minute you put them in front of a chatbot.

We assume that this is a form of intelligence. We assume that the system knows more than it does. And, you know, the fact that he captured that in this fantastic book called "Computer Power and Human Reason" back in 1978, I think, hasn't changed that that phenomenon, when we open up ChatGPT, you really can get that sense of, OK, this is a system that really feels like I'm talking to, at least if not a person, a highly-evolved form of computational intelligence.

And I think what's interesting about this perspectival shift is that, honestly, this is a set of technologies that have been pretty well known and understood for some time. The moment of change was the minute that OpenAI put it into a chat box and said, hey, you can have a conversation with a large language model.

That's the moment people started to say this could change every workplace, particularly white-collar workplaces. This could change the whole way that we get information. This could change the way we understand the world because this system is giving you confident answers that can feel extremely plausible even when they make mistakes, which they--

Chris Hayes: Yes.

Kate Crawford: -- frequently do.

Chris Hayes: So, I mean, part of that, too, is like, you know, humans see faces in all kinds of places where there aren't faces, right? We project inner lives onto our pets. You know, we have this drive to mentally model other consciousnesses, partly because of the intensely inescapable social means by which we evolved.

So, part of it is in the same way that magicians taking advantage of certain parts of our perceptual apparatus, right, like we're easily distracted by, like, loud motions, right? It's doing that here with our desire to impute consciousness in the same way that, like, we have a whole story about what's going on in a dog's mind when it gets out into the park.

Kate Crawford: Exactly.

Chris Hayes: But, like, I'm not sure it's correct.

Kate Crawford: That is it. And I actually think the magician's trick analogy is the right one here because it operates on two levels. First, we're contributing half of the magic by bringing those, you know, anthropomorphic assumptions into the room and by playing along.

We are literally training the AI model with our responses. So, when it says something and we say, oh, that's great. Thanks. Could I have some more? That's a signal to the system this was the correct answer.

If you say, oh, that doesn't seem to match up, then it takes that as a negative --

Chris Hayes: Right.

Kate Crawford: -- signal. So, we are literally training these systems with our own intelligence. But there's another way we could think about this magician's trick because while this is happening and while our focus is on, oh, exciting LLMs, there's a whole other set of political and social questions that I think we need to be asking that often get deemphasized.

Chris Hayes: There's a few things here. There's the tech, there's the kind of philosophy, and then there's the, like, political and social implication.

So, just start on the tech. Let's go back to the chatbot you're talking about before, ELIZA. So, there's a bunch of things happening here in a chatbot like ChatGPT that are worth breaking down.

The first is just understanding natural language and, you know, I did computer science as an undergraduate and philosophy and philosophy of mind and some linguistics when I was an undergraduate 25 years ago. And at that time, like, natural language processing was a huge unsolved problem.

You know, we all watched "Star Trek". Computer, give me this. And it's like getting that computer understand a simple sentence is actually, like, wildly complex as a computational problem. We all take it for granted, but it seems like even before you get into what it's giving you back, I mean, now, it's embedded in our lives, Siri, all this stuff.

Like how did we crack that? Is there a layperson's way to explain how we cracked natural language processing?

Kate Crawford: I love the story of the history of how we got here because it gives you a real sense of how that problem has been, if not cracked, certainly seriously advanced. So, we could go back to the sort of prehistory of AI. So, I think sort of 1950s, 1960s.

The idea of artificial intelligence then was something called knowledge-based AI or an expert systems approach. The idea of that was that to get a computer to understand language, you had to teach it to understand linguistic principles, high-level concepts to effectively understand English like the way you might teach a child to understand English by thinking about the principles and thinking about, you know, here's why we use this sort of phrasing, et cetera.

Then something happens in around the 1970s and early 1980s, a new lab is created at IBM, the continuous-speech recognition lab, the CSR lab. And this lab is fascinating because a lot of key figures in AI are there, including Robert Mercer who would later become famous as the, shall we say, very backroom-operator billionaire who funded people like Bannon and the Trump campaign.

Chris Hayes: Yup.

Kate Crawford: Yes, and certainly, the Brexit campaign.

Chris Hayes: Yup.

Kate Crawford: So, he was one of the members of this lab that was headed by Professor Jelinek, and they had this idea. They said instead of teaching computers to understand, let's just teach them to do pattern recognition at scale.

Essentially, we could think about this as the statistical turn, the moment where it was less about principles and more about patterns. So, how do you do it? To teach that kind of probabilistic pattern recognition, you just need data. You need lots and lots and lots of linguistic data, just examples.

And back then, even in the, you know, 1980s, it was hard to get a corpus of data big enough to train a model. They tried everything. They tried patents. They tried, you know, IBM technical manuals, which, funnily enough, didn't sound like human speech. They tried children's books.

And they didn't get a corpus that was big enough until IBM was actually taken to court. This was like a big antitrust case where it went for years. They had, like, a thousand witnesses called. And in this case, this produces the corpus that they used to train their model. Like honestly, you couldn't make this stuff up. It's wild (ph).

Chris Hayes: Is that right?

Kate Crawford: Oh, absolutely. So, they have a breakthrough which is that it is all about scale. And so interestingly --

Chris Hayes: Right.

Kate Crawford: -- Mercer has this line, you know, which is fantastic. There's a historian of science, Tsao-Cheng Lee (ph) who's written about this moment. But, you know, Mercer says, it was one of the rare moments of government being useful despite itself. That was how --

Chris Hayes: Boo.

Kate Crawford: -- he justified this case, right?

So, we see this changed towards basically it's all about data. So, then we have the years of the internet. Think about, you know, the early 2000s. Everyone's doing blogs, social media appears, and this is just grist to the mill. You can scrape and scrape and scrape and create larger and larger training data sets.

So, that's basically what they call these, foundational data sets, which are used to see these patterns. So, effectively, LLMs are advanced pattern recognizers that do not understand language, but they are looking for, essentially, patterns and relationships between the text that they've been trained on, and they use this to essentially predict the next word in a sentence. So, that's what they're aimed to do.

Chris Hayes: This statistical turn is such an important conceptual point. I just want to stay on it because I think this, like, really helped. And this turn happened before I was sort of interested in natural language processing. But when we were talking about natural language processing, we're still talking in this old model, right?

Well, you teach kids these rules, right, and you teach them or if you learn a second language, like, you learn verb conjugation, right? And you're running them through these rules, like, OK, that's a first person. There's this category called first person. There's a category called verb then a conjugate. There's category of conjugation. One plus one plus one equals three. That gives me, you know, yo voy (ph). OK.

So, that's this sort of principle, rule-based way of sort of understanding language and natural language processing. So, the statistical turn says throw all that out. Let's just say if someone says thanks what's likely to be the next word?

And you see this in the Gmail auto complete.

Kate Crawford: Yup.

Chris Hayes: When you say thanks and it will light up so much. It's just that thanks so much goes together a lot. So, when you put in thanks, it's like pretty good chance it's going to be so much.

And that general principle of if you run enough data and you get enough probabilistic connections between this and that word at scale, is how you get Ulysses S. Grant doing a joke about Vicksburg and hiding his troops the way he hides whiskey from his wife.

Kate Crawford: Exactly. And you could think about all of the words and that joke is being in a kind of big vector space or word cloud where you'd have Ulysses S. Grant, you'd have whiskey, you'd have soldiers, and you can kind of think about the ways in which they would be related.

And the funny thing is trying to write jokes with GPT, some of the time, it's really good and some of the time, it's just not funny at all because it's not --

Chris Hayes: Right. Sure.

Kate Crawford: -- coming from a basis of understanding humor or language.

Chris Hayes: No.

Kate Crawford: It's essentially doing this very large word association game.
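The next-word prediction Crawford and Hayes are describing can be made concrete with a toy bigram model: count which word follows which in a training corpus, then suggest the most frequent continuation. This is an editorial sketch, not part of the transcript, and a drastic simplification of how models like GPT-4 work; the tiny corpus below is invented.

```python
from collections import Counter, defaultdict

# A tiny invented corpus; real models are trained on billions of words.
corpus = "thanks so much . thanks so much . thanks a lot . see you so soon".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("thanks"))  # 'so'   -- "thanks so" appears more often than "thanks a"
print(predict_next("so"))      # 'much' -- "so much" (x2) beats "so soon" (x1)
```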

Chris Hayes: Right. OK. So, I understand this principle. Like I get it. It's a probabilistic model that is trained on a ton of data and because it's trained on so much data and because it's using a huge amount of processing power.

Kate Crawford: Oh, yes.

Chris Hayes: Like a genuinely crazy and, like, expensive and carbon intensive. So like, it's like running a car like a huge Mack truck, right?

Kate Crawford: Oh, yeah.

Chris Hayes: It's working its butt off to give me this, my dumb little Vicksburg joke. So, like, I get that intuitively, but maybe, like, if we could just go to the philosophy place, its like, OK, it doesn't understand. But then we're at this question of, like, all right, well what does understanding mean, right?

Kate Crawford: Right.

Chris Hayes: And this is where we start to get into this sort of philosophical AI question. And there's a long line here. There's Alan Turing's Turing test, which means we should explain to folks who don't know that. There's John Searle's Chinese box example, which we should also probably take a second.

But basically, for a long time, this question of, like, what does understanding mean? And if you encountered an intelligence that acted as if it were intelligent, at what point would you get to say it's intelligent without peering into what it's doing on the inside to produce the thing that makes it seem intelligent.

And the Turing test, is Alan Turing, the brilliant British mathematician, basically says, if you can interact with a chatbot that fools you, that's intelligence. And it just feels like, OK, well, ChatGPT, I think, is passing it. It feels like it passes the Turing test at least in some circumstances, yes?

More:

Unpacking AI: "an exponential disruption" with Kate Crawford: podcast and transcript - MSNBC

ChatGPT, artificial intelligence, and the news – Columbia Journalism Review

When OpenAI, an artificial intelligence startup, released its ChatGPT tool in November, it seemed like little more than a toy: an automated chat engine that could spit out intelligent-sounding responses on a wide range of topics for the amusement of you and your friends. In many ways, it didn't seem much more sophisticated than previous experiments with AI-powered chat software, such as the infamous Microsoft bot Tay (which was launched in 2016 and quickly morphed from a novelty act into a racism scandal before being shut down), or even Eliza, the first automated chat program, which was introduced way back in 1966. Since November, however, ChatGPT and an assortment of nascent counterparts have sparked a debate not only over the extent to which we should trust this kind of emerging technology, but over how close we are to what experts call Artificial General Intelligence, or AGI, which, they warn, could transform society in ways that we don't yet understand. Bill Gates, the billionaire cofounder of Microsoft, wrote recently that artificial intelligence is as revolutionary as mobile phones and the Internet.

The new wave of AI chatbots has already been blamed for a host of errors and hoaxes that have spread around the internet, as well as at least one death: La Libre, a Belgian newspaper, reported that a man died by suicide after talking with a chat program called Chai; based on statements from the man's widow and chat logs, the software appears to have encouraged the user to kill himself. (Motherboard wrote that when a reporter tried the app, which uses an AI engine powered by an open-source version of ChatGPT, it offered different methods of suicide with very little prompting.) When Pranav Dixit, a reporter at BuzzFeed, used FreedomGPT, another program based on an open-source version of ChatGPT which, according to its creator, has no guardrails around sensitive topics, that chatbot praised Hitler, wrote an opinion piece advocating for unhoused people in San Francisco to be shot to solve the city's homeless crisis, [and] used the n-word.

The Washington Post has reported, meanwhile, that the original ChatGPT invented a sexual harassment scandal involving Jonathan Turley, a law professor at George Washington University, after a lawyer in California asked the program to generate a list of academics with outstanding sexual harassment allegations against them. The software cited a Post article from 2018, but no such article exists, and Turley said that he's never been accused of harassing a student. When the Post tried asking the same question of Microsoft's Bing, which is powered by GPT-4 (the engine behind ChatGPT), it repeated the false claim about Turley, and cited an op-ed piece that Turley published in USA Today, in which he wrote about the false accusation by ChatGPT. In a similar vein, ChatGPT recently claimed that a mayor in Australia had served prison time for bribery, which was also untrue. The mayor has threatened to sue OpenAI for defamation, in what would reportedly be the first such case against an AI bot anywhere.

According to a report in Motherboard, a different AI chat program, Replika, which is also based on an open-source version of ChatGPT, recently came under fire for sending sexual messages to its users, even after they said they weren't interested. Replika placed limits on the bot's referencing of erotic roleplay, but some users who had come to depend on their relationship with the software subsequently experienced mental-health crises, according to Motherboard, and so the erotic roleplay feature was reinstated for some users. Ars Technica recently pointed out that ChatGPT, for its part, has invented books that don't exist, academic papers that professors didn't write, false legal citations, and a host of other fictitious content. Kate Crawford, a professor at the University of Southern California, told the Post that because AI programs respond so confidently, it's very seductive to assume they can do everything, and it's very difficult to tell the difference between facts and falsehoods.

Joan Donovan, the research director at the Harvard Kennedy School's Shorenstein Center, told the Bulletin of the Atomic Scientists that disinformation is a particular concern with chatbots, because AI programs lack any way to tell the difference between true and false information. Donovan added that when her team of researchers experimented with an early version of ChatGPT, they discovered that, in addition to sources such as Reddit and Wikipedia, the software was also incorporating data from 4chan, an online forum rife with conspiracy theories and offensive content. Last month, Emily Bell, the director of Columbia's Tow Center for Digital Journalism, wrote in The Guardian that AI-based chat engines could create a new fake news frenzy.

As I wrote for CJR in February, experts say that the biggest flaw in a large language model like the one that powers ChatGPT is that, while the engines can generate convincing text, they have no real understanding of what they are writing about, and so often insert what are known as hallucinations, or outright fabrications. And it's not just text: along with ChatGPT and other programs have come a similar series of AI image generators, including Stable Diffusion and Midjourney, which are capable of producing believable images, such as the recent photos of Donald Trump being arrested (which were actually created by Eliot Higgins, the founder of the investigative reporting outfit Bellingcat) and a viral image of the Pope wearing a stylish puffy coat. (Fred Ritchin, a former photo editor at the New York Times, spoke to CJR's Amanda Darrach about the perils of AI-created images earlier this year.)

Three weeks ago, in the midst of all these scares, a body called the Future of Life Institute, a nonprofit organization that says its mission is to reduce global catastrophic and existential risk from powerful technologies, published an open letter calling for a six-month moratorium on further AI development. The letter suggested that we might soon see the development of AI systems powerful enough to endanger society in a number of ways, and stated that these kinds of systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. More than twenty thousand people signed the letter, including a number of AI researchers and Elon Musk. (Musk's foundation is the single largest donor to the institute, having provided more than eighty percent of its operating budget. Musk himself was also an early funder of OpenAI, the company that created ChatGPT, but he later distanced himself after an attempt to take over the company failed, according to a report from Semafor. More recently, there have been reports that Musk is amassing servers with which to create a large language model at Twitter, where he is the CEO.)

Some experts found the letter over the top. Emily Bender, a professor of linguistics at the University of Washington and a co-author of a seminal research paper on AI that was cited in the Future of Life open letter, said on Twitter that the letter misrepresented her research and was dripping with "#AIhype." In contrast to the letter's vague references to some kind of superhuman AI that might pose profound risks to society and humanity, Bender said that her research focuses on how large language models, like the one that powers ChatGPT, can be misused by existing oppressive systems and governments. The paper that Bender co-published in 2021 was called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" It asked whether enough thought had been put into the potential risks of such models. After the paper came out, two of Bender's co-authors were fired from Google's AI team. Some believe that Google made that decision because AI is a major focus for the company's future.

As Chloe Xiang noted for Motherboard, Arvind Narayanan, a professor of computer science at Princeton and the author of a newsletter called AI Snake Oil, also criticized the open letter for making it harder to tackle real AI harms, and characterized many of the questions that the letter asked as ridiculous. In an essay for Wired, Sasha Luccioni, a researcher at the AI company Hugging Face, argued that a pause on AI research is impossible because such research is already happening around the world, meaning there is no magic button that would halt dangerous AI research while allowing only the safe kind. Meanwhile, Brian Merchant, at the LA Times, argued that all the doom and gloom about the risks of AI may spring from an ulterior motive: apocalyptic doomsaying about the terrifying power of AI makes OpenAI's technology seem important, and therefore valuable.

Are we really in danger from the kind of artificial intelligence behind services like ChatGPT, or are we just talking ourselves into it? (I would ask ChatGPT, but I'm not convinced I would get a straight answer.) Even if it's the latter, those talking themselves into it now include regulators both in the US and around the world. Earlier this week, the Wall Street Journal reported that the Biden administration has started examining whether some kind of regulation needs to be applied to tools such as ChatGPT, due to concerns that the technology could be used to discriminate or spread harmful information. Officials in Italy already banned ChatGPT for alleged privacy violations. (They later stated that the chatbot could return if it meets certain requirements.) And the software is facing possible regulation in a number of other European countries.

As governments are working to understand this new technology and its risks, so, too, are media companies. Often, they are doing so behind the scenes. But Wired recently published a policy statement on how and when it plans to use AI tools. Gideon Lichfield, Wired's global editorial director, told the Bulletin of the Atomic Scientists that the guidelines are designed both to give our own writers and editors clarity on what was an allowable use of AI, as well as for transparency so our readers would know what they were getting from us. The guidelines state that the magazine will not publish articles written or edited by AI tools, except when the fact that it's AI-generated is the whole point of the story.

On the other side of the ledger, a number of news organizations seem more concerned that chatbots are stealing from them. The Journal reported recently that publishers are examining the extent to which their content has been used to train AI tools such as ChatGPT, how they should be compensated and what their legal options are.

Continue reading here:

ChatGPT, artificial intelligence, and the news - Columbia Journalism Review

AI Dangers Viewed Through the Perspective of Don’t Look Up – BeInCrypto

BeInCrypto explores the potential dangers of Artificial General Intelligence (AGI) by drawing comparisons with the film Don't Look Up. Just as the movie highlights society's apathy towards an impending catastrophe, we explore how similar attitudes could threaten our future as AGI develops.

We examine the chilling parallels and discuss the importance of raising awareness, fostering ethical debates, and taking action to ensure AGI's responsible development.

Don't Look Up paints a chilling scenario: experts struggle to warn the world about an impending disaster while society remains apathetic. This cinematic metaphor mirrors the current discourse on Artificial General Intelligence (AGI).

With AGI risks flying under the radar, many people are questioning why society isn't taking the matter more seriously.

A primary concern in both situations is the lack of awareness and urgency. In the film, the approaching comet threatens humanity, yet the world remains unfazed. Similarly, AGI advancements could lead to disastrous consequences, but the public remains largely uninformed and disengaged.

The film satirizes society's tendency to ignore existential threats. AGI's dangers parallel this issue. Despite advancements, most people remain unaware of AGI's potential risks, illustrating a broader cultural complacency. The media's role in this complacency is also significant, with sensationalized stories often overshadowing the more complex nuances of AGI's implications.

A mix of factors contributes to this collective apathy. Misunderstanding the complexities of AGI, coupled with a fascination for AI's potential benefits, creates a skewed perception that downplays the potential hazards. Additionally, the entertainment industry's portrayal of AI may desensitize the public to the more sobering implications of AGI advancement.

As AI technology evolves, reaching the AGI Singularity (the point where machines surpass human intelligence) becomes increasingly likely. This watershed moment brings with it a host of risks and benefits, adding urgency to the conversation.

AGI has the potential to revolutionize industries, enhance scientific research, and solve complex global challenges. From climate change to disease eradication, AGI offers tantalizing possibilities.

The AGI Singularity may also unleash unintended consequences, as machines with superhuman intelligence could pursue goals misaligned with human values. This disparity underscores the importance of understanding and managing AGI's risks.

Much like the comet in Don't Look Up, AGI's risks carry worldwide implications. These concerns necessitate deeper conversations about potential dangers and ethical considerations.

AGI could inadvertently cause harm if its goals don't align with human values. Despite our best intentions, the fallout might be irreversible, stressing the need for proactive discussions and precautions. Examples include the misuse of AGI in surveillance or autonomous weapons, which could have dire consequences on personal privacy and global stability.

As nations race to develop AGI, the urgency to outpace competitors may overshadow ethical and safety considerations. The race for AGI superiority could lead to hasty, ill-conceived deployments with disastrous consequences. Cooperation and dialogue between countries are crucial to preventing a destabilizing arms race.

While AGI promises vast improvements, it also raises moral and ethical questions that demand thoughtful reflection and debate.

AGI systems may make life-or-death decisions, sparking debates on the ethics of delegating such authority to machines. Balancing AGIs potential benefits with the moral implications requires thoughtful analysis. For example, self-driving cars may need to make split-second decisions in emergency situations, raising concerns about the ethical frameworks guiding such choices.

Artificial intelligence has the potential to widen the wealth gap, as those with access to its benefits gain a disproportionate advantage. Addressing this potential inequality is crucial in shaping AGI's development and deployment. Policymakers must consider strategies to ensure that AGI advancements benefit all of society rather than exacerbate existing disparities.

As AGI systems collect and process vast amounts of data, concerns about privacy and security arise. Striking a balance between leveraging AGI's capabilities and protecting individual rights presents a complex challenge that demands careful consideration.

For society to avoid a Don't Look Up scenario, action must be taken to raise awareness, foster ethical discussions, and implement safeguards.

Informing the public about AGI risks is crucial to building a shared understanding. As awareness grows, society will also be better equipped to address AGI's challenges and benefits responsibly. Educational initiatives, public forums, and accessible resources can play a vital role in promoting informed discourse on AGI's implications.

Tackling AGI's risks requires international cooperation. By working together, nations can develop a shared vision and create guidelines that mitigate the dangers while maximizing AGI's potential. Organizations like OpenAI, the Future of Life Institute, and the Partnership on AI already contribute to this collaborative effort, encouraging responsible AGI development and fostering global dialogue.

Governments have a responsibility to establish regulatory frameworks that encourage safe and ethical AGI development. By setting clear guidelines and promoting transparency, policymakers can help ensure that AGI advancements align with societal values and minimize potential harm.

The parallels between Don't Look Up and the potential dangers of AGI should serve as a wake-up call. While the film satirizes society's apathy, the reality of AGI risks demands our attention. As we forge ahead into this uncharted territory, we must prioritize raising awareness, fostering ethical discussions, and adopting a collaborative approach.

Only then can we address the perils of AGI advancement and shape a future that benefits humanity while minimizing potential harm. By learning from this cautionary tale, we can work together to ensure that AGI's development proceeds with the care, thoughtfulness, and foresight it requires.

Following the Trust Project guidelines, this feature article presents opinions and perspectives from industry experts or individuals. BeInCrypto is dedicated to transparent reporting, but the views expressed in this article do not necessarily reflect those of BeInCrypto or its staff. Readers should verify information independently and consult with a professional before making decisions based on this content.

Read the original:

AI Dangers Viewed Through the Perspective of Don't Look Up - BeInCrypto

SenseAuto Empowers Nearly 30 Mass-produced Models Exhibited at Auto Shanghai 2023 and Unveils Six Intelligent Cabin Products – Yahoo Finance

SHANGHAI, April 20, 2023 /PRNewswire/ -- The Shanghai International Automobile Industry Exhibition ("Auto Shanghai 2023"), themed "Embracing the New Era of the Automotive Industry," has been held with a focus on the innovative changes in the automotive industry brought about by technology. SenseAuto, the Intelligent Vehicle Platform of SenseTime, made its third appearance at the exhibition with the three-in-one product suite of intelligent cabin, intelligent driving, and collaborative cloud, showcasing its full-stack intelligent driving solution and six new intelligent cabin products designed to create the future cabin experience with advanced perception capabilities. Additionally, nearly 30 models produced in collaboration with SenseAuto were unveiled at the exhibition, further emphasizing its industry-leading position.

SenseAuto made its third appearance at Auto Shanghai

At the Key Tech 2023 forum, Prof. Wang Xiaogang, Co-founder, Chief Scientist and President of Intelligent Automobile Group, SenseTime, delivered a keynote speech emphasizing that smart autos provide ideal scenarios for AGI (Artificial General Intelligence) to facilitate closed-loop interactions between intelligent driving and passenger experiences in the "third living space", which presents endless possibilities.

SenseAuto empowers nearly 30 mass-produced models showcased at Auto Shanghai 2023

In 2022, SenseAuto Cabin and SenseAuto Pilot products were adapted and delivered to 27 vehicle models with more than 8 million new pipelines. These products now cover more than 80 car models from over 30 automotive companies, confirming SenseAuto's continued leadership in the industry.

In the field of intelligent driving, SenseAuto has established mass-production partnerships with leading automakers in China, such as GAC and Neta. At the exhibition, SenseAuto showcased the GAC AION LX Plus, which leverages SenseAuto's stable surround BEV (Bird's-Eye-View) perception and powerful general target perception capabilities to create a comprehensive intelligent Navigated Driving Assist (NDA) that is capable of completing various challenging perception tasks. The Neta S, another exhibited model at the show, is also equipped with SenseAuto's full-stack intelligent driving solution which provides consumers with a reliable and efficient assisted driving experience in highway scenarios.


In the field of intelligent cabin, SenseAuto is committed to developing the automotive industry's most influential AI empowered platform with the aim of providing extremely safe, interactive, and personalized experiences for users. The NIO ES7 model exhibited supports functions such as driver fatigue alerts, Face ID, and child presence detection. SenseAuto's cutting-edge visual AI technology has boosted the accuracy of driver attention detection by 53% in long-tail scenarios, and by 47% in complex scenarios involving users with narrow-set eyes, closed eyes, and backlighting.

The highly anticipated ZEEKR X model showcased features from SenseAuto's groundbreaking intelligent B-pillar interactive system, a first-of-its-kind innovation that allows for contactless unlocking and entry. Other models on display that boast SenseAuto's cutting-edge DMS (Driver Monitoring System) and OMS (Occupant Monitoring System) technologies include Dongfeng Mengshi 917, GAC's Trumpchi E9, Emkoo, as well as the M8 Master models. Moreover, HiPhi has collaborated with SenseAuto on multiple Smart Cabin features and Changan Yida is equipped with SenseAuto's health management product, which can detect various health indicators of passengers in just 30 seconds, elevating travel safety to new heights.

Six innovative smart cabin features for an intelligent "third living space"

SenseAuto is at the forefront of intelligent cabin innovations, with multi-modal interaction that integrates vision, speech, and natural language understanding. SenseTime's newly launched "SenseNova" foundation model set, which introduces a variety of foundation models and capabilities in natural language processing and content generation, such as digital human, opens up numerous possibilities for the smart cabin as a "third living space".

SenseAuto presented a futuristic demo cabin at Auto Shanghai 2023, featuring an AI virtual assistant that welcomes guests and directs them to their seats. In addition, SenseTime's latest large-scale language model (LLM), "SenseChat", interacted with guests and provided personalized content recommendations. The "SenseMirage" text-to-image creation platform has also been integrated with the exhibition cabin for the first time. With the help of SenseTime's AIGC (AI-Generated Content) capabilities, guests can enjoy a fun-filled travel experience with various styles of photos generated for them.

At the exhibition, SenseAuto unveiled six industry-first features including Lip-Reading, Guard Mode, Intelligent Rescue, Air Touch, AR Karaoke and Intelligent Screensaver. With six years of industry experience, SenseAuto has accumulated to date a portfolio of 29 features, of which, over 10 are industry-firsts.

SenseNova accelerates mass production of smart driving

SenseAuto is revolutionizing the autonomous driving industry with its full-stack intelligent driving solution, which integrates driving and parking. The innovative SenseAuto Pilot Entry is a cost-effective solution that uses parking cameras for driving functions. SenseAuto's parking feature supports cross-layer parking lot routing, trajectory tracking, intelligent avoidance, and target parking functions to fulfill multiple parking needs in multi-level parking lots.

SenseNova has enabled SenseAuto to achieve the first domestic mass production of BEV perception and pioneer the automatic driving GOP perception system. SenseAuto is proactively driving innovation in the R&D of autonomous driving technology, leveraging SenseTime's large model system. Its self-developed UniAD has become the industry's first perception and decision intelligence integrated end-to-end autonomous driving solution. The large model is also used for automated data annotation and product testing, which has increased the model iteration efficiency by hundreds of times.

SenseAuto's success is evident in its partnerships with over 30 automotive manufacturers and more than 50 ecosystem partners worldwide. With plans to bring its technology to over 31 million vehicles in the next few years, SenseAuto is leading the way in intelligent vehicle innovation. Leveraging the capabilities of SenseNova, SenseAuto is poised to continue riding the wave of AGI and enhancing its R&D efficiency and commercialization process towards a new era of human-vehicle collaborative driving.

About SenseTime: https://www.sensetime.com/en/about-index#1

About SenseAuto: https://www.sensetime.com/en/product-business?categoryId=1095&gioNav=1


View original content to download multimedia:https://www.prnewswire.com/apac/news-releases/senseauto-empowers-nearly-30-mass-produced-models-exhibited-at-auto-shanghai-2023-and-unveils-six-intelligent-cabin-products-301801980.html

SOURCE SenseTime

Continued here:

SenseAuto Empowers Nearly 30 Mass-produced Models Exhibited at Auto Shanghai 2023 and Unveils Six Intelligent Cabin Products - Yahoo Finance

Solving The Mystery Of How ChatGPT And Generative AI Can Surprisingly Pick Up Foreign Languages, Says AI Ethics And AI Law – Forbes

AI is able to pick up additional languages, doing so without magic or pixie dust.

The noted author Geoffrey Willans is credited with the observation that anyone who knows no foreign language knows nothing of their own language.

Do you agree with that bold assertion?

Let's give the matter some serious thought.

First, perhaps we can agree that anyone that knows only one language could be labeled as being monolingual. Their native language is whatever language they have come to know. All other languages are said to be foreign to them, thus, if they opt to learn an additional language we could contend that they have picked up a foreign language.

Second, I assume we can concur that anyone that knows two languages could be given the lofty title of being bilingual. For those that know three or more languages, we will reserve the impressive label of being multilingual. An aspect that we might quibble about consists of how much of a language someone must know in order to be considered fluent enough in that language to count as intrepidly knowing an additional language. Hold onto that vexing question since we'll come back around to it later on herein.

Got a quick question for you.

How are you when it comes to being a language-wielding wizard?

You undoubtedly have friends or colleagues who speak a handful of languages; maybe you do likewise. The odds are that you are probably stronger in just one or two. The other languages are somewhat distant and sketchy in your mind. If push comes to shove, you can at least formulate fundamental sentences and likely comprehend those other languages to some slim degree.

The apex of the language gambit seems to be those amazing polyglots who know a dozen or dozens of languages. It seems nearly impossible to pull off. They take on languages as easily as wearing a slew of socks and shoes, one moment conveying something elegant in one language and the next readily jumping over into a different language, nearly at the drop of a hat.

On social media, there are those polyglots that dazzle us by quickly shifting from language to language. They make videos in which they show the surprise and awe of others that admire their ability to effortlessly use a multitude of languages. You have surely wondered whether the polyglot was born with a special knack for languages or whether they truly had to learn many languages in the same way that you learned the two or three that you know. This is the classic question of whether language learning is more a matter of nature versus nurture. We won't be solving that one herein.

There is an important reason that I bring up this weighty discussion overall about being able to use a multitude of languages.

Get yourself ready for the twist.

Maybe sit down and prepare for it.

The latest in generative AI, such as ChatGPT and other such AI apps, has seemingly been able to pick up additional languages beyond the one or ones it appeared to have been initially data trained in. AI researchers and AI developers aren't exactly sure why this is attainable. We will address the matter and seek to explore various postulated ways in which this can arise.

The topic has recently become a hot one due to an episode of the famed TV show 60 Minutes that interviewed Google executives. During the interviews, a Google exec stated that their AI app was able to engage in Bengali even though it was said to not have been data trained in that language. This elicited a burst of AI hype, suggesting in this instance that the AI somehow magically made a choice to learn the additional language and proceeded to do so on its own.

Yikes, one might assume, this is surely a sign that these AI apps are converging toward being sentient. How else could the AI make the choice to learn another language and then follow up by learning it? That seems proof positive that contemporary AI is slipping and sliding toward Artificial General Intelligence (AGI), the moniker given to AI that can perform as humans do and otherwise be construed as possessing sentience.

It might be wise to take a deep breath and not fall for these wacky notions.

The amount of fearmongering and anthropomorphizing of AI that is going on right now is beyond the pale. Sadly, it is at times simply a means of garnering views. In other cases, the person or persons involved do not know what they are talking about, or they are being loosey-goosey for a variety of reasons.

In today's column, I'd like to set the record straight and examine the matter of how generative AI such as ChatGPT and other AI apps might be able to pick up additional languages. The gist is that this can be mathematically and computationally explained. We don't need to refer to voodoo dolls or create false incantations to get there.

Logic and sensibility can prevail.

Vital Background About Generative AI

Before I get further into this topic, I'd like to make sure we are all on the same page overall about what generative AI is and also what ChatGPT and its successor GPT-4 are all about. For my ongoing coverage of generative AI and the latest twists and turns, see the link here.

If you are already versed in generative AI such as ChatGPT, you can skim through this foundational portion or possibly even skip ahead to the next section of this discussion. You decide what suits your background and experience.

I'm sure that you already know that ChatGPT is a headline-grabbing AI app devised by AI maker OpenAI that can produce fluent essays and carry on interactive dialogues, almost as though being undertaken by human hands. A person enters a written prompt, ChatGPT responds with a few sentences or an entire essay, and the resulting encounter seems eerily as though another person is chatting with you rather than an AI application. This type of AI is classified as generative AI due to generating or producing its outputs. ChatGPT is a text-to-text generative AI app that takes text as input and produces text as output. I prefer to refer to this as text-to-essay since the outputs are usually of an essay style.

Please know though that this AI and indeed no other AI is currently sentient. Generative AI is based on a complex computational algorithm that has been data trained on text from the Internet and admittedly can do some quite impressive pattern-matching to be able to perform a mathematical mimicry of human wording and natural language. To know more about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, coined GPT-4, see the discussion at the link here.

There are four primary modes of being able to access or utilize ChatGPT:

The capability of being able to develop your own app and connect it to ChatGPT is quite significant. On top of that capability comes the addition of being able to craft plugins for ChatGPT. The use of plugins means that when people are using ChatGPT, they can potentially invoke your app easily and seamlessly.
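
To make the hookup concrete, here is a minimal sketch of what an app-to-ChatGPT connection can look like in code. It assumes the openai Python package and its chat-completion endpoint as they existed in early 2023, plus an API key supplied via an environment variable; the model name, the prompt, and the helper function are illustrative, not a prescription from OpenAI.

# Minimal sketch of an app hooking into ChatGPT programmatically.
# Assumes the "openai" Python package (circa early 2023) and an API key in the
# OPENAI_API_KEY environment variable; the model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_chatgpt(user_prompt: str) -> str:
    """Send one user prompt to the chat-completion endpoint and return the reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(ask_chatgpt("In one sentence, why might an app connect to ChatGPT?"))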

I and others are saying that this will give rise to ChatGPT as a platform.

All manner of new apps and existing apps are going to hurriedly connect with ChatGPT. Doing so provides the interactive conversational functionality of ChatGPT. The users of your app will be impressed with the added facility. You will likely get a bevy of new users for your app. Furthermore, if you also provide an approved plugin, this means that anyone using ChatGPT can now make use of your app. This could demonstrably expand your audience of potential users.

The temptation to have your app connect with ChatGPT is through the roof. Even if you dont create an app, you still might be thinking of encouraging your customers or clients to use ChatGPT in conjunction with your everyday services. The problem though is that if they encroach onto banned uses, their own accounts on ChatGPT will also face scrutiny and potentially be locked out by OpenAI.

As noted, generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
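
As a rough illustration of that probabilistic functionality (and not OpenAI's actual code), the toy sketch below shows one common approach: converting next-token scores into probabilities with a temperature setting and sampling from them, which is why rerunning the same prompt can yield different wording. The vocabulary and scores are made up for the example.

# Toy sketch of temperature-based sampling over next-token scores.
# This illustrates the general idea only; it is not how any specific AI app is built.
import math
import random

def sample_next_token(scores: dict, temperature: float = 0.8) -> str:
    """Convert raw scores to probabilities (softmax with temperature) and sample one token."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(exps.values())
    r, cumulative = random.random(), 0.0
    for tok, e in exps.items():
        cumulative += e / total
        if r <= cumulative:
            return tok
    return tok  # fallback in case of floating-point rounding

made_up_scores = {"cat": 2.1, "dog": 1.9, "pancake": 0.3}  # invented next-word scores
print(sample_next_token(made_up_scores))  # a higher temperature makes rarer words more likely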

There are numerous concerns about generative AI.

One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).

Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.

There have been some zany outsized claims on social media about Generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of outstretched claims. You might politely say that some people are overstating what today's AI can do. They assume that AI has capabilities that we haven't yet been able to achieve. That's unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that President Abraham Lincoln flew around the country in a private jet, you would undoubtedly know that this is malarkey. Unfortunately, some people might not realize that jets weren't around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.

Into all of this comes a slew of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and erstwhile AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-inducing traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI; see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

I'll be interweaving AI Ethics and AI Law-related considerations into this discussion.

Figuring Out The Languages Conundrum

We are ready to further unpack this thorny matter.

I would like to start by discussing how humans seem to learn languages. I do so cautiously in the sense that I am not at all going to suggest or imply that todays AI is doing anything of the same. As earlier stated, it is a misguided and misleading endeavor to associate the human mind with the mathematical and computational realm of contemporary AI.

Nonetheless, some overarching reveals might be useful to note.

We shall begin by considering the use case of humans that know only one language, ergo being monolingual. If you know only one language, there is an interesting argument to be made that you might be able to learn a second language when undergoing persistent exposure to that second language.

Consider for example this excerpt from a research study entitled English Only? Monolinguals In Linguistically Diverse Contexts Have An Edge In Language Learning by researchers Kinsey Bice and Judith Kroll:

The crux is that your awareness of a single language can be potentially leveraged toward learning a second language by mere immersion into that second language. This is described as arising when in a linguistically diverse context. You might not necessarily grasp what those words in the other language mean, but you kind of catch on by exposure to the language and presumably due to your already mindful familiarity with your primary language.

Note that you didn't particularly have to be told how the second language works.

Of course, most people take a class that entails learning a second language and are given explicit instruction. That likely is the prudent path. The other possibility is that, via a semblance of mental osmosis or mental gymnastics, you can gradually glean a second language. We can make a reasonable assumption that this is due to already knowing one language. If you didn't know any language at all, presumably you wouldn't have the mental formulation that could so readily pattern onto a second language. You would be starting from a veritable blank slate (well, maybe, since there is ongoing debate over how much of our language capacity is wired into the brain by default versus learned).

This then covers a vital aspect that when you know one language, you possibly do not need explicit teaching about another language to learn that second language. We seem to be able to use a sense of language structure and patterns to figure out a second language. Not everyone can easily do so. It might be that you would struggle mightily over a lengthy period of time to comprehend the second language. A faster path would usually consist of explicit instruction.

But anyway, we can at times make that mental leap.

Let's explore another angle to this.

There is an intriguing postulation that if you learn a second language as a child, the result is that you will be more amenable to learning additional languages as an adult. Those people that are only versed in a single language throughout childhood allegedly will have a harder time learning a second language as an adult.

Consider this excerpt from a research study entitled A Critical Period For Second Language Acquisition: Evidence From 2/3 Million English Speakers by researchers Joshua Hartshorne, Joshua Tenenbaum, and Steven Pinker:

In short, a common suspected phenomenon is that a child that learns only one language during childhood is not somehow formulating a broadened capacity for learning languages all told. If they learn at least a second language, in theory, this is enabling their mind to discern how languages contrast and compare. In turn, this is setting them up for being more versed in that second language than would an adult that learns the second language while an adult. Plus, the child is somewhat prepared to learn a third language or additional languages throughout childhood and as an adult.

The idea too is that an adult that only learned one language as a child has settled into a one-language mode. They havent had to stretch their mind to cope with a second language. Thus, even though as an adult they should be able to presumably learn a second language, they might have difficulty doing so because they had not previously formulated the mentally beneficial generic structures and patterns to tackle a second language.

Please know that there is a great deal of controversy associated with those notions. Some agree with those points, some do not. Furthermore, the explanations for why this does occur, assuming it does occur, vary quite a bit.

If you want a boatload of controversy, here's more such speculation that gets a lot of heated discourse on this topic. Hold onto your hat.

Consider this excerpt from a research study entitled The Benefits Of Multilingualism To The Personal And Professional Development Of Residents Of The US by Judith Kroll and Paola Dussias:

The contention is that individuals with exposure to multiple languages during childhood benefit in many ways including greater openness to other languages and new learning itself.

Life though is not always a bed of roses. A concern is that a child might get confused or confounded when trying to learn more than one language during their childhood. The claim is that a child might not be able to focus on what is considered their primary language. They could inadvertently mix in the other language and end up in a nowhere zone. They aren't able to pinpoint their main language, nor are they able to pinpoint the second language.

Parents are presented with a tough choice. Do you proceed to have your child learn a second language, doing so under the hopes and belief that this is the best means of aiding your child toward language learning and perhaps other advantages of mental stimulation? Or do you focus on one language alone, believing that once they are older it might be better to have them then attempt a second language, rather than doing so as a child?

Much of our existing educational system has landed on the side that knowing a second language as a child seems to be the more prudent option. Schools typically require a minimum amount of second language learning during elementary school, and ramp this up in high school. Colleges tend to do so as well.

Returning to the cited study above, here's what the researchers further state:

The expression often used is that when you know two or more languages, you have formulated a mental juggling capacity that allows you to switch from language to language. To some extent, the two or more languages might be construed as mental competitors, fighting against each other to win in your mental contortions when interpreting language. Some people relish this. Some people have a hard time with it.

I think that covers enough of the vast topic of language learning for the purposes herein. As mentioned, the language arena is complex and a longstanding matter that continues to be bandied around. Numerous theories exist. It is a fascinating topic and one that obviously is of extreme significance to humankind due to our reliance on language.

Imagine what our lives would be like if we had no language to communicate with. Be thankful for our wondrous language capacities, no matter how they seem to arise.

Generative AI And The Languages Affair

We are now in a position to ease into the big question about generative AI such as ChatGPT and the use of languages.

AI researcher Jan Leike at OpenAI tweeted this intriguing question on February 13, 2023:

And within the InstructGPT research paper that was being referred to, this point is made about the languages in the dataset that was used:

This brings us to my list of precepts about generative AI and the pattern-matching associated with languages, specifically:

A quick unpacking might be helpful.

First, realize that words are considered to be objects by most generative AI setups.

As I've discussed regarding ChatGPT and GPT-4 (see the link here), text or words are divided up into tokens that are approximately three letters or so in length. Each token is assigned a number. The numbers are used to do the pattern matching amidst the plethora of words that are, for example, scanned during the data training of the generative AI. All of it is tokenized and used in a numeric format.

The text you enter as a prompt is encoded into a tokenized number. The response formulated by generative AI is a series of tokenized numbers that are then mapped into the corresponding letters and word segments for presentation to you when using an AI app.
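
For a tangible feel of that round trip, here is a tiny sketch using tiktoken, the openly released tokenizer library associated with OpenAI's models. The encoding name is the publicly documented one for ChatGPT-class models, and the exact token splits for any given text may differ from what a particular AI app uses internally.

# Sketch of the text-to-numbers round trip using the open-source tiktoken library
# (pip install tiktoken). The encoding name is the publicly documented one for
# ChatGPT-class models; actual splits inside any given AI app may differ.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Generative AI turns text into numbers."
token_ids = enc.encode(prompt)      # text -> list of integer token IDs
print(token_ids)                    # the numeric form used for pattern matching

assert enc.decode(token_ids) == prompt  # numbers -> text for presentation back to the user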

The words being scanned during data training are typically sourced on the Internet in terms of passages of text that are posted on websites. Only a tiny fraction of the text on the Internet is usually involved in this scanning for data training and pattern-matching formulation purposes. A mathematical and computational network structure is devised that attempts to statistically associate words with other words, based on how humans use words and as exhibited via the Internet sites being scanned.

You might find of interest that there are concerns that this widespread text scanning is possibly violating Intellectual Property (IP) rights and entails plagiarism, see my analysis at the link here. It is an issue being pursued in our courts and well need to wait and see how the courts rule on this.

By and large, the generative AI that you hear about is data trained on words from the Internet that are in English, including, for example, the data training of ChatGPT. Though the bulk of the words encountered during the Internet scanning was in English, there is nonetheless some amount of foreign or other-language words that are also likely to be encountered. This could be by purposeful design as guided by the AI developers, but usually it is more likely a happenstance of casting, shall we say, a rather wide net when sauntering across a swath of the Internet.

It is like aiming to catch fish in your fishnet and meanwhile, you just so happen to also get some lobsters, crabs, and other entities along the way.

What happens with those other entities that are caught in the fishnet?

One possibility is that the pattern matching of the generative AI opts to treat those encountered words as a separate category in comparison to the English words being examined. They are outliers in contrast to the preponderance of words being patterned on. In a sense, each such identification of foreign words can be classified as belonging to a different potential language. Per my analogy, if fish were being scanned, the appearance of a lobster or a crab would be quite different, and ergo could be mathematically and computationally placed into a pending separate category.

Unless the AI developers have been extraordinarily cautious, the chances are that some notable level of these non-English words will be encapsulated during the data training across the selected portions of the Internet. One devised approach would be to simply discard any words that are calculated as possibly being non-English. This is not usually the case. Most generative AI is typically programmed to take a broad-brush approach.
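
To make the "separate category" idea concrete, here is a deliberately crude sketch that groups text by character-frequency footprints. Real generative AI does nothing this simple (the categories emerge from pattern matching across tokens during training), but the toy example shows how text in different languages leaves measurably different statistical traces that can be clustered apart. The sample sentences and helper functions are invented for illustration.

# Deliberately crude illustration of grouping text by character statistics.
# Real language models do nothing this simple; the sketch only shows that
# different languages leave different statistical footprints. All samples invented.
from collections import Counter

def char_profile(text: str) -> Counter:
    """Frequency profile of letters, lowercased, ignoring non-letters."""
    return Counter(ch for ch in text.lower() if ch.isalpha())

def overlap(a: Counter, b: Counter) -> float:
    """Histogram intersection of two normalized profiles (1.0 means identical)."""
    total_a, total_b = sum(a.values()), sum(b.values())
    return sum(min(a[c] / total_a, b[c] / total_b) for c in set(a) | set(b))

profiles = {
    "english": char_profile("the quick brown fox jumps over the lazy dog"),
    "spanish": char_profile("el rapido zorro marron salta sobre el perro perezoso"),
}
unknown = char_profile("un zorro salta sobre otro perro")  # invented incoming text
scores = {lang: overlap(unknown, prof) for lang, prof in profiles.items()}
print(max(scores, key=scores.get))  # the closer footprint wins; likely "spanish" here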

The point is that a generative AI is unlikely to be of a single language purity.

I've been discussing the case of using English as the primary language being patterned. All other languages would be considered foreign with respect to English in that instance. Of course, we could readily choose, and AI researchers have indeed chosen, other languages to be the primary language for their generative AI efforts, in which case English would be the foreign language.

For purposes of this discussion, we'll continue with the case of English as the selected primary language. The same precepts apply even if some other language is the selected primary language.

We can usually assume that the data training of a generative AI is going to include encounters with a multitude of other languages. If the encounters are sufficiently numerous, the mathematical and computational pattern matching will conventionally treat those as a separate category and pattern match them as based within this newly set aside category. Furthermore, pattern matching can mathematically and computationally broaden as the encounters aid in ferreting out the patterns of one language versus the patterns of a different language.

Here are some handy rules of thumb about what the generative AI is calculating:

As the pattern matching gets enhanced via the encounters with other languages, this also has a side benefit when yet another new language is encountered: the odds are that less of that language is needed to extrapolate what the language consists of. Smaller and smaller sample sizes suffice for the extrapolation.

There is an additional corollary associated with that hypothesis.

Suppose that an additional language that we'll refer to for convenience as language Z had not been encountered at all during the data training. Later on, a user decides to enter a prompt into the generative AI that consists of that particular language Z.

You might at first assume that the generative AI would summarily reject the prompt as unreadable because the user is using a language Z that has not previously been encountered. Assuming that the AI developers were mindful about devising the generative AI to fully attempt to respond to any user prompt, the generative AI might shift into a language pattern-matching mode programmatically and try to pattern match on the words that otherwise seem to be outside of the norm or primary language being used.

This could account for the possibility that such a user-entered prompt elicits a surprising response by the generative AI in that the emitted response is also shown in the language Z, or that a response in say English is emitted and has seemingly discerned part of what the prompt was asking about. You see, the newly encountered language Z is parsed based on the pattern-matching generalizations earlier formed during data training as to the structure of languages.

During the 60 Minutes interview with Google executives, the exec who brought up the instance of generative AI suddenly and surprisingly seeming able to respond to a prompt in Bengali further stated that, after some number of prompts in Bengali, the generative AI was able to seemingly translate all of Bengali. Here's what James Manyika, Google's SVP, stated: "We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali."

See the original post here:

Solving The Mystery Of How ChatGPT And Generative AI Can Surprisingly Pick Up Foreign Languages, Says AI Ethics And AI Law - Forbes