How Billionaires Like Jeff Bezos, Elon Musk and George Soros Pay Less Income Tax Than You And How You Can Replicate The Strategy. It’s Legal – Yahoo…

Most billionaires don't get to where they are by earning a salary. And that also means they might pay less income tax than Americans who make a living through wages.

According to a report from ProPublica, some billionaires in the U.S. paid little or no income tax relative to the vast amount of wealth they have accumulated over the years.

The report noted that Amazon.com Inc. founder Jeff Bezos did not pay a penny in federal income taxes in 2007 and 2011. It also pointed out that Tesla Inc. CEO Elon Musk paid no federal income tax in 2018 and that investing legend George Soros did the same three years in a row.

To be sure, billionaires do pay taxes; it's just that the amount is small compared with how much money they actually make. For instance, ProPublica's report showed that between 2014 and 2018, Bezos paid $972 million in total taxes on $4.22 billion of income. Meanwhile, his wealth grew by $99 billion, meaning his true tax rate was only 0.98% during this period.
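
For readers who want to check the math, here is a minimal sketch of the arithmetic behind ProPublica's "true tax rate," using only the figures quoted above; the roughly 23% conventional rate it also prints is simply taxes divided by reported income.

```python
# A minimal sketch of ProPublica's "true tax rate" arithmetic.
# Figures are the ones cited above for Bezos, 2014-2018.
taxes_paid = 0.972e9        # $972 million in total taxes
reported_income = 4.22e9    # $4.22 billion of reported income
wealth_growth = 99e9        # $99 billion increase in net worth

conventional_rate = taxes_paid / reported_income   # tax relative to income
true_tax_rate = taxes_paid / wealth_growth         # tax relative to wealth growth

print(f"Conventional effective rate: {conventional_rate:.1%}")   # ~23.0%
print(f"ProPublica 'true tax rate':  {true_tax_rate:.2%}")       # ~0.98%
```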

The reality is, billionaires build their wealth from assets like stocks and real estate. Their net worth goes up when these assets increase in value over time. But the U.S. tax system is not designed to capture the gains from such assets: Capital gains are typically taxed at lower rates than wages and salaries.

But of course, you don't need to be in the three-comma club to invest in these assets.

For many well-known billionaires, the bulk of their wealth is tied to the companies they helped create.

If these companies are publicly traded, retail investors can hop on the bandwagon simply by purchasing shares. For those who want to follow Bezos, check out Amazon (AMZN). If you want to bet on Musk, look into Tesla (TSLA).

Here's the neat part: When stocks go up in value, investors only pay tax on realized gains. In other words, if an investor doesn't sell anything, they don't have to pay capital gains tax even if their stock holdings have skyrocketed in value, because the gains are not realized.
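
To make the realized-versus-unrealized distinction concrete, here is a small illustrative sketch; the 20% long-term capital gains rate and the dollar amounts are assumptions chosen for the example, not figures from the report.

```python
# A minimal sketch of why unrealized gains go untaxed: capital gains tax
# applies only to the portion of the gain that is realized by selling.
# The 20% long-term rate and the share values below are illustrative assumptions.
LONG_TERM_CAP_GAINS_RATE = 0.20

def capital_gains_tax(cost_basis: float, current_value: float, fraction_sold: float) -> float:
    """Tax owed this year: only the sold fraction of the gain is 'realized'."""
    total_gain = current_value - cost_basis
    realized_gain = total_gain * fraction_sold
    return max(realized_gain, 0.0) * LONG_TERM_CAP_GAINS_RATE

# Holdings that grew from $1 million to $10 million:
print(capital_gains_tax(1_000_000, 10_000_000, 0.0))  # hold everything -> $0 tax
print(capital_gains_tax(1_000_000, 10_000_000, 1.0))  # sell everything -> $1,800,000 tax
```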

According to ProPublica, that's why some billionaires choose to borrow against their assets instead of selling them. Doing so gives the ultra-wealthy money to spend while deferring taxes on capital gains indefinitely.

That said, when they do sell their shares, they can still get hit with a substantial tax bill. After Musk sold a ton of Tesla shares in 2021, he tweeted that he would pay over $11 billion in taxes that year.

Another popular option for billionaires is real estate, which comes with plenty of tax advantages as well.

When you earn rental income from an investment property, you can claim deductions. These include expenses such as mortgage interest, property taxes, property insurance and ongoing maintenance and repairs.

There's also depreciation, which refers to the incremental loss of a property's value as a result of wear and tear. Real estate investors can claim depreciation for many years and accumulate significant tax savings over time.
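
As a rough illustration of how depreciation turns into tax savings, here is a simple straight-line calculation; the property value and marginal tax rate are assumed for the example, and the 27.5-year recovery period is the one commonly used for U.S. residential rental buildings.

```python
# A minimal sketch of straight-line depreciation on a rental property.
# Building value and the 24% marginal rate are illustrative assumptions;
# 27.5 years is the typical U.S. recovery period for residential rentals.
RECOVERY_PERIOD_YEARS = 27.5

building_value = 300_000     # only the structure depreciates, not the land
marginal_tax_rate = 0.24

annual_deduction = building_value / RECOVERY_PERIOD_YEARS    # ~$10,909 per year
annual_tax_savings = annual_deduction * marginal_tax_rate    # ~$2,618 per year

print(f"Annual depreciation deduction: ${annual_deduction:,.0f}")
print(f"Approximate annual tax savings: ${annual_tax_savings:,.0f}")
```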

The best part? The segment is becoming increasingly accessible to retail investors. There are publicly traded real estate investment trusts (REITs) that own income-producing real estate and pay dividends to shareholders. And if you don't like the stock market's volatility, there are also crowdfunding platforms that allow retail investors to invest directly in rental properties through the private market.

A Grand Unified Theory of Why Elon Musk Is So Unfunny – Rolling Stone

Elon Musk has had a busy few days. This weekend, he ordered the "w" in the sign on Twitter's San Francisco headquarters painted over, so that it read "Titter." Then, on Monday, he changed his Twitter display name to "Harry Blz" before tweeting, "Impersonating others is wrong!" He later added: "I'm just hoping a media org that takes itself way too seriously writes a story about Harry Blz." Then, on Tuesday, he announced that the site's unpaid legacy verification checks, formerly scheduled for removal on April Fools' Day, would now disappear on April 20, or 4/20, the stoner holiday to which Musk has winkingly referred on many occasions. That evening, he gave an interview to a BBC reporter and, in a couple of tweets afterward, pretended to confuse the news organization with the shorthand for the porn category "big black cock." Sharing part of the conversation, he commented, "Penetrating deep & hard with BBC."

For anyone who's remained a regular Twitter user since Musk's takeover of the platform last year, none of this is remotely surprising: he logs on every day and, aided by an algorithm that forces his posts onto everyone's feed, punishes us with a routine of garbled gags, corny jokes, and pilfered memes. (Full disclosure: Musk once re-posted a meme I made, and it still makes me feel unclean.) Yet Musk wasn't always so eager to have the public think him a funnyman, a persona he now cultivates with stunts like carrying a sink into Twitter HQ in October to mark his acquisition of the company, tweeting "let that sink in."

Years ago, in fact, Musk was content to appreciate comedy as a mere spectator, and in fairness, he was not without taste. In 2016, he tweeted admiringly of the absurdist humor in Samuel Beckett's timeless play Waiting for Godot, and recommended the hilariously awkward reality show Nathan for You. He also trumpeted a Tesla feature that allows you to play scenes from Monty Python's Flying Circus. He didn't, as he does now, obsess over his philosophy of humor per se. True, he might reply "Haha awesome 🙂" when tagged in a flattering meme about himself, and he never quite grasped the vernacular of comedy (he once called a Liam Neeson cameo in the sitcom Life's Too Short a "sketch"), but his engagement was measured, light, unassuming. He had nothing to prove.

So how did we get from a Musk who enjoyed a modest chuckle now and then to a Musk who hosts Saturday Night Live and fancies himself Memelord of the Universe? Where did his current sense of humor come from? He's never quite addressed this in the media, nor did he respond to an email asking him to name his comedic influences, but we can still create something of a forensic picture.

To begin with, Musk's cultural background seems to have disposed him to British humor (which he has called "the best"), a style that can toggle between dry wit and edgy or offensive incitement ("Great show!" he said of U.K. comic Ricky Gervais' 2022 Netflix special SuperNature, which contained jokes mocking trans people). Musk has a long history of crossing lines himself, dating back to childhood: his father, Errol Musk, recalled how he would insult adults by calling them stupid if he disagreed with them, and how he once got pushed down the stairs by a classmate after he made a mocking comment about the suicide of the boy's father.

But beyond provocation, Musk clearly adores anything that can be placed in the category of nerd comedy: if a meme is in some way esoteric, requiring specialized knowledge to understand, he seems to regard it as proof of intelligence. Because only a smart person would find it funny, right? This helps to explain his stated love of Reddit, where dorkiness and its attendant puns, references and values form a distinct social in-group. As a humor community, it's the broad equivalent of where the STEM kids sit in the school cafeteria. Musk seems to have been drawn to it by the Tesla and SpaceX fanboys active there, initially promoting a SpaceX engineer's Q&A session on the Ask Me Anything subreddit before giving his own interview about theoretical missions to Mars on r/space. In 2016, he took delight in watching redditors savage a Fortune editor who had written about issues stemming from a Tesla Autopilot crash.

So: Musk likes jokes that 1) take his side, 2) foster a sense of geek community and pride, and 3) are occasionally spiky, hostile or otherwise in violation of a social taboo; this latter principle gives him his trollish quality. In the course of his online life, Musk also appears to have missed much non-Reddit internet comedy of the past decade, from the Dadaist gems of so-called Weird Twitter to the horny, artsy political anarchism of Tumblr. Thus his posting, in 2019, of a meta-meme about the evolutionary biologist Richard Dawkins, whose text font dates it to an earlier generation of Reddit-favored image macros long out of fashion. This isn't a meme you would've randomly stumbled across on social media in 2019; it's something you might get if you Googled "meme about memes."

To truly understand Musk's comedic sensibility, however, we have to ask ourselves how and why he started flaunting it in the first place. Remember, he showed no particular interest in this stuff in the early years of his Twitter account; he never tweeted "lol" or weighed in on memes until 2018, when he suddenly couldn't quit yukking it up about them, encouraging followers to send him their dankest. ("Dank" as a descriptor for memes was itself a bit passé by then, to say nothing of a 47-year-old man typing the word "your" as "ur.") What could possibly account for this sudden shift, the attempt at youthful cool?

The most obvious answer is one that Musk has given himself: he wanted attention. In 2021, during testimony in a shareholder lawsuit over Tesla's 2016 acquisition of solar panel company SolarCity, he explained that his humor creates favorable publicity for the automaker: "If we are entertaining people, they would write stories about us and we don't have to spend on advertising, which would reduce the price of our cars," Musk said. "I do have a sense of humor," he also noted. "I think I'm funny."

Yet the timing of his pivot to would-be Twitter comic also seems significant. While Musk was already a public figure by 2018, this was the year he cemented his place in pop culture: He started dating Grimes, and attended the Met Gala with her. He appeared on Joe Rogan's podcast, where he accepted a puff on a cigar of tobacco and cannabis. He launched his own Tesla Roadster into space.

Behind this glitz, however, his life was going sideways. He told the New York Times that August that the past year had been "excruciating," as well as "the most difficult and painful year of my career." Tesla's Model 3 had been stuck in "production hell," and Musk said he was working unreasonably long hours, camping out at the factory and nearly missing his brother's wedding. He also claimed the work was taking a toll on his health. With the compounding pressures, he became more erratic on Twitter, with some board members reportedly concerned that he was taking the powerful, fast-acting insomnia drug Ambien but, instead of going to sleep, bingeing on the social app.

Two infamous Musk tweets define this phase. That June, he offered a submersible craft to rescuers trying to extract a youth soccer team trapped in a flooded cave in Thailand. When a British cave diver involved in the effort dismissed it on Twitter as a PR stunt, Musk replied angrily, calling him "pedo guy" in a tweet that sparked a defamation suit. (Musk eventually won the case, with his lawyers arguing the comment was a generic joke he had quickly retracted.) Then, in August, Musk tweeted that he was considering taking Tesla private at $420 a share. Though many interpreted that figure as an allusion to weed, the tweet caused Tesla's stock to jump. Only weeks later, Musk changed his mind about the company going private, but he had to settle fraud charges with the Securities and Exchange Commission, as the original tweet was misleading and led to significant market disruption. The SEC deal also stipulated that he would step down as Tesla's chairman, with him and the company each paying a $20 million penalty.

Musk this year won a subsequent shareholder lawsuit over the matter, in this case testifying that the $420 price was "not a joke." But he has prolifically posted 420 comments and cited the number 69 (also a sexual position) ever since. Tesla lowered the price of the Model S to $69,420 in 2020, and Musk is particularly fond of reminding everyone that his birthday, June 28, falls 69 days after 4/20.

Taken together, Musk's recklessness through the summer and fall of 2018 has the air of a midlife crisis: two years after his third divorce, he was dating a celebrity 16 years his junior while pushing himself to physical exhaustion as his company lost hundreds of millions of dollars. He apparently found refuge in memes while indulging a newfound impulse to shitpost, whether that meant firing off brazen insults, slapping unfunny captions on content he'd seen elsewhere, or racking up engagement by mentioning the weed and sex numbers. This telegraphed a growing need to be a man of the people, a desperation to be liked.

His failure to develop a more amusing perspective from there can be chalked up to the sycophants who praise his every word, plus an unshakeable nostalgia for an era when he was widely characterized as a visionary and criticized far less. Consider his affection for Doge, a meme he temporarily added to the Twitter interface this month, though its heyday was 2013, the first year Fortune named him Businessperson of the Year. Or his recent botching of the innuendo "That's what she said," popularized by the sitcom The Office (2005-2013).

Meanwhile, Musk has continued to develop an explicitly ideological concept of humor that ensures only his allies will ever laugh with him. In 2018, he declared that socialists are usually depressing and have no sense of humor. (He then proclaimed himself a socialist.) By 2022, when he got in a fight with the satirical website Hard Drive over not crediting them for a headline he posted, he was basically arguing that leftists can't have comedy at all. "The reason you're not that funny is because you're woke," he tweeted. "Humor relies on an intuitive & often awkward truth being recognized by the audience, but wokism is a lie, which is why nobody laughs." Instead, Musk has preferred the satirical news from the right-leaning Babylon Bee, which he reinstated on Twitter weeks after his takeover; it had been banned in early 2022 for sharing a transphobic article.

This means that on top of all the other reasons Musk struggles to craft a solid or relatable joke, he is now bound by the conceit that comedy must usually target his enemies. Because he is ridiculed by online leftists, chided by Democratic leadership, and unfavorably depicted in the liberal press, he has fallen in with culture warriors whose humor is built around trolling these factions. And how do they accomplish that? By daring the other side to censor or cancel them for repeating the same tired shit about Hunter Biden's laptop or pronouns or soy lattes. There's no organic or dynamic potential here; it's just manufactured grievance about how "they" want to silence free speech. No wonder Musk declared, upon his arrival as CEO, that "Comedy is now legal on Twitter."

How far he's fallen from Waiting for Godot: these days, he's replying "lmao" to tweets calling Bill Gates and George Soros the "Vax Street Boys." And that trajectory is sadly irreversible. It didn't have to be this way, but when you're as high-profile and thin-skinned as Musk, it's all too easy to turn what impoverished sense of humor you had into both a defensive posture and a way to needle others. If someone makes a joke at your expense, it inflicts real damage that must be answered for. If you, as one of the most powerful people on the planet, take a swipe at them, well, you can always say you were kidding.

Biden's gift to Elon Musk and Tesla – Yahoo Finance

Tesla CEO Elon Musk leans Republican, and he's no friend of Joe Biden. But President Biden and his fellow Democrats have done Musk and his company a favor no Republican would likely consider.

Biden's new rules for tailpipe emissions, which the Environmental Protection Agency proposed on April 12, would sharply limit the pollution cars are allowed to emit for model years 2027 through 2032. If ultimately adopted, in whole or in part, the new rules would effectively force automakers to build far more electric vehicles and far fewer gasoline-powered ones.

That could cause upheaval at many automakers trying to shift from gas-powered cars to electrics at a measured pace that doesn't wreck their profitability. For Tesla (TSLA), however, it will be business as usual, except that the competition could end up hobbled by massive new costs, plus the stumbles that often attend large corporate transformations. That makes Tesla the single biggest beneficiary of the EPA's new effort to slash auto-related emissions.

Ironies abound. Musk and Biden have feuded over labor unions, which Biden considers a key constituency and Musk loathes. When highlighting the rollout of EVs, Biden typically touts new efforts at Ford (F) and General Motors (GM), which are unionized, while ignoring Tesla, which is not. Yet Tesla is the undisputed leader in EV sales in the United States, with 65% of the US EV market and vastly more sales in the category than Ford, GM or any other automaker.

Musk got so irritated by Biden's dismissiveness that in January 2022 he called Biden a "damp sock puppet" on Twitter. Later that year, Musk said he had a "super bad feeling" about the economy, and at a Biden press conference a reporter asked Biden for his response. "Lots of luck on his trip to the moon," Biden quipped, referring to Musk's hopes for space travel on one of his SpaceX rockets. Musk continued to tweak Biden on Twitter, and right before the midterm elections last year, Musk advised his 134 million Twitter followers to vote Republican.

Democrats, however, are better for his car company. Musk and Tesla deserve credit for foreseeing the electrified future and persevering through near-death experiences. But they've had some help. The electric-vehicle tax credit that helps subsidize the cost of an EV originated in a 2009 law passed by Democrats and signed by President Obama. That tax break helped goose Tesla's sales during difficult years when it lost money and needed every penny. President Trump wanted to kill that tax credit, but wasn't able to.

Tesla has also benefited from regulatory credits in California, a state largely governed by Democrats. California gives Tesla credits for producing zero-emission vehicles, which it can sell to other companies that use them as a pollution offset. Such sales have netted Tesla hundreds of millions of dollars.

The Biden administration's new pollution rules could force the biggest transformation of the auto industry in its history. The EPA estimates that if the rules go into effect as proposed, EVs as a portion of new-car sales would rise from less than 6% now to around 67% by 2032. That would be a remarkable shift for just a 10-year period.

All of Tesla's assembly lines produce EVs. At other automakers, EVs are a tiny share of production, even with sizeable new commitments to electrics. It costs billions of dollars to build an automotive assembly line, and more to retire old ones no longer in use. Legacy automakers face massive transformation costs. Tesla doesn't.

Since 2019, North American automakers have announced roughly $80 billion worth of new investments in electric vehicles. The EPA argues that a rapid transition to EVs will happen no matter what, given the industry's own large investments in that direction.

The new EPA rules, however, would still impose new costs on top of investments automakers already have planned. The new rules would raise industry-wide costs by somewhere between $180 billion and $280 billion during the seven-year period, according to the EPA. There would be savings, too, such as better fuel economy for drivers and reduced maintenance for EVs, compared with gas-powered models. But manufacturers largely bear the costs up front, then pass on to consumers what they can recoup through higher prices. That's the tricky part for legacy automakers: financing the transition to electrics without racking up losses or too many sell recommendations on their stock.

Ford and GM stock has been largely range-bound for years, with the exception of a modest run-up during the Covid rally, when monetary stimulus goosed the whole market. Those flattish stock trends reflect Wall Street worry about massive transformation costs. Tesla, of course, is a high-flier that's still worth six times as much as GM and Ford combined, even with its stock down by more than half from its 2021 peak. Investors think Tesla is poised to dominate an industry driven by EVs, and that dominance could come sooner if the Biden rules stick.

They may not.

Automakers seem sure to challenge the new proposal, saying they can't shift to EVs that fast. So the final rule could be weaker than the proposal. There will probably also be litigation challenging the Biden administration's authority to make such a big change without congressional legislation. The current Supreme Court, with a 6-3 conservative majority, has been much more skeptical of executive-branch authority than in the past, and there's a chance it could block such dramatic changes. A final risk to the new rules is a possible change in administration in 2024, with a future Republican president likely to roll back the Biden standards.

All of those risks add up to a lot of uncertainty for legacy automakers already unloved by the market. CEOs of those companies have to plan for a future where the pace of transformation could range from challenging to ruinous. Tesla has challenges, too, but the burden of stringent pollution regulation isn't one of them.

Maybe Biden and Musk should be a little friendlier toward each other.

Rick Newman is a senior columnist for Yahoo Finance. Follow him on Twitter at @rickjnewman

San Francisco DA blasts Elon Musk over reaction to stabbing death – NBC News

SAN FRANCISCO - In the hours after a tech executive was stabbed to death on a street in San Francisco with no clear suspect, billionaire Elon Musk led a charge on Twitter, where fellow tech executives and wealthy investors said they were fed up with violent repeat offenders getting away with crime in the biggest U.S. tech hub.

On Thursday, it became clear that their interpretation of the killing had been wrong.

City officials said at a news conference that the tech executive, Bob Lee, was killed not randomly but by a man he knew, and San Francisco's chief prosecutor called out Musk by name for having jumped to conclusions.

District Attorney Brooke Jenkins said Musk was "reckless" when he suggested within hours of the killing that repeat violent offenders were involved.

"Reckless and irresponsible statements like those contained in Mr. Musk's tweet that assumed incorrect circumstances about Mr. Lee's death served to mislead the world in its perceptions of San Francisco," Jenkins said.

The statements, she continued, also "negatively impact the pursuit of justice for victims of crime, as it spreads misinformation at a time when the police are trying to solve a very difficult case."

Musk, the country's wealthiest person, according to the Bloomberg Billionaires Index, did not immediately respond to a request for comment sent to Twitter, where he is the CEO and majority owner.

Musk responded to Lee's killing with a tweet on April 5, replying to another user who said Lee had been a friend.

"Violent crime in SF is horrific and even if attackers are caught, they are often released immediately," Musk wrote.

He added that the city should take stronger action to incarcerate repeated violent offenders, and he tagged Jenkins' Twitter account.

Lee's death and Musk's tweet added fuel to what has become a particularly contentious topic in San Francisco. Debates about crime, drugs and homelessness and the city's response to them have become flashpoints, with some in the tech and startup community rallying to push for change. That community helped recall the previous district attorney, Chesa Boudin, who was attacked for seeking alternatives to incarceration.

San Francisco has logged 13 homicides this year, matching last year's tally in the same time frame, according to police department data. Robberies and assaults have also stayed relatively consistent over the past year.

San Francisco Police Chief William Scott said in an interview with NBC News on Thursday that every homicide is important, but that Lee was a notable person, which elevated media coverage of the case.

"Some of the things that were said because of this case, I think were a little bit unfair," Scott said. "It's one case. And I believe this would have happened anywhere."

Musk was not alone in rushing to offer an opinion about Lee's killing and its broader significance for San Francisco. Matt Ocko, a venture capitalist, called San Francisco "lawless" and said the "criminal-loving city council" had "literal blood on their hands."

Michelle Tandler, a startup founder who often tweets about crime, said the killing was part of a disturbing crime wave that justified calling in the National Guard. Michael Arrington, the founder of the news site TechCrunch, tweeted that he hated what San Francisco has become.

And investor Jason Calacanis, rejecting a call to wait for the facts, said the city was run by "evil incompetent fools & grifters who accomplish nothing except enabling rampant violence."

Calacanis tweeted Thursday that he stood by his earlier view, independent of any one of the thousands of violent crimes that occur every month. Arrington, Ocko and Tandler did not immediately respond to requests for comment.

Rival interpretations of the killing had also played out in the news media, with the San Francisco Chronicle cautioning that violent crime was relatively low in the city, while The New York Times put the "lawless" quotation in a headline.

Mayor London Breed noted at Thursday's news conference how the case had received wide attention.

"There has been a lot of speculation and a lot of things said about our city and crime in the city," Breed said, praising the patient work of prosecutors and police.

Jenkins said people would have been better off waiting for more facts before they weighed in with broad declarations.

"We all should and must do better about not contributing to the spread of such misinformation without having actual facts to underlie the statements that we make. Victims deserve that, and the residents of San Francisco deserve that," Jenkins said.

OpenAI revives its robotic research team, plans to build dedicated AI – Interesting Engineering

OpenAI being in the news isn't a novelty at all. This time it's making headlines for restarting its robotics research group after three years. The ChatGPT developer confirmed the move in an interview with Forbes.

It has been almost four years since OpenAI disbanded a team that researched ways of using AI to teach robots new tasks.

According to media reports, OpenAI is now on the verge of developing a host of multimodal large language models for robotics use cases. A multimodal model is a neural network capable of processing various types of input, not just text. For instance, it can handle data from a robot's onboard sensors.
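
As a loose illustration of what "multimodal" means in practice, the toy sketch below maps text, image and sensor inputs into one shared vector before predicting an action. It is an invented construction for explanation only, not OpenAI's actual architecture, and the dimensions are arbitrary.

```python
# A toy sketch of a multimodal model: each input type gets its own "encoder"
# (random linear projections here), the results are fused, and one head
# predicts a motor command. Illustrative only; real models use trained networks.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 64

text_proj = rng.normal(size=(1000, EMBED_DIM))      # vocabulary of 1,000 token ids
image_proj = rng.normal(size=(32 * 32, EMBED_DIM))  # 32x32 grayscale camera frame
sensor_proj = rng.normal(size=(7, EMBED_DIM))       # 7 joint-angle readings
action_head = rng.normal(size=(3 * EMBED_DIM, 7))   # outputs 7 motor commands

def encode(tokens, image, sensors):
    text_vec = text_proj[tokens].mean(axis=0)    # average the token embeddings
    image_vec = image.reshape(-1) @ image_proj   # flatten and project the image
    sensor_vec = sensors @ sensor_proj           # project the raw sensor readings
    return np.concatenate([text_vec, image_vec, sensor_vec])

def predict_action(tokens, image, sensors):
    return encode(tokens, image, sensors) @ action_head  # fused features -> action

action = predict_action(
    tokens=np.array([5, 42, 7]),        # e.g. a tokenized instruction like "pick up cup"
    image=rng.normal(size=(32, 32)),    # camera frame
    sensors=rng.normal(size=7),         # onboard sensor data
)
print(action.shape)  # (7,) motor commands
```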

OpenAI had bid goodbye to its original robotics research group. OpenAI co-founder Wojciech Zaremba said, "I actually believe quite strongly in the approach that the robotics [team] took in that direction, but from the perspective of AGI [artificial general intelligence], I think that there was actually some components missing. So when we created the robotics [team], we thought that we could go very far with self-generated data and reinforcement learning."

According to a report in Forbes, OpenAI has been hiring again for its robotics team and has been actively on the lookout for a research robotics engineer. It is seeking an individual skilled in "training multimodal robotics models to unlock new capabilities for our partners' robots, researching and developing improvements to our core models, exploring new model architectures, collecting robotics data, and conducting evaluations."

"We're looking for candidates with a strong research background and experience in shipping AI applications," the company stated.

Earlier this year, OpenAI also invested in humanoid developer Figure AI's Series B fundraising. This investment highlights OpenAI's clear interest in robotics.

Over the past year, OpenAI has significantly invested in the robotics field through its startup fund, pouring millions into companies like Figure AI, 1X Technologies, and Physical Intelligence. These investments underscore OpenAI's keen interest in advancing humanoid robots. In February, OpenAI hinted at a renewed focus on robotics when Figure AI secured additional funding. Shortly after, Figure AI released a video showcasing a robot with basic speech and reasoning skills, powered by OpenAI's model.

Peter Welinder, OpenAI's vice president and a member of the original robotics team, stated, "We've always planned to return to robotics, and we see a path with Figure to explore the potential of humanoid robots powered by highly capable multimodal models."

According to the report, OpenAI doesn't intend to compete directly with other robotics companies. Instead, it aims to develop AI technology that other manufacturers can integrate into their robots. Job listings indicate that new engineers will collaborate with external partners to train advanced AI models. It remains unclear if OpenAI will venture into creating its own robotics hardware, a challenge it has faced in the past. For now, the focus seems to be on leveraging its AI expertise to enhance robotic functionalities.

Apart from this, Apple has also been reported to be collaborating with OpenAI to incorporate ChatGPT technology into its iOS 18 operating system for iPhones, according to various media outlets.

The integration of ChatGPT, an advanced AI developed by OpenAI under Sam Altman's leadership, is set to revolutionize how Siri comprehends and responds to complex queries. This partnership, anticipated to be officially announced at this year's Worldwide Developers Conference (WWDC), has been in the works for several months and has faced internal challenges and resistance from both companies.

Has Elon Musk pricked Lynas' rare earths bubble? – Sydney Morning Herald

Rare earths had been considered irreplaceable for building the powerful magnets needed for electric vehicles, and Tesla cited its move away from them as removing a crucial production and cost constraint on its operations.

Elon Musk's Tesla must cut its use of costly rare earths to meet its goals of making cheaper cars and growing its sales.

Lynas has made it clear that the growing demand for e-vehicles has underpinned demand, and prices, for rare earths. It drove the share price from $1.30 at the start of 2020 to a high of $11 last year, valuing the group at more than $10 billion at its peak.

Since then, Lynas has shed as much as $2 billion of its market valuation, trading as low as $6. Has Musk's declaration shaken the company, or can it keep on trucking?

Industry: Minerals and resources.

Main products: Rare earth ores - 17 elements crucial to the manufacture of many hi-tech products such as mobile phones, electric cars and wind turbines. Neodymium and praseodymium (NdPr) are the two elements that have been in particularly high demand due to electric vehicles.

Key figures: Amanda Lacaze has been chief executive since 2014 and the main driver of its success. Kathleen Conlon was appointed chair in 2020 and has been on the board since 2011.

How it started: Lynas, as we know it, was the brainchild of business veteran Nick Curtis, who came up with the idea to build a processing plant in Malaysia and set the company up as the only processor of rare earths outside of China. Japanese commercial interests, stung by China's blocking of rare earth exports in 2010, helped finance the plant.

Operations commenced in 2012, but they have been dogged by local controversy over the low-level radioactive material produced by the cracking and leaching process in Malaysia, which must now be moved offshore by July this year. A new processing operation in Western Australia will pick up the slack.

How it's going: With Lynas setting up the processing plant in Kalgoorlie, it has solved the Malaysia issue. The company's main problem now has been keeping up with demand forecasts, which have been sky-high on the back of increased production of EVs, which need rare earths for their powerful and lightweight magnets.

The company has also been in the fortunate position of receiving US government money to fund its plans to set up a processing plant in Texas, as governments around the world grow worried about how much they rely on China's stranglehold on the supply of crucial elements.

The bear case: When Elon Musk talks, people listen. So, when Musk and other Tesla executives unveiled plans to wean the car group off rare earths last month, it had a major impact.

"You can't run an automotive industry without rare earths," Lacaze told the Melbourne Mining Club just last year. What if you can?

The company's plans to increase rare earths output by 50 per cent by 2025 were deemed to be inadequate precisely due to the boom in car demand.

So, Musk's edict to remove rare earth elements from his cars went to the heart of what has made this market a magnet for investors looking to ride the burgeoning demand for e-vehicles of all kinds.

The bull case: Musk might actually be able to pull this rabbit out of the hat and reduce Tesla's reliance on rare earths, but not everyone is buying what he is selling.

Especially since Tesla's comments say more about the company's aggressive growth targets, and what it needs to do to get there, than they do about the attractiveness of rare earths for the electrification of the auto industry.

According to Adamas Intelligence, there is a reason why automakers have not used cheap and accessible alternatives like iron oxide: getting the same performance comes at the price of significantly higher weight.

In one case it cited, the iron oxide magnets were 30 per cent heavier - "a massive weight penalty," it said.

Tesla might find a cheaper alternative to power its low-price cars of the future, but it won't be easy.

"Rare earth magnets have been the breakthrough technology that lifted electric vehicles into the same league as conventional cars," Fat Tail Investments' James Cooper says.

JP Morgan remains a Lynas fan; it put an Overweight recommendation on the stock this month with an $8.50 price target.

Also this month, UBS analyst Levi Spry upgraded Lynas to a Buy, despite lowering the price target to $8.50 due to the 33 per cent slide in the company's share price since January. Tesla was not of any concern.

"We remain positive on long-term fundamentals. To this extent, we do not think Tesla's intentions to thrift rare earths from its supply chain has a significant impact within our forecast horizon," he said.

UBS is forecasting that Tesla will account for around 7 per cent of demand for NdPr by 2030.

"While not insignificant, we still see deficits forming and that inelastic demand (from other OEMs and industries) should keep fundamentals for NdPr strong."

The AI revolution is coming to robots: how will it change them? – Nature.com

For a generation of scientists raised watching Star Wars, there's a disappointing lack of C-3PO-like droids wandering around our cities and homes. Where are the humanoid robots fuelled with common sense that can help around the house and workplace?

Rapid advances in artificial intelligence (AI) might be set to fill that hole. "I wouldn't be surprised if we are the last generation for which those sci-fi scenes are not a reality," says Alexander Khazatsky, a machine-learning and robotics researcher at Stanford University in California.

From OpenAI to Google DeepMind, almost every big technology firm with AI expertise is now working on bringing the versatile learning algorithms that power chatbots, known as foundation models, to robotics. The idea is to imbue robots with common-sense knowledge, letting them tackle a wide range of tasks. Many researchers think that robots could become really good, really fast. "We believe we are at the point of a step change in robotics," says Gerard Andrews, a marketing manager focused on robotics at technology company Nvidia in Santa Clara, California, which in March launched a general-purpose AI model designed for humanoid robots.

At the same time, robots could help to improve AI. Many researchers hope that bringing an embodied experience to AI training could take them closer to the dream of artificial general intelligence: AI that has human-like cognitive abilities across any task. "The last step to true intelligence has to be physical intelligence," says Akshara Rai, an AI researcher at Meta in Menlo Park, California.

But although many researchers are excited about the latest injection of AI into robotics, they also caution that some of the more impressive demonstrations are just that: demonstrations, often by companies that are eager to generate buzz. "It can be a long road from demonstration to deployment," says Rodney Brooks, a roboticist at the Massachusetts Institute of Technology in Cambridge, whose company iRobot invented the Roomba autonomous vacuum cleaner.

There are plenty of hurdles on this road, including scraping together enough of the right data for robots to learn from, dealing with temperamental hardware and tackling concerns about safety. Foundation models for robotics should be explored, says Harold Soh, a specialist in human-robot interactions at the National University of Singapore. But he is sceptical, he says, that this strategy will lead to the revolution in robotics that some researchers predict.

The term "robot" covers a wide range of automated devices, from the robotic arms widely used in manufacturing, to self-driving cars and drones used in warfare and rescue missions. Most incorporate some sort of AI to recognize objects, for example. But they are also programmed to carry out specific tasks, work in particular environments or rely on some level of human supervision, says Joyce Sidopoulos, co-founder of MassRobotics, an innovation hub for robotics companies in Boston, Massachusetts. Even Atlas, a robot made by Boston Dynamics, a robotics company in Waltham, Massachusetts, which famously showed off its parkour skills in 2018, works by carefully mapping its environment and choosing the best actions to execute from a library of built-in templates.

For most AI researchers branching into robotics, the goal is to create something much more autonomous and adaptable across a wider range of circumstances. This might start with robot arms that can pick and place any factory product, but evolve into humanoid robots that provide company and support for older people, for example. "There are so many applications," says Sidopoulos.

The human form is complicated and not always optimized for specific physical tasks, but it has the huge benefit of being perfectly suited to the world that people have built. A human-shaped robot would be able to physically interact with the world in much the same way that a person does.

However, controlling any robot, let alone a human-shaped one, is incredibly hard. Apparently simple tasks, such as opening a door, are actually hugely complex, requiring a robot to understand how different door mechanisms work, how much force to apply to a handle and how to maintain balance while doing so. The real world is extremely varied and constantly changing.

The approach now gathering steam is to control a robot using the same type of AI foundation models that power image generators and chatbots such as ChatGPT. These models use brain-inspired neural networks to learn from huge swathes of generic data. They build associations between elements of their training data and, when asked for an output, tap these connections to generate appropriate words or images, often with uncannily good results.

Likewise, a robot foundation model is trained on text and images from the Internet, providing it with information about the nature of various objects and their contexts. It also learns from examples of robotic operations. It can be trained, for example, on videos of robot trial and error, or videos of robots that are being remotely operated by humans, alongside the instructions that pair with those actions. A trained robot foundation model can then observe a scenario and use its learnt associations to predict what action will lead to the best outcome.
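
The sketch below illustrates that training recipe in miniature: pairs of observations and instructions are matched to demonstrated actions, a model is fit to reproduce them, and the result is then queried on a new scene. Real systems use large transformer policies trained on enormous datasets; a linear least-squares fit and random vectors stand in here purely for illustration.

```python
# A toy illustration of learning from demonstrations: (observation, instruction)
# pairs recorded during teleoperation are mapped to the actions a human chose,
# and the fitted model predicts an action for a new scene. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_demos, obs_dim, instr_dim, act_dim = 500, 16, 8, 4

# Demonstration data (stand-ins for camera features, instruction embeddings, actions).
observations = rng.normal(size=(n_demos, obs_dim))
instructions = rng.normal(size=(n_demos, instr_dim))
true_policy = rng.normal(size=(obs_dim + instr_dim, act_dim))
actions = np.hstack([observations, instructions]) @ true_policy  # demonstrated actions

# "Training": find the mapping that best reproduces the demonstrated actions.
inputs = np.hstack([observations, instructions])
learned_policy, *_ = np.linalg.lstsq(inputs, actions, rcond=None)

# "Deployment": a new scene plus a new instruction yields a predicted action.
new_input = np.hstack([rng.normal(size=obs_dim), rng.normal(size=instr_dim)])
predicted_action = new_input @ learned_policy
print(predicted_action)  # 4 predicted action values
```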

Google DeepMind has built one of the most advanced robotic foundation models, known as Robotic Transformer 2 (RT-2), which can operate a mobile robot arm built by its sister company Everyday Robots in Mountain View, California. Like other robotic foundation models, it was trained on both the Internet and videos of robotic operation. Thanks to the online training, RT-2 can follow instructions even when those commands go beyond what the robot has seen another robot do before [1]. For example, it can move a drink can onto a picture of Taylor Swift when asked to do so, even though Swift's image was not in any of the 130,000 demonstrations that RT-2 had been trained on.

In other words, knowledge gleaned from Internet trawling (such as what the singer Taylor Swift looks like) is being carried over into the robot's actions. "A lot of Internet concepts just transfer," says Keerthana Gopalakrishnan, an AI and robotics researcher at Google DeepMind in San Francisco, California. This radically reduces the amount of physical data that a robot needs to have absorbed to cope in different situations, she says.

But to fully understand the basics of movements and their consequences, robots still need to learn from lots of physical data. And therein lies a problem.

Although chatbots are being trained on billions of words from the Internet, there is no equivalently large data set for robotic activity. This lack of data has left robotics in the dust, says Khazatsky.

Pooling data is one way around this. Khazatsky and his colleagues have created DROID [2], an open-source data set that brings together around 350 hours of video data from one type of robot arm (the Franka Panda 7DoF robot arm, built by Franka Robotics in Munich, Germany), as it was being remotely operated by people in 18 laboratories around the world. The robot-eye-view camera has recorded visual data in hundreds of environments, including bathrooms, laundry rooms, bedrooms and kitchens. This diversity helps robots to perform well on tasks with previously unencountered elements, says Khazatsky.

When prompted to "pick up extinct animal", Google's RT-2 model selects the dinosaur figurine from a crowded table. Credit: Google DeepMind

Gopalakrishnan is part of a collaboration of more than a dozen academic labs that is also bringing together robotic data, in its case from a diversity of robot forms, from single arms to quadrupeds. The collaborators' theory is that learning about the physical world in one robot body should help an AI to operate another, in the same way that learning in English can help a language model to generate Chinese, because the underlying concepts about the world that the words describe are the same. This seems to work. The collaboration's resulting foundation model, called RT-X, which was released in October 2023 [3], performed better on real-world tasks than did models the researchers trained on one robot architecture.

Many researchers say that having this kind of diversity is essential. "We believe that a true robotics foundation model should not be tied to only one embodiment," says Peter Chen, an AI researcher and co-founder of Covariant, an AI firm in Emeryville, California.

Covariant is also working hard on scaling up robot data. The company, which was set up in part by former OpenAI researchers, began collecting data in 2018 from 30 variations of robot arms in warehouses across the world, which all run using Covariant software. Covariant's Robotics Foundation Model 1 (RFM-1) goes beyond collecting video data to encompass sensor readings, such as how much weight was lifted or force applied. This kind of data should help a robot to perform tasks such as manipulating a squishy object, says Gopalakrishnan, in theory helping a robot to know, for example, how not to bruise a banana.

Covariant has built up a proprietary database that includes hundreds of billions of tokens, units of real-world robotic information, which Chen says is roughly on a par with the scale of data that trained GPT-3, the 2020 version of OpenAI's large language model. "We have way more real-world data than other people, because that's what we have been focused on," Chen says. RFM-1 is poised to roll out soon, says Chen, and should allow operators of robots running Covariant's software to type or speak general instructions, such as "pick up apples from the bin."

Another way to access large databases of movement is to focus on a humanoid robot form, so that an AI can learn by watching videos of people, of which there are billions online. Nvidia's Project GR00T foundation model, for example, is ingesting videos of people performing tasks, says Andrews. Although copying humans has huge potential for boosting robot skills, doing so well is hard, says Gopalakrishnan. For example, robot videos generally come with data about context and commands; the same isn't true for human videos, she says.

A final and promising way to find limitless supplies of physical data, researchers say, is through simulation. Many roboticists are working on building 3D virtual-reality environments, the physics of which mimic the real world, and then wiring those up to a robotic brain for training. Simulators can churn out huge quantities of data and allow humans and robots to interact virtually, without risk, in rare or dangerous situations, all without wearing out the mechanics. "If you had to get a farm of robotic hands and exercise them until they achieve [a high] level of dexterity, you will blow the motors," says Nvidia's Andrews.

But making a good simulator is a difficult task. "Simulators have good physics, but not perfect physics, and making diverse simulated environments is almost as hard as just collecting diverse data," says Khazatsky.

Meta and Nvidia are both betting big on simulation to scale up robot data, and have built sophisticated simulated worlds: Habitat from Meta and Isaac Sim from Nvidia. In them, robots gain the equivalent of years of experience in a few hours, and, in trials, they then successfully apply what they have learnt to situations they have never encountered in the real world. "Simulation is an extremely powerful but underrated tool in robotics, and I am excited to see it gaining momentum," says Rai.

Many researchers are optimistic that foundation models will help to create general-purpose robots that can replace human labour. In February, Figure, a robotics company in Sunnyvale, California, raised US$675 million in investment for its plan to use language and vision models developed by OpenAI in its general-purpose humanoid robot. A demonstration video shows a robot giving a person an apple in response to a general request for something to eat. The video on X (the platform formerly known as Twitter) has racked up 4.8 million views.

Exactly how this robot's foundation model has been trained, along with any details about its performance across various settings, is unclear (neither OpenAI nor Figure responded to Nature's requests for an interview). Such demos should be taken with a pinch of salt, says Soh. The environment in the video is conspicuously sparse, he says. Adding a more complex environment could potentially confuse the robot, in the same way that such environments have fooled self-driving cars. "Roboticists are very sceptical of robot videos for good reason, because we make them and we know that out of 100 shots, there's usually only one that works," Soh says.

As the AI research community forges ahead with robotic brains, many of those who actually build robots caution that the hardware also presents a challenge: robots are complicated and break a lot. "Hardware has been advancing," Chen says, "but a lot of people looking at the promise of foundation models just don't know the other side of how difficult it is to deploy these types of robots."

Another issue is how far robot foundation models can get using the visual data that make up the vast majority of their physical training. Robots might need reams of other kinds of sensory data, for example from the sense of touch or proprioception, a sense of where their body is in space, says Soh. Those data sets don't yet exist. "There's all this stuff that's missing, which I think is required for things like a humanoid to work efficiently in the world," he says.

Releasing foundation models into the real world comes with another major challenge: safety. In the two years since they started proliferating, large language models have been shown to come up with false and biased information. They can also be tricked into doing things that they are programmed not to do, such as telling users how to make a bomb. Giving AI systems a body brings these types of mistake and threat to the physical world. "If a robot is wrong, it can actually physically harm you or break things or cause damage," says Gopalakrishnan.

Valuable work going on in AI safety will transfer to the world of robotics, says Gopalakrishnan. In addition, her team has imbued some robot AI models with rules that layer on top of their learning, such as not to even attempt tasks that involve interacting with people, animals or other living organisms. "Until we have confidence in robots, we will need a lot of human supervision," she says.

Despite the risks, there is a lot of momentum in using AI to improve robots and using robots to improve AI. Gopalakrishnan thinks that hooking up AI brains to physical robots will improve the foundation models, for example giving them better spatial reasoning. Meta, says Rai, is among those pursuing the hypothesis that true intelligence can only emerge when an agent can interact with its world. That real-world interaction, some say, is what could take AI beyond learning patterns and making predictions, to truly understanding and reasoning about the world.

What the future holds depends on who you ask. Brooks says that robots will continue to improve and find new applications, but their eventual use is nowhere near as sexy as humanoids replacing human labour. But others think that developing a functional and safe humanoid robot that is capable of cooking dinner, running errands and folding the laundry is possible, but could just cost hundreds of millions of dollars. "I'm sure someone will do it," says Khazatsky. "It'll just be a lot of money, and time."

Can AI ever be smarter than humans? | Context – Context

What's the context?

"Artificial general intelligence" (AGI) - the benefits, the risks to security and jobs, and is it even possible?

LONDON - When researcher Jan Leike quit his job at OpenAI last month, he warned the tech firm's "safety culture and processes (had) taken a backseat" while it trained its next artificial intelligence model.

He voiced particular concern about the company's goal to develop "artificial general intelligence", a supercharged form of machine learning that it says would be "smarter than humans".

Some industry experts say AGI may be achievable within 20 years, but others say it will take many decades, if it happens at all.

But what is AGI, how should it be regulated and what effect will it have on people and jobs?

OpenAI defines AGI as a system "generally smarter than humans". Scientists disagree on what this exactly means.

"Narrow" AI includes ChatGPT, which can perform a specific, singular task. This works by pattern matching, akin to putting together a puzzle without understanding what the pieces represent, and without the ability to count or complete logic puzzles.

"The running joke, when I used to work at Deepmind (Google's artificial intelligence research laboratory), was AGI is whatever we don't have yet," Andrew Strait, associate director of the Ada Lovelace Institute, told Context.

IBM has suggested that artificial intelligence would need at least seven critical skills to reach AGI, including visual and auditory perception, making decisions with incomplete information, and creating new ideas and concepts.

Narrow AI is already used in many industries, but has been responsible for many issues, like lawyers citing "hallucinated" - made up - legal precedents and recruiters using biased services to check potential employees.

AGI still lacks definition, so experts find it difficult to describe the risks that it might pose.

It is possible that AGI will be better at filtering out bias and incorrect information, but it is also possible new problems will arise.

One "very serious risk", Strait said, was an over-reliance on the new systems, "particularly as they start to mediate more sensitive human-to-human relationships".

AI systems also need huge amounts of data to train on and this could result in a massive expansion of surveillance infrastructure. Then there are security risks.

"If you collect (data), it's more likely to get leaked," Strait said.

There are also concerns over whether AI will replace human jobs.

Carl Frey, a professor of AI and work at the Oxford Internet Institute, said an AI apocalypse was unlikely and that "humans in the loop" would still be needed.

But there may be downward pressure on wages and middle-income jobs, especially with developments in advanced robotics.

"I don't see a lot of focus on using AI to develop new products and industries in the ways that it's often being portrayed. All applications boil down to some form of automation," Frey told Context.

As AI develops, governments must ensure there is competition in the market, as there are significant barriers to entry for new companies, Frey said.

There also needs to be a different approach to what the economy rewards, he added. It is currently in the interest of companies to focus on automation and cut labour costs, rather than create jobs.

"One of my concerns is that the more we emphasise the downsides, the more we emphasise the risks with AI, the more likely we are to get regulation, which means that we restrict entry and that we solidify the market position of incumbents," he said.

Last month, the U.S. Department of Homeland Security announced a board comprised of the CEOs of OpenAI, Microsoft, Google, and Nvidia to advise the government on AI in critical infrastructure.

"If your goal is to minimise the risks of AI, you don't want open source. You want a few incumbents that you can easily control, but you're going to end up with a tech monopoly," Frey said.

AGI does not have a precise timeline. Jensen Huang, the chief executive of Nvidia, predicts that today's models could advance to the point of AGI within five years.

Huang's definition of AGI is a program that can outperform humans on logic quizzes and exams by 8%.

OpenAI has indicated that a breakthrough in AI is coming soon with Q* (pronounced Q-Star), a secretive project reported in November last year.

Microsoft researchers have said that GPT-4, one of OpenAI's generative AI models, has "sparks of AGI". However, it does not "(come) close to being able to do anything that a human can do", nor does it have "inner motivation and goals" - another key aspect in some definitions of AGI.

But Microsoft President Brad Smith has rejected claims of a breakthrough.

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said in November.

Frey suggested there would need to be significant innovation to get to AGI, due to both limitations in hardware and the amount of training data available.

"There are real question marks around whether we can develop AI on the current path. I don't think we can just scale up existing models (with) more compute, more data, and get to AGI."

Read the rest here:

Can AI ever be smarter than humans? | Context - Context

Responsible AI needs further collaboration – Chinadaily.com.cn – China Daily

Wang Lei (standing), chairman of Wenge Tech Corporation, talks to participants at the World Summit on the Information Society. For China Daily

Further efforts are needed to build responsible artificial intelligence by promoting technological openness, fostering collaboration and establishing consensus-driven governance to fully unleash AI's potential to boost productivity across various industries, an executive said.

The remarks were made by Wang Lei, chairman of Wenge Tech Corporation, a Beijing-based AI company recognized by the Ministry of Industry and Information Technology as a "little giant" firm - novel and elite small and medium-sized enterprises that specialize in niche markets. Wang delivered his speech at the recently concluded World Summit on the Information Society.

"AI has made extraordinary progress in recent years. Innovations like ChatGPT and hundreds of other large language models (LLMs) have captured global attention, profoundly transforming how we work and live," said Wang.

"Now we are entering a new era of Artificial General Intelligence (AGI). Enterprise AI has proven to create significant value for customers in fields such as government operations, ESGs, supply chain management, and defense intelligence, excelling in analysis, forecasting, decision-making, optimization, and risk monitoring," he added.

A recent report from venture capital firm a16z and research firm IDC reveals that global enterprise investments in AI have surged from an average of $7 million to $18 million, a roughly 2.5-fold increase. In China, the number of LLMs grew from 16 to 318 last year, with over 80 percent focusing on industry-specific applications, Wang noted.

He predicted a promising future for Enterprise AI, with decision intelligence being the ultimate goal. "Complex problems will be broken down into smaller tasks, each resolved by different AI models. AI agents and multi-agent collaboration frameworks will optimize decision-making strategies and action planning, integrating AI into workflows, data streams, and decision-making processes within industry-specific scenarios."
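To make the multi-agent idea in Wang's remarks concrete, here is a minimal, hypothetical sketch of task decomposition and routing; it is not Wenge's actual framework, and the "specialist" functions stand in for calls to real AI models.

```python
# Hypothetical sketch of multi-agent task decomposition; not Wenge's framework.
# Each specialist function is a stand-in for a call to a different AI model.
def forecaster(task: str) -> str:
    return f"forecast for: {task}"

def risk_monitor(task: str) -> str:
    return f"risk assessment for: {task}"

SPECIALISTS = {"forecast": forecaster, "risk": risk_monitor}

def coordinator(problem: str) -> dict:
    """Split a problem into labelled sub-tasks and route each to a specialist."""
    subtasks = {
        "forecast": f"demand next quarter ({problem})",
        "risk": f"supply disruptions ({problem})",
    }
    return {name: SPECIALISTS[name](task) for name, task in subtasks.items()}

print(coordinator("regional supply chain review"))
```

In a real deployment the coordinator itself would typically be a model deciding how to split the work, but the routing pattern is the same.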

Wang proposed a three-step methodology for successful Enterprise AI transformation: data engineering, model engineering, and domain engineering.

"To build responsible AI, we must address several challenges head-on," he emphasized. "Promoting technological openness can reduce regional and industrial imbalances, fostering collaboration can mitigate unfair usage restrictions, and establishing consensus-driven governance can significantly enhance AI safety."

Continue reading here:

Responsible AI needs further collaboration - Chinadaily.com.cn - China Daily

OpenAI says it’s charting a "path to AGI" with its next frontier AI model – ITPro

OpenAI has revealed that it recently started work on training its next frontier large language model (LLM).

The first version of OpenAI's ChatGPT debuted back in November 2022 and became an unexpected breakthrough hit which launched generative AI into public consciousness.

Since then, there have been a number of updates to the underlying model. The first version of ChatGPT was built on GPT-3.5, which finished training in early 2022, while GPT-4 arrived in March 2023. The most recent, GPT-4o, arrived in May this year.

Now OpenAI is working on a new LLM and said it anticipates the system will "bring us to the next level of capabilities on our path to AGI", or artificial general intelligence.

AGI is a hotly contested concept whereby an AI would, like humans, be good at adapting to many different tasks, including ones it has never been trained on, rather than being designed for one particular use.

AI researchers are split on whether AGI could ever exist or whether the search for it may even be based on a misunderstanding of how intelligence works.

OpenAI provided no details of what the next model might do, but as its LLMs have evolved, the capabilities of the underlying models have expanded.


While GPT-3 could only deal with text, GPT-4 is able to accept images as well, while GPT-4o has been optimized for voice communication. Context windows have also increased markedly with each iteration, although the size of the models and other technical details remain secret.

Sam Altman, CEO at OpenAI, has stated that GPT-4 cost more than $100 million to train, per Wired, and the model is rumored to have more than one trillion parameters. This would make it one of the biggest LLMs currently in existence, if not the biggest.

That doesn't necessarily mean the next model will be even larger; Altman has previously suggested the race for ever-bigger models may be coming to an end.

Smaller models working together might be a more useful way of using generative AI, he has said.

And even if OpenAI has started training its next model, don't expect to see its impact very soon. Training a model can take many months, and that can be just the first step: it took six months of testing after training was finished before OpenAI released GPT-4.

The company also said it will create a new Safety and Security Committee led by OpenAI directors Bret Taylor, Adam D'Angelo, Nicole Seligman, and Altman. This committee will be responsible for making recommendations to the board on critical safety and security decisions for OpenAI projects and operations.

One of its first tasks will be to evaluate and develop OpenAI's processes and safeguards over the next 90 days. After that, the committee will share its recommendations with the board.

Some may raise eyebrows at the safety committee being made up of members of OpenAI's existing board.

Dr Ilia Kolochenko, CEO at ImmuniWeb and adjunct professor of cyber security at Capital Technology University, questioned whether the move will actually deliver positive outcomes as far as AI safety is concerned.

"Being safe does not necessarily imply being accurate, reliable, fair, transparent, explainable and non-discriminative - the absolutely crucial characteristics of GenAI solutions," Kolochenko said. "In view of the past turbulence at OpenAI, I am not sure that the new committee will make a radical improvement."

The launch of the safety committee comes amidst greater calls for more rigorous regulation and oversight of LLM development. Most recently, a former OpenAI board member argued that self-governance isn't the right approach for AI firms and that a strong regulatory framework is needed.

OpenAI has made public efforts to calm AI safety fears in recent months. It was among a host of major industry players to sign up to a safe development pledge at the Seoul AI Summit that could see them pull the plug on their own models if they cannot be built or deployed safely.

But these commitments are voluntary and come with plenty of caveats, leading some experts to call for stronger legislation and requirements for tougher testing of LLMs.

Because of the potentially large risks associated with the technology, AI companies should be subject to a similar regulatory framework as pharmaceutical companies, critics argue, where companies have to meet standards set by regulators who make the final decision on whether and when a product can be released.

Read the rest here:

OpenAI says it's charting a "path to AGI" with its next frontier AI model - ITPro

Why AI Won’t Take Over The World Anytime Soon – Bernard Marr

In an era where artificial intelligence features prominently in both our daily lives and our collective imagination, it's common to hear concerns about these systems gaining too much power or even becoming autonomous rulers of our future. Yet, a closer look at the current state of AI technology reveals that these fears, while popular in science fiction, are far from being realized in the real world. Here's why we're not on the brink of an AI takeover.

The majority of AI systems we encounter daily are examples of "narrow AI." These systems are masters of specialization, adept at tasks such as recommending your next movie on Netflix, optimizing your route to avoid traffic jams or even more complex feats like writing essays or generating images. Despite these capabilities, they operate under strict limitations, designed to excel in a particular arena but incapable of stepping beyond those boundaries.

This is true even of the generative AI tools that are dazzling us with their ability to create content across multiple modalities. They can draft essays, recognize elements in photographs, and even compose music. However, at their core, these advanced AIs are still just making mathematical predictions based on vast datasets; they do not truly "understand" the content they generate or the world around them.

Narrow AI operates within a predefined framework of variables and outcomes. It cannot think for itself, learn beyond what it has been programmed to do, or develop any form of intention. Thus, despite the seeming intelligence of these systems, their capabilities remain tightly confined. If you fear your GPS might one day lead you on a rogue mission to conquer the world, you can rest easy. Your navigation system is not plotting global domination; it is simply calculating the fastest route to your destination, oblivious to the broader implications of its computations.

The concept of artificial general intelligence, an AI capable of understanding, learning and applying knowledge across a broad spectrum of tasks just like a human, remains a distant goal. Today's most sophisticated AIs struggle with tasks that a human child performs intuitively, such as recognizing objects in a messy room or grasping the subtleties of a conversation.

Transitioning from narrow AI to AGI isn't merely a matter of incremental improvements but requires foundational breakthroughs in how AI learns and interprets the world. Researchers are still deciphering the basic principles of cognition and machine learning, and the challenge of developing a machine that genuinely understands context or displays common sense is still a significant scientific hurdle.

Another factor is that current AI systems have an insatiable appetite for data, requiring vast amounts to learn and function effectively. This dependency on large datasets is one of the primary bottlenecks in AI development. Unlike humans, who can learn from a few examples or even from a single experience, AI systems need thousands - or even millions - of data points to master even simple tasks. This difference highlights a fundamental gap in how humans and machines process information.

The data needs of AI are not just extensive but also specific, and in many domains, such high-quality, large-scale datasets simply do not exist. For instance, in specialized medical fields or in areas involving rare events, the requisite data to train AI effectively can be scarce or non-existent, limiting the applicability of AI in these fields.

The notion that AI systems might spontaneously evolve to outsmart humans is, therefore, more than just unlikely given these constraints.

While AI continues to evolve and integrate deeper into our lives and industries, the infrastructure around its development is simultaneously maturing. As AI capabilities grow, so does the imperative for dynamic regulatory frameworks. The tech community is increasingly proficient at implementing safety and ethical guidelines, but these measures must evolve in lockstep with AI's rapid developments to ensure robust, safe, and controlled operations.

By proactively adapting regulations, we can effectively anticipate and mitigate potential risks and unintended consequences, securing AI's role as a powerful tool for positive advancement rather than a threat. This continued focus on safe and ethical AI development is crucial for harnessing its potential while avoiding the pitfalls depicted in dystopian narratives. AI is here to assist and augment human capabilities, not to replace them. So, for now, the world remains very much in human hands.

Here is the original post:

Why AI Won't Take Over The World Anytime Soon - Bernard Marr

OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics – TechRadar

OpenAI, the tech company behind ChatGPT, has announced that it's formed a Safety and Security Committee that's intended to make the firm's approach to AI more responsible and consistent in terms of security.

It's no secret that OpenAI and CEO Sam Altman - who will be on the committee - want to be the first to reach AGI (artificial general intelligence), broadly understood as artificial intelligence that resembles human intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI.

GPT-4o debuted to the public on May 13 as a next-level multimodal (capable of processing multiple modes of input and output) generative AI model, able to accept and respond with audio, text, and images. It was met with a generally positive reception, but more discussion has since arisen regarding its actual capabilities, implications, and the ethics around technologies like it.

Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed the notable departures of key company figures like OpenAI co-founder and chief scientist Ilya Sutskever, and co-lead of the AI safety superalignment team Jan Leike. Their departures were reportedly related to concerns that OpenAI, and Altman in particular, was not doing enough to develop its technologies responsibly, and was forgoing due diligence.

This has seemingly given OpenAI a lot to reflect on, and it's formed the oversight committee in response. In the announcement post about the committee being formed, OpenAI also states that it welcomes a robust debate at this important moment. The first job of the committee will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days, and then share recommendations with the company's board.

Recommendations that are subsequently agreed upon and adopted will be shared publicly in a manner that is consistent with safety and security.

The committee will be made up of Chairman Bret Taylor, Quora CEO Adam D'Angelo, and Nicole Seligman, a former executive of Sony Entertainment, alongside six OpenAI employees, including Sam Altman, as mentioned, and John Schulman, a researcher and co-founder of OpenAI. According to Bloomberg, OpenAI stated that it will also consult external experts as part of this process.


I'll reserve my judgment for when OpenAI's adopted recommendations are published, and I can see how they're implemented, but intuitively, I don't have the greatest confidence that OpenAI (or any major tech firm) is prioritizing safety and ethics as much as they are trying to win the AI race.

That's a shame, and it's unfortunate that, generally speaking, those who are striving to be the best no matter what are often slow to consider the cost and effects of their actions, and how they might impact others in a very real way - even if large numbers of people are potentially going to be affected.

I'll be happy to be proven wrong and I hope I am, and in an ideal world, all tech companies, whether they're in the AI race or not, should prioritize the ethics and safety of what they're doing at the same level that they strive for innovation. So far in the realm of AI, that does not appear to be the case from where I'm standing, and unless there are real consequences, I don't see companies like OpenAI being swayed that much to change their overall ethos or behavior.

See the article here:

OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics - TechRadar

What is artificial general intelligence, and is it a useful concept? – New Scientist

If you take even a passing interest in artificial intelligence, you will inevitably have come across the notion of artificial general intelligence. AGI, as it is often known, has ascended to buzzword status over the past few years as AI has exploded into the public consciousness on the back of the success of large language models (LLMs), a form of AI that powers chatbots such as ChatGPT.

That is largely because AGI has become a lodestar for the companies at the vanguard of this type of technology. ChatGPT creator OpenAI, for example, states that its mission is to ensure that artificial general intelligence benefits all of humanity. Governments, too, have become obsessed with the opportunities AGI might present, as well as possible existential threats, while the media (including this magazine, naturally) report on claims that we have already seen sparks of AGI in LLM systems.

Despite all this, it isn't always clear what AGI really means. Indeed, that is the subject of heated debate in the AI community, with some insisting it is a useful goal and others that it is a meaningless figment that betrays a misunderstanding of the nature of intelligence and our prospects for replicating it in machines. "It's not really a scientific concept," says Melanie Mitchell at the Santa Fe Institute in New Mexico.

Artificial human-like intelligence and superintelligent AI have been staples of science fiction for centuries. But the term AGI took off around 20 years ago when it was used by the computer scientist Ben Goertzel and Shane Legg, co-founder of Google DeepMind.

Read more:

What is artificial general intelligence, and is it a useful concept? - New Scientist

22 jobs artificial general intelligence (AGI) may replace and 10 jobs it could create – Livescience.com

The artificial intelligence (AI) revolution is here, and it's already changing our lives in a wide variety of ways. From chatbots to sat-nav, AI has revolutionized the technological space but in doing so, it may be set to take over a wide variety of jobs, particularly those involving labor-intensive manual tasks.

But it's not all bad news: as with most new technologies, the hypothetical advent of artificial general intelligence (AGI) where machines are smarter than humans and can apply what they learn across multiple disciplines could also lead to new roles. So what might the job market of the near future look like, and could your job be at risk?

One of the most mind-numbing and tedious jobs around today, data entry will surely be one of the first roles supplanted by AI. Instead of a human laboring over endless data sets and fiddly forms for hours on end, AI systems will be able to input and manage large amounts of data quickly and seamlessly, hopefully freeing up human workers for much more productive tasks.

You might already have endured robotic calls asking if you have been the victim of an accident that wasn't your fault, or whether you're keen to upgrade your long-distance calling plan - but this could be just a taste of things to come. AI services could easily take the work of a whole call center, automatically dialling hundreds, if not thousands, of unsuspecting victims to spread the word, whether you like it or not.

On the friendlier side, AI customer service agents are already a common sight on the websites of many major companies. Often in the form of chatbots, these agents offer a first line of support, before deferring to a human where needed. In the not too distant future, though, expect the AI to take over completely, walking customers through their complaints or queries from start to finish.

Restaurant bookings can be a hassle, as overworked staff or maître d's try to juggle existing reservations with no-shows and chancers who try their arm at getting a last-minute slot. Booking a table will soon be a whole lot easier, however, with an entirely computerized system able to allocate slots and spaces with ease, and even juggle late cancellations or alterations without the need for anyone to lose their spot.

Although image generation has grabbed much of the headlines, AI voice creation has become a growing presence in the entertainment and creative world. Offering potentially unlimited customization options, directors and producers can now create a voice - whatever tone, style or accent they require - which is then able to say whatever they desire, without the need for costly retakes or ever getting tired.

Text generation has quickly become one of the most-used aspects of AI technology, with copilots and other tools able to quickly generate large amounts of text based on a simple prompt. Whether you're looking to fill your new website with business-focused copy or offer more detail on your latest product launch, AI text generation provides a quick and easy way to do whatever you need to do.

In a similar vein, many of the leading website builder services today offer a fully AI-powered service, allowing you to create the page of your dreams simply by entering a few prompts. From start-ups to sole traders and all the way to big business, there's no need to fiddle around with templates - simply tell the platform what you're after, and a personalized website will be yours to customize or publish in moments.

This one may still sound a bit more like the realm of science fiction, but with cars getting smarter by the year, fully AI-powered driving is not too much of a pipe dream any more. Far from the basic autopilot tools on offer today, the cars of the future may well be able to not just operate independently, but provide their passengers with a fully-curated experience, from air conditioning at just the right level, to your favorite radio station.

Another position that is based around humans taking in huge amounts of data and creating reports, accounting is set for an AI revolution that could see many roles replaced. No need to spend hours collating receipts and entering numbers into a spreadsheet when AI can quickly scan, identify and upload all the information needed, taking the stress out of tax season and answering any queries or questions with ease.

The legal industry is another one that is dominated by large amounts of data and paperwork, and also one that is dominated by role-specific processes and even language. This makes it another prime candidate for AI, which will be able to automate the lengthy data analysis and entry actions undertaken by paralegals and legal assistants today - although, given the scale or importance of the case involved, it may still be wise to retain some kind of human element.

Signing in for an appointment or a meeting is another job that many believe can easily be done by AI platforms. Rather than needing to bother or distract a human from their job, simply check in on a display screen, with your visitor's badge or meeting confirmation registered in seconds, allowing you (and everyone else) to get on with your day.

Similar to AI drivers, autonomous vehicles and robots powered by AI systems could soon be taking the role of delivery people. After scanning the list of destinations for any given day, the vehicle or platform would be able to quickly calculate the most efficient route, ensuring no waiting around all day for your package, as well as being able to instantly flag any issues or missed deliveries.

In a boost to current spell checking tools, it may be that AI systems eventually graduate from suggesting or writing content to helping check it for mistakes. Once trained on a style guide or content guidelines, an AI editor could quickly scan through articles, documents and filings to spot any issues - a particularly handy speed boost in highly regulated industries such as banking, insurance or healthcare - before flagging possible problems to a human supervisor.

Away from the written word, AI-powered platforms could soon be helping compose the next great pieces of music. Taking inspiration from vast libraries of existing pieces, these futuristic musicians could quickly dream up everything from film soundtracks to radio jingles, once again meaning companies or organizations would no longer need to pay human performers for day-long sessions consisting of multiple takes.

Another area which relies on quickly spotting trends and patterns among huge tranches of data, the statistics field could be quickly swamped by AI platforms. Whether it is at a business level, where companies could look to spot potential growth opportunities or risky situations, all the way down to the sports stats used by commentators and fans alike, AI can quickly come up with the figures needed.

A job that has already declined in importance over the past few years thanks to the emergence and widespread adoption of centralized collaboration tools, the role of project manager is another sure-fire target for AI. Rather than having a designated manager trying to keep tabs on the work being done by a number of disparate teams, an AI-powered central solution could collate all the progress in a single location, allowing everyone to view the latest updates and stay on top of their work.

We're already seeing the beginning of AI taking over the image design and generation space, with animation set to be one of the first fields to feel the effect. As more and more advanced AI programs emerge, creating any kind of customized animation will soon be easier than ever, with production studios able to easily create the movies, TV shows and other media they require.

In a similar vein to the entertainment industry, creating designs for new products, advertising campaigns and more will doubtless soon be another field dominated by AI. With a simple prompt, companies will be able to create the graphics they need, with potentially endless customization options that can be carried out instantly, with no need for back-and-forth with human designers.

Keeping track of potential security risks is another task that could be easily handled by AI, which will be able to continuously monitor multiple data fields and sensors to spot issues or threats before they take hold. Once detected, the systems would hopefully be able to take proactive action to lock down valuable data or company platforms, while alerting human agents and managers to ensure everything remains protected.

Many of us are perfectly comfortable booking and scheduling our vacations independently, but sometimes you want all of the stress of planning taken off your hands. Rather than leaving it to a human agent, AI travel service platforms could gather all of your requirements and come up with a tailored solution or itinerary exactly sculpted to your needs, without endless back and forth, taking all of the hassle out of your vacation planning.

Making assessments on the viability of insurance applications can be a lengthy process, with agents needing to take into consideration a huge number of potential risks and other criteria, often via specific formulae or structures. Rather than a human needing to spend all this time, AI agents could quickly scan through all the information provided, coming up with a decision much faster and more effectively.

One final field that is again dominated by analyzing huge amounts of data, past knowledge, and spotting upcoming trends and actions before they happen, stock trading could also quickly become dominated by AI. AI systems will be able to speedily act to make the best deals for financial firms in the blink of an eye, outpacing and outperforming human traders with ease, and possibly leading to even bigger profits.

First, and perhaps most obviously, will be an increase in roles for people looking to advise businesses exactly what kind of AI they should be utilizing. Simply grabbing as many AI tools and services as possible may have a tremendously destabilizing effect on a business, so having an expert who is able to outline the exact benefits and risks of specific technologies will become increasingly important for companies of all sizes.

In a similar vein, getting the most out of your company's new AI tools will be vital, so having trainers skilled in the right services will be absolutely critical. The ability to suggest to workers at all levels what they can utilize AI for will be incredibly useful for businesses everywhere, walking employees through the various platforms and educating them about any possible ill effects.

With chatbots and virtual agents becoming the main entry point for people encountering AI, knowing just how to communicate with such systems is going to be vital to making the relationship productive. Having experts who know the best way to talk to models such as ChatGPT, especially when it comes to phrasing specific questions or prompts, will be increasingly important as our dependence on AI models increases.

Once we're happy with how we communicate with AI models, the next big obstacle might be understanding what keeps them happy - or at least, productive. We may soon see experts who, much like human therapists, are engaged with AI models to try and understand what makes them tick - including why they might show bias or toxicity - in order to make our relationships with them more effective overall.

On the occasion that something does go wrong - whether that's a poorly worded corporate email, or an advertising campaign that features an embarrassing slip-up - there will be a need for crisis managers who can step in and look to quickly defuse the situation. This may become increasingly important in situations where AI may put sensitive data or even lives at risk, although hopefully such incidents will be rare.

The next step along from a crisis involving AI agents or systems may be lawyers or legal experts who specialize in dealing with non-human creators. The ability to represent a defendant who isn't physically present in a courtroom may become increasingly valuable as the role of AI in everyday life, and the risks it poses, becomes more prevalent - especially as business data or personal information gets involved.

With AI set to push the limits of what can be done with analysis and data processing, it may be that some companies looking to adopt new tools are simply not equipped to handle the new technology. Stress testers will be able to evaluate the status of your tech stack and network to make sure that any AI tools your business is set to use don't have the opposite effect and push everything to breaking point.

With content creation becoming an increasingly important role for AI, we're likely to see such images, audio and video appearing more frequently in everyday life. But we're already seeing backlash against obviously AI-generated content littered with errors, like extra fingers on humans, or nonsense alphabets in advertising. Having a human editor that is able to audit this content and ensure it is accurate, and fit for human consumption, could be a vital new role.

In a similar vein, AI-generated content may also need a human sense-checking it before it hits the public domain. Similar to the work currently being done by proofreaders and editors on human-produced content around the world, making sure that AI documents flow properly and sound legitimate will be another crucial consideration, and should lead to a growth in these sorts of roles.

Finally, despite the efficiency and effectiveness of AI-generated content, there will still always be room for the human touch. Much like we already have authentic artists, or artisans who specialize in handmade goods, it may soon be that we have creators and painters who strive for their work to be authentically human, setting them apart from the AI hordes.

See the rest here:

22 jobs artificial general intelligence (AGI) may replace and 10 jobs it could create - Livescience.com

OpenAI departures: Why can't former employees talk, but the new ChatGPT release can? – Vox.com

Editor's note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman's tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

On Monday, OpenAI announced exciting new product news: ChatGPT can now talk like a human.

It has a cheery, slightly ingratiating feminine voice that sounds impressively non-robotic, and a bit familiar if you've seen a certain 2013 Spike Jonze film. "Her," tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.

But the product release of ChatGPT 4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company's co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, as well as that of his co-team leader Jan Leike (who we put on the Future Perfect 50 list last year).

The resignations didn't come as a total surprise. Sutskever had been involved in the boardroom revolt that led to Altman's temporary firing last year, before the CEO quickly returned to his perch. Sutskever publicly regretted his actions and backed Altman's return, but he's been mostly absent from the company since, even as other members of OpenAI's policy, alignment, and safety teams have departed.

But what really stirred speculation was the radio silence from former employees. Sutskever posted a pretty typical resignation message, saying, "I'm confident that OpenAI will build AGI that is both safe and beneficial ... I am excited for what comes next."

Leike ... didn't. His resignation message was simply: "I resigned." After several days of fervent speculation, he expanded on this on Friday morning, explaining that he was worried OpenAI had shifted away from a safety-focused culture.

Questions arose immediately: Were they forced out? Is this delayed fallout of Altman's brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.

It turns out there's a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI, has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

While nondisclosure agreements aren't unusual in highly competitive Silicon Valley, putting an employee's already-vested equity at risk for declining or violating one is. For workers at startups like OpenAI, equity is a vital form of compensation, one that can dwarf the salary they make. Threatening that potentially life-changing money is a very effective way to keep former employees quiet.

OpenAI did not respond to a request for comment in time for initial publication. After publication, an OpenAI spokesperson sent me this statement: "We have never canceled any current or former employees' vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit."

Sources close to the company I spoke to told me that this represented a change in policy as they understood it. When I asked the OpenAI spokesperson if that statement represented a change, they replied, "This statement reflects reality."

On Saturday afternoon, a little more than a day after this article published, Altman acknowledged in a tweet that there had been a provision in the company's off-boarding documents about potential equity cancellation for departing employees, but said the company was in the process of changing that language.

All of this is highly ironic for a company that initially advertised itself as OpenAI - that is, as committed in its mission statements to building powerful systems in a transparent and accountable manner.

OpenAI long ago abandoned the idea of open-sourcing its models, citing safety concerns. But now it has shed the most senior and respected members of its safety team, which should inspire some skepticism about whether safety is really the reason why OpenAI has become so closed.

OpenAI has spent a long time occupying an unusual position in tech and policy circles. Their releases, from DALL-E to ChatGPT, are often very cool, but by themselves they would hardly attract the near-religious fervor with which the company is often discussed.

What sets OpenAI apart is the ambition of its mission: to ensure that artificial general intelligence - AI systems that are generally smarter than humans - benefits all of humanity. Many of its employees believe that this aim is within reach; that with perhaps one more decade (or even less) and a few trillion dollars the company will succeed at developing AI systems that make most human labor obsolete.

Which, as the company itself has long said, is as risky as it is exciting.

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems," a recruitment page for Leike and Sutskever's team at OpenAI states. "But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade."

Naturally, if artificial superintelligence in our lifetimes is possible (and experts are divided), it would have enormous implications for humanity. OpenAI has historically positioned itself as a responsible actor trying to transcend mere commercial incentives and bring AGI about for the benefit of all. And they've said they are willing to do that even if that requires slowing down development, missing out on profit opportunities, or allowing external oversight.

"We don't think that AGI should be just a Silicon Valley thing," OpenAI co-founder Greg Brockman told me in 2019, in the much calmer pre-ChatGPT days. "We're talking about world-altering technology. And so how do you get the right representation and governance in there? This is actually a really important focus for us and something we really want broad input on."

OpenAI's unique corporate structure - a capped-profit company ultimately controlled by a nonprofit - was supposed to increase accountability. "No one person should be trusted here. I don't have super-voting shares. I don't want them," Altman assured Bloomberg's Emily Chang in 2023. "The board can fire me. I think that's important." (As the board found out last November, it could fire Altman, but it couldn't make the move stick. After his firing, Altman made a deal to effectively take the company to Microsoft, before being ultimately reinstated with most of the board resigning.)

But there was no stronger sign of OpenAI's commitment to its mission than the prominent roles of people like Sutskever and Leike, technologists with a long history of commitment to safety and an apparently genuine willingness to ask OpenAI to change course if needed. When I said to Brockman in that 2019 interview, "You guys are saying, 'We're going to build a general artificial intelligence,'" Sutskever cut in. "We're going to do everything that can be done in that direction while also making sure that we do it in a way that's safe," he told me.

Their departure doesn't herald a change in OpenAI's mission of building artificial general intelligence - that remains the goal. But it almost certainly heralds a change in OpenAI's interest in safety work; the company hasn't announced who, if anyone, will lead the superalignment team.

And it makes it clear that OpenAI's concern with external oversight and transparency couldn't have run all that deep. If you want external oversight and opportunities for the rest of the world to play a role in what you're doing, making former employees sign extremely restrictive NDAs doesn't exactly follow.

This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize, tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company's leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world's input into how to do it justly and wisely.

But when there's real money at stake - and there are astounding sums of real money at stake in the race to dominate AI - it becomes clear that they probably never intended for the world to get all that much input. Their process ensures former employees - those who know the most about what's happening inside OpenAI - can't tell the rest of the world what's going on.

The company's website may have high-minded ideals, but its termination agreements are full of hard-nosed legalese. It's hard to exercise accountability over a company whose former employees are restricted to saying "I resigned."

ChatGPT's new cute voice may be charming, but I'm not feeling especially enamored.

Update, May 18, 7:30 pm ET: This story was published on May 17 and has been updated multiple times, most recently to include Sam Altman's response on social media.

A version of this story originally appeared in the Future Perfect newsletter.


Continued here:

OpenAI departures: Why can't former employees talk, but the new ChatGPT release can? - Vox.com

Meta AI Head: ChatGPT Will Never Reach Human Intelligence – PYMNTS.com

Meta's chief AI scientist thinks large language models will never reach human intelligence.

Yann LeCun asserts that artificial intelligence (AI) large language models (LLMs) such as ChatGPT have a limited grasp on logic, the Financial Times (FT) reported Wednesday (May 21).

These models, LeCun told the FT, "do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan ... hierarchically."

He argued against depending on LLMs to reach human-level intelligence, as these models need the right training data to answer prompts correctly, thus making them intrinsically unsafe.

LeCun is instead working on a totally new cohort of AI systems that aim to power machines with human-level intelligence, though this could take 10 years to achieve.

The report notes that this is a potentially risky gamble, as many investors are hoping for quick returns on their AI investments. Meta recently saw its value shrink by almost $200 billion after CEO Mark Zuckerberg pledged to up spending and turn the tech giant into the leading AI company in the world.

Meanwhile, other companies are moving forward with enhanced LLMs in hopes of creating artificial general intelligence (AGI), or machines whose cognition surpasses that of humans.

For example, this week saw AI firm Scale raise $1 billion in a Series F funding round that valued the startup at close to $14 billion, with founder Alexandr Wang discussing the company's AGI ambitions in the announcement.

Hours later, the French startup called H revealed it had raised $220 million, with CEO Charles Kantor telling Bloomberg News the company is working toward "full-AGI."

However, some experts question AI's ability to think like humans. Among them is Akli Adjaoute, who has spent 30 years in the AI field and recently authored the book "Inside AI."

Rather than speculating about whether the technology will think and reason, he views AI as an effective tool, stressing the importance of understanding its roots in data and its limitations in replicating human intelligence.

"AI does not have the ability to understand the way that humans understand," Adjaoute told PYMNTS CEO Karen Webster.

"It follows patterns. As humans, we look for patterns. For example, when I recognize the number 8, I don't see two circles. I see one. I don't need any extra power or cognition. That's what AI is based on. It's the recognition of algorithms and that's why they're designed for specific tasks."

Go here to read the rest:

Meta AI Head: ChatGPT Will Never Reach Human Intelligence - PYMNTS.com

Will superintelligent AI sneak up on us? New study offers reassurance – Nature.com

Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy

Will an artificial intelligence (AI) superintelligence appear suddenly, or will scientists see it coming, and have a chance to warn the world? That's a question that has received a lot of attention recently, with the rise of large language models, such as ChatGPT, which have achieved vast new abilities as their size has grown. Some findings point to "emergence", a phenomenon in which AI models gain intelligence in a sharp and unpredictable way. But a recent study calls these cases mirages - artefacts arising from how the systems are tested - and suggests that innovative abilities instead build more gradually.

"I think they did a good job of saying nothing magical has happened," says Deborah Raji, a computer scientist at the Mozilla Foundation who studies the auditing of artificial intelligence. "It's a really good, solid, measurement-based critique."

The work was presented last week at the NeurIPS machine-learning conference in New Orleans.

Large language models are typically trained using huge amounts of text, or other information, which they use to generate realistic answers by predicting what comes next. Even without explicit training, they manage to translate language, solve mathematical problems and write poetry or computer code. The bigger the model is - some have more than a hundred billion tunable parameters - the better it performs. Some researchers suspect that these tools will eventually achieve artificial general intelligence (AGI), matching and even exceeding humans on most tasks.


The new research tested claims of emergence in several ways. In one approach, the scientists compared the abilities of four sizes of OpenAI's GPT-3 model to add up four-digit numbers. Looking at absolute accuracy, performance differed between the third and fourth size of model from nearly 0% to nearly 100%. But this trend is less extreme if the number of correctly predicted digits in the answer is considered instead. The researchers also found that they could dampen the curve by giving the models many more test questions - in this case, the smaller models answer correctly some of the time.
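To see how the choice of metric changes the picture, here is a small illustrative sketch (not the study's code); the per-digit success probabilities are invented, and the point is only that the same smooth improvement looks abrupt under an exact-match score.

```python
# Illustrative sketch only (not the study's code). A model's per-digit skill on
# four-digit addition rises smoothly, but exact-match accuracy jumps sharply.
import random

random.seed(0)

def simulate(p_digit, n_questions=2000, n_digits=4):
    """Simulate a model that gets each digit right independently with
    probability p_digit (an assumed value, not a measured one)."""
    exact = digit_hits = 0
    for _ in range(n_questions):
        correct = sum(random.random() < p_digit for _ in range(n_digits))
        digit_hits += correct
        exact += correct == n_digits
    return exact / n_questions, digit_hits / (n_questions * n_digits)

for p in (0.30, 0.55, 0.80, 0.97):  # hypothetical skill of four model sizes
    exact_acc, digit_acc = simulate(p)
    print(f"per-digit skill {p:.2f} -> exact-match {exact_acc:.2f}, per-digit accuracy {digit_acc:.2f}")
```

Under the per-digit metric the four hypothetical models improve gradually, while exact-match accuracy stays near zero for the smaller ones and then climbs steeply, mirroring the effect the researchers describe.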

Next, the researchers looked at the performance of Google's LaMDA language model on several tasks. The ones for which it showed a sudden jump in apparent intelligence, such as detecting irony or translating proverbs, were often multiple-choice tasks, with answers scored discretely as right or wrong. When, instead, the researchers examined the probabilities that the models placed on each answer - a continuous metric - signs of emergence disappeared.

Finally, the researchers turned to computer vision, a field in which there are fewer claims of emergence. They trained models to compress and then reconstruct images. By merely setting a strict threshold for correctness, they could induce apparent emergence. "They were creative in the way that they designed their investigation," says Yejin Choi, a computer scientist at the University of Washington in Seattle who studies AI and common sense.
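The thresholding effect can be shown with a few made-up numbers: a continuous quality score that rises smoothly with scale turns into an apparent sudden jump once a strict pass/fail cut-off is imposed. The scores and threshold below are invented for illustration.

```python
# Hypothetical numbers, for illustration only: smooth continuous scores look
# "emergent" once a strict pass/fail threshold is applied.
scales = [1, 2, 4, 8, 16, 32]                    # made-up relative model sizes
quality = [0.40, 0.55, 0.68, 0.79, 0.88, 0.95]   # made-up reconstruction scores
THRESHOLD = 0.85                                  # strict correctness cut-off

for scale, score in zip(scales, quality):
    verdict = "pass" if score >= THRESHOLD else "fail"
    print(f"scale {scale:>2}: continuous score {score:.2f} -> {verdict}")
# The continuous column rises gradually; the pass/fail column flips abruptly
# between scales 8 and 16, mimicking a sudden emergent ability.
```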

Study co-author Sanmi Koyejo, a computer scientist at Stanford University in Palo Alto, California, says that it wasn't unreasonable for people to accept the idea of emergence, given that some systems exhibit abrupt phase changes. He also notes that the study can't completely rule it out in large language models - let alone in future systems - but adds that "scientific study to date strongly suggests most aspects of language models are indeed predictable".

Raji is happy to see the community pay more attention to benchmarking, rather than to developing neural-network architectures. She'd like researchers to go even further and ask how well the tasks relate to real-world deployment. For example, does acing the LSAT exam for aspiring lawyers, as GPT-4 has done, mean that a model can act as a paralegal?

The work also has implications for AI safety and policy. "The AGI crowd has been leveraging the emerging-capabilities claim," Raji says. Unwarranted fear could lead to stifling regulations or divert attention from more pressing risks. "The models are making improvements, and those improvements are useful," she says. "But they're not approaching consciousness yet."

Originally posted here:

Will superintelligent AI sneak up on us? New study offers reassurance - Nature.com

Amazon reportedly preparing paid Alexa version powered by its own Titan AI model – SiliconANGLE News

Amazon.com Inc. engineers are reportedly working on a new, more capable version of Alexa that is expected to become available through a paid subscription.

Sources familiar with the project told CNBC today that the service is set to roll out later this year. Apple Inc. is also expected to introduce a new version of Siri, its competing artificial intelligence assistant, in the coming months. Both the iPhone maker and Amazon reportedly plan to include new generative AI features in their respective product updates.

The upcoming paid version of Alexa will reportedly run on an algorithm from the Amazon Titan series of large language models. Introduced last year by the company's cloud unit, the series comprises three LLMs with varying capabilities and pricing.

The most advanced Titan model, Amazon Titan Text Premier, can process prompts that contain up to 32,000 tokens' worth of information. A token is a unit of data that comprises a few letters or numbers. The model includes a RAG, or retrieval-augmented generation, feature that allows it to incorporate information from external applications into prompt responses.
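As a rough illustration of what a RAG feature does, the sketch below retrieves the most relevant snippets for a query and prepends them to the prompt before calling a model. It is a generic, hypothetical example, not Amazon's Titan or Bedrock API; retrieve() and call_llm() are stand-ins.

```python
# Generic, hypothetical sketch of retrieval-augmented generation (RAG);
# not Amazon's Titan or Bedrock API.
def retrieve(query, documents, k=2):
    """Naive keyword-overlap retriever; production systems usually use vector search."""
    words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt):
    """Placeholder for whatever model endpoint is actually available."""
    return f"<answer grounded in a prompt of {len(prompt)} characters>"

def rag_answer(query, documents):
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

docs = ["Order #123 ships on Friday.", "Returns are accepted within 30 days."]
print(rag_answer("When does order #123 ship?", docs))
```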

On the other end of the price range is Amazon Titan Text Lite. Positioned as the Titan series' entry-level offering, the model supports prompts with up to 4,000 tokens and is geared towards relatively simple text processing tasks. It's unclear if Amazon plans to power the next version of Alexa with an existing Titan model or a yet-unannounced future addition to the series.

According to today's report, the generative AI model that will underpin Alexa costs two cents per query to run. For comparison, generating 1,000 tokens of output with the entry-level Titan Text Lite model costs 100 times less for Amazon Web Services Inc. customers. That suggests the LLM in the upgraded version of Alexa features a significantly more advanced architecture.

Amazon will reportedly charge for the AI assistant's upgraded version to offset the cost of the underlying LLM. According to one of CNBC's sources, the company is considering asking $20 per month, the price at which OpenAI sells ChatGPT Plus. Another tipster indicated that the Alexa subscription might become available for a single-digit dollar amount.
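Taking only the figures quoted in the report, a quick back-of-envelope calculation shows why the economics matter; none of these numbers are official Amazon pricing.

```python
# Back-of-envelope arithmetic using only figures quoted in the report;
# these are reported estimates, not official Amazon pricing.
COST_PER_QUERY = 0.02   # reported ~2 cents of inference cost per Alexa query
SUBSCRIPTION = 20.00    # the reported $20-per-month price under consideration

breakeven_queries = SUBSCRIPTION / COST_PER_QUERY
print(f"Queries per month before inference cost exceeds the fee: {breakeven_queries:.0f}")

# The report says 1,000 output tokens on entry-level Titan Text Lite cost about
# 100 times less than one query, which implies roughly:
implied_lite_cost = COST_PER_QUERY / 100
print(f"Implied Titan Text Lite cost per 1,000 output tokens: ${implied_lite_cost:.4f}")
```

At the reported figures, a $20 subscriber could make roughly 1,000 queries a month before Amazon's inference cost alone exceeded the fee, which helps explain why a cheaper single-digit price point is also being floated.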

The team that develops the AI assistant has reportedly undergone a massive reorganization as part of an effort by Amazon to streamline its business operations. It's believed that many members of the team, which comprises thousands of employees, now focus on developing artificial general intelligence. This is a term for a hypothetical future type of AI that can perform a wide range of tasks with human-like accuracy.


See the original post here:

Amazon reportedly preparing paid Alexa version powered by its own Titan AI model - SiliconANGLE News

AI consciousness: scientists say we urgently need answers – Nature.com

A standard method to assess whether machines are conscious has not yet been devised. Credit: Peter Parks/AFP via Getty

Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows and they are expressing concern about the lack of inquiry into the question.

In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?

Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden's executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes.

"With everything that's going on in AI, inevitably there's going to be other adjacent areas of science which are going to need to catch up," Mason says. "Consciousness is one of them."

The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany.

It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says. Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress, says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.


Such concerns are no longer just science fiction. Companies such as OpenAI - the firm that created the chatbot ChatGPT - are aiming to develop artificial general intelligence, a deep-learning system that's trained to perform a wide range of intellectual tasks similar to those humans can do. Some researchers predict that this will be possible in 5-20 years. Even so, the field of consciousness research is very undersupported, says Mason. He notes that to his knowledge, there has not been a single grant offer in 2023 to study the topic.

The resulting information gap is outlined in the AMCS leaders' submission to the UN High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to release a report in mid-2024 on how the world should govern AI technology. The AMCS leaders' submission has not been publicly released, but the body confirmed to the authors that the group's comments will be part of its foundational material: documents that inform its recommendations about global oversight of AI systems.

Understanding what could make AI conscious, the AMCS researchers say, is necessary to evaluate the implications of conscious AI systems for society, including their possible dangers. Humans would need to assess whether such systems share human values and interests; if not, they could pose a risk to people.

But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? If we don't recognize that an AI system has become conscious, we might inflict pain on a conscious entity, Long says: "We don't really have a great track record of extending moral consideration to entities that don't look and act like us." Wrongly attributing consciousness would also be problematic, he says, because humans should not spend resources to protect systems that don't need protection.

Some of the questions raised by the AMCS comments to highlight the importance of the consciousness issue are legal: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

And then there is the need for scientists to educate others. As companies devise ever-more capable AI systems, the public will wonder whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.

Other consciousness researchers echo this concern. Philosopher Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behaviour that people are justifiably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas other members of the public might dismiss or even ridicule concerns over AI consciousness.

To mitigate the risks, the AMCS comments call on governments and the private sector to fund more research on AI consciousness. It wouldn't take much funding to advance the field: despite the limited support to date, relevant work is already underway. For example, Long and 18 other researchers have developed a checklist of criteria to assess whether a system has a high chance of being conscious. The paper, published in the arXiv preprint repository in August and not yet peer reviewed, derives its criteria from six prominent theories explaining the biological basis of consciousness.
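
A minimal, purely illustrative sketch of how such a checklist-style assessment could be organized in code. The theory names echo ones commonly discussed in the consciousness literature, but the specific indicator descriptions, the example judgements, and the assess() helper are hypothetical and are not taken from the paper itself; a human evaluator, not the code, would supply each judgement.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    theory: str        # theory the indicator property is derived from
    description: str   # property to look for in the system's design
    satisfied: bool    # judgement supplied by a human evaluator

def assess(indicators: list[Indicator]) -> float:
    """Return the fraction of indicator properties judged satisfied."""
    if not indicators:
        return 0.0
    return sum(i.satisfied for i in indicators) / len(indicators)

# Hypothetical entries for illustration only.
checklist = [
    Indicator("Global workspace theory",
              "information is broadcast to multiple specialized modules", False),
    Indicator("Recurrent processing theory",
              "processing is recurrent rather than purely feed-forward", True),
    Indicator("Higher-order theories",
              "the system represents its own internal states", False),
]

print(f"Indicator properties satisfied: {assess(checklist):.0%}")
```

A score like this would only flag systems worth closer scrutiny; it would not by itself establish that a system is conscious.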

"There's lots of potential for progress," Mason says.

See the article here:

AI consciousness: scientists say we urgently need answers - Nature.com

AI Technologies Set to Revolutionize Multiple Industries in Near Future – Game Is Hard

According to Nvidia CEO Jensen Huang, the world is on the brink of a transformative era in artificial intelligence (AI) that will see it rival human intelligence within the next five years. While AI is already making significant strides, Huang believes that the true breakthrough will come in the realm of artificial general intelligence (AGI), which aims to replicate the range of human cognitive abilities.

Nvidia, a prominent player in the tech industry known for its high-performance graphics processing units (GPUs), has experienced a surge in business as a result of the growing demand for its GPUs for training AI models and handling complex workloads across various sectors. In fact, the company's fiscal third-quarter revenue roughly tripled year over year to $18.12 billion, with net income of $9.24 billion.

An important milestone for Nvidia was its delivery of the world's first AI supercomputer to OpenAI, an AI research lab co-founded by Elon Musk. This partnership with Musk, who has shown great interest in AI technology, signifies the immense potential of AI advancements. Huang expressed confidence in the stability of OpenAI, despite recent upheavals, emphasizing the critical role of effective corporate governance in such ventures.

Looking ahead, Huang envisions a future where the competitive landscape of the AI industry will foster the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology. While current limitations exist, including the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.

Nvidia's success in 2023 has exceeded expectations, as the company consistently surpassed earnings projections and saw its stock rise by approximately 240%. The impressive third-quarter revenue of $18.12 billion further solidifies investor confidence in the promising AI market. Analysts maintain a positive outlook on Nvidia's long-term potential in the AI and semiconductor sectors, despite concerns about whether this pace of growth is sustainable. The future of AI looks bright, with transformative applications expected across various industries in the near future.

FAQ:

Q: What is the transformative era in artificial intelligence (AI) that Nvidia CEO Jensen Huang mentions? A: According to Huang, the transformative era in AI will see it rival human intelligence within the next five years, particularly in the realm of artificial general intelligence (AGI).

Q: Why has Nvidia experienced a surge in business? A: Nvidia's high-performance graphics processing units (GPUs) are in high demand for training AI models and handling complex workloads across various sectors, leading to a significant increase in the company's revenue.

Q: What is the significance of Nvidia delivering the world's first AI supercomputer to OpenAI? A: Nvidia's partnership with OpenAI and the delivery of the AI supercomputer highlight the immense potential of AI advancements, as well as confidence in OpenAI's stability and the critical role of effective corporate governance in such ventures.

Q: What is Nvidia's vision for the future of the AI industry? A: Nvidia envisions a future where the competitive landscape of the AI industry will lead to the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology.

Q: What are the current limitations and future capabilities of AI technologies according to Huang? A: While there are still limitations, such as the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.

Key Terms:

Artificial intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems, to perform tasks that typically require human intelligence.

Artificial general intelligence (AGI): AI that can perform any intellectual task that a human being can do.

Graphics processing unit (GPU): A specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.
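
To make the GPU key term above concrete, here is a minimal sketch, assuming PyTorch is installed, of moving a toy computation onto a GPU when one is available. It is illustrative only and is not code from Nvidia, OpenAI, or any other company mentioned in the article.

```python
import torch

# Use a CUDA GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A toy matrix multiplication: the kind of highly parallel arithmetic
# that GPUs accelerate when training AI models.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b

print(f"Computed a {c.shape[0]}x{c.shape[1]} matrix product on: {device}")
```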

Suggested Related Links:

Nvidia website
OpenAI website
Artificial intelligence on Wikipedia

Continued here:

AI Technologies Set to Revolutionize Multiple Industries in Near Future - Game Is Hard