
Opposing players aren’t fond of Caitlin Clark … which should be good for the WNBA – Yahoo Sports

Posted: June 6, 2024 at 8:49 am

It's pretty obvious that some WNBA players aren't too fond of Caitlin Clark.

Which should turn out to be a pretty good thing for the WNBA.

The anti-Caitlin sentiment has been growing clearer with each passing game. It flared up in the biggest way Saturday, when Chicago's Chennedy Carter threw a shoulder into an unsuspecting Clark, knocking her to the ground. Carter's teammate, Angel Reese, an old Clark rival from college, cheered the move on from the bench.

This was a clear escalation of head-to-head play between the two over the previous few possessions, during which Clark appeared to throw an elbow at Carter and to say something to her face.

Clark is a marked woman, and it's adding plenty of spice to a season in which, unlike back in college, she can't dominate the competition.

Anyone tuning in expecting 40-point performances with Magic Johnson-like passes was always going to be disappointed. Watching Clark fight through adversity and rack up rivals is going to have to be drama enough. Besides, Indiana (2-9) is terrible.

Clark went just 1 of 10 from the field and scored a meager three points in a Sunday blowout loss to New York. She left that game with an apparent ear injury after getting bumped on a screen.

The league clearly believes it can physically knock her off her game, a fairly common tactic against young players at all levels of basketball.

"We understand who kind of the head of the monster is on that team," New York veteran guard Sabrina Ionescu said of Clark. "We are trying to just make everything tough and difficult."

Some of it is simply business. Some of it, though, appears personal. Neither one is wrong.

Maybe it's her fame. Maybe it's her money. Maybe it's the attention she commands. Maybe it's just being the hotshot rookie who still needs to prove herself. You'd find all of these motivations in other sports and other circumstances as well.

Or maybe it's just that Clark plays a hard, physical and in-your-face game herself.

Whatever it is, the spice and shoving have become a constant, and that should add a nice bit of interest to things. The most popular sport in America? Controversy. Nothing like some bad blood and uncertainty about what might happen next to draw in fans, or at least keep them as Clark's game comes together.

"Yeah, I wasn't expecting that," Clark said of the shoulder knockdown. "But it's just like, respond, calm down and let your play do the talking. It is what it is. It's a physical game; go make the free throw and then execute on offense. Feel like that's what we did."

Carter, for her part, didn't want to discuss it with the media but made her points on social media.

"Beside three point shooting what does she bring to the table man," Carter asked in a post.

Later she embraced the backlash of those siding with Clark.

"I grew up with all brothers," Carter, a fourth-year player, wrote. "All we did is fight and argue. I love the hate more than the love. I'd rather you hate me [than] love me and I mean that on my dead aunt."

The mere fact that "I mean that on my dead aunt" has entered the lexicon is enough to make this kerfuffle fun.

What would make it even better is if everyone embraced what appears to be Caitlin Clark's mindset: this isn't a big deal. Carter's shoulder was deemed a flagrant 1 foul by the league, and Chicago head coach Teresa Weatherspoon said in a statement that the action was "not appropriate," but there will be no discipline. The way to end this is courtesy of a hard screen or a push back or, best of all, Clark using it as motivation to win.

The fact that Golden State's Draymond Green weighed in, saying the Fever need to sign an enforcer to protect their star (as Green has done for Steph Curry, who was constantly physically challenged), and that most people agreed, is its own small victory for women's sports legitimacy. No kid gloves here. Play ball.

Yes, ideally, every game is played with sportsmanship and respect, but that isn't how the real world, especially competitive sports, works. Nor would many fans even want that.

In a pure business sense, WNBA players should love Caitlin Clark for the sponsorship money, fan attention and media coverage she is bringing to a league that has failed to truly break through in over a quarter century of existence.

Maybe Carter is correct and Clark's rookie game is mostly just 3-point shooting. It's a big adjustment from the college ranks, where there are only a few good teams and players, to the W, with just 144 roster slots featuring the best players in the world. It stands to reason Clark will continue to settle in and show the passing skills, scoring and leadership that she did back at Iowa.

What Clark brings, undoubtedly, is attention. If this had happened a year ago, with another player, only the diehard fans would even know. Or care.

Everything is bigger with Caitlin Clark, which is why that smackdown spun heads, and the talk won't stop until she smacks back in one way or another.

Caitlin Clark was never going to instantly overwhelm the WNBA. Her attempt to get there against those who don't appear to care for her at all will be worth watching.


Posted in Yahoo | Comments Off on Opposing players aren’t fond of Caitlin Clark … which should be good for the WNBA – Yahoo Sports

Colts Pro Bowler, NFLPA rep Ryan Kelly calls out Roger Goodell, blasts talk of 18-game schedule: ‘Absolutely not’ – Yahoo Sports

Posted: at 8:49 am

The NFL has been testing the public-relations waters in floating an 18-game schedule.

Ryan Kelly is here with the counterpoint. The Pro Bowl center and NFLPA representative for the Indianapolis Colts spoke candidly against an expanded schedule on Wednesday. He's "absolutely not" interested in an 18-game slate or commissioner Roger Goodell's efforts to push one.

"Yeah, 18 games sounds great when Roger is saying it on the Pat McAfee podcast," Kelly said in a post-practice media scrum from Colts minicamp. "But until you're the one going out there and putting a helmet on for 18 of those games, yeah, then come talk to me."

Roger, in this instance, is Roger Goodell. Kelly, who's also a vice president on the NFLPA's executive committee, was referring to Goodell's appearance on "The Pat McAfee Show" in April at the NFL Draft in Detroit.

Speaking to McAfee in front of a crowd of fans, Goodell took on the role of cheerleader in promoting an 18-game schedule.

"I think we're good at 17 now," Goodell said. "But, listen, we're looking at how we continue. I'm not a fan of the preseason. I don't think we need three preseason games anymore. I don't buy it."

Goodell then turned and pointed to the crowd.

"I don't think these guys like it either," he continued. " ... The reality is, I think I'd rather replace a preseason game with a regular-season game any day. That's picking quality, right? If we got to 18-2, that's not an unreasonable thing."

Goodell got no pushback from a ginned-up crowd that showed up to watch the second day of the draft in person. Fans cheered on his proposal of a schedule of 18 regular-season and two preseason games.

Kelly didn't find the proposal so reasonable. He lamented the creep of additional games while citing the league's expansion from 16 games to 17 in 2021. That expansion was collectively bargained and made amid the backdrop of increased awareness of the physical toll football takes on players, including concussions and CTE.

"If people understood how hard it was to play 16, then they [add] another one, right?" Kelly said. "They get rid of preseason games. Well, OK. Who's that gonna hurt? The guys who don't have a shot, the undrafted guys or late-round guys that need to go out there and improve themselves.

"The fans see it as they don't watch the preseason games. But they have no idea what goes on inside the building, right?"

The back-and-forth on expansion is happening amid the backdrop of a reported NFLPA proposal to revamp the league's offseason schedule. Some have speculated that the proposal is a precursor to inevitable schedule expansion.

The NFL, meanwhile, is taking cues from fans, who continue to consume as much football as the league will offer. Moving forward, that means multiple games on Christmas, including two in 2024 when the holiday falls on a Wednesday. Goodell clearly hopes that also means an 18-game schedule and the additional revenue it will produce.

Will the NFLPA ultimately sign off? Kelly's not on board, but expansion appears inevitable. It will ultimately come down to a vote of the players, who approved the expansion from 16 to 17 games, and whether ownership will offer enough concessions to entice them in the bargaining process.



French Open 2024: How to watch the Alexander Zverev vs. Alex De Minaur match – Yahoo Sports

Posted: at 8:49 am

Germany's Alexander Zverev plays Australia's Alex De Minaur this Wednesday in the French Open Quarterfinals. (AP Photo/Thibault Camus)

The 2024 French Open at Roland Garros is now in full swing, and it's time for Alexander Zverev's next match. The No. 4 seed on the men's side will face Alex De Minaur this Wednesday, June 5, in the early afternoon (in the US). The quarterfinal match will start shortly after the Sabalenka vs. Andreeva match on Court Philippe-Chatrier. You can find the full order of play at Roland Garros here. Are you ready to watch Alexander Zverev vs. Alex De Minaur at the 2024 French Open? Here's everything you need to know about the tennis tournament at Roland Garros, including the full broadcast schedule, where to stream matches for free and more.

Date: Wednesday, June 5

Time: Afternoon, not before 2:15 p.m. ET

Location: Roland Garros, Paris, FR

Court: Court Philippe-Chatrier

Round: Quarterfinals

TV channel: Tennis Channel

Streaming: Fubo, DirecTV, VPN

No. 4 seed Zverev plays Alex De Minaur this Wednesday in the quarterfinals.

The Zverev vs. De Minaur match will be played on Court Philippe-Chatrier, beginning sometime after Sabalenka vs. Andreeva, but not before 2:15 p.m. ET. You can find the exact order of play at Roland Garros here.

You'll need access to the Tennis Channel to tune into Alexander Zverev's match against Alex De Minaur. This Wednesday, the US broadcast schedule for the French Open is as follows:

Wednesday, June 5: Quarterfinals

No Tennis Channel? No problem. You can always catch an uninterrupted livestream of the tennis tournament with the help of a VPN; more on that below.

Fubo TV's Elite tier will get you access to NBC, NBC Sports and the Tennis Channel, along with 200+ more live channels. At $90 per month, the live TV streaming service is definitely the priciest option on this list, but it still leaves you with major savings compared to a traditional cable package, and it's also a great option for NFL fans. So if you're a sports fan looking for one simple subscription, Fubo might be it for you. Fubo subscribers also get 1,000 hours of cloud DVR storage. The platform offers a free trial period, so you can stream the start of the French Open totally free.

If you want to catch every match of the French Open and don't want to hop around between NBC, Peacock and the Tennis Channel all week: in Australia, a majority of the action is streaming free with ads on 9Now, and in Austria, it's all streaming free with ads on ServusTV.

Don't live in either of those places? Don't worry, you can still stream with the help of a VPN. A VPN (virtual private network) helps protect your data, can mask your IP address and is perhaps most popular for being especially useful in the age of streaming. Whether you're looking to watch Friends on Netflix (which left the U.S. version of the streamer back in 2019) or tune in to the F1 race this weekend without a cable package, a VPN can help you out. Looking to try a VPN for the first time? This guide breaks down the best VPN options for every kind of user.

ExpressVPN offers internet without borders, meaning you can tune into an Austrian or Australian livestream this month as opposed to paying for Peacock and the Tennis Channel for US coverage of the tennis tournament. All you'll need to do is sign up for ExpressVPN, change your server location and then find free livestream coverage on 9Now or ServusTV.

ExpressVPN's added protection, speed and range of location options make it an excellent choice for first-time VPN users looking to stretch their streaming abilities; plus, it's Engadget's top pick for the best streaming VPN. New users can save 49% when they sign up for ExpressVPN's 12-month subscription. Plus, the service offers a 30-day money-back guarantee, in case you're nervous about trying a VPN.

The Roland Garros tennis tournament runs for two weeks, ending with the men's final on June 9.

Unfortunately for US fans, matches start bright and early at 5 a.m. for those in the Eastern time zone (and even earlier, or later depending on how you look at it, for those on Pacific time).

US coverage of the French Open will be split across NBC Sports, the Tennis Channel and Peacock this year. This Sunday and Monday, French Open matches will air live on NBC and Peacock, before the action moves to the Tennis Channel for the week. Then the semifinals and finals will return to NBC/Peacock.

All the NBC coverage will also be available to stream on NBCSports.com and the NBC Sports app for those with an eligible cable or live TV streaming package. For the tennis super fan, the Tennis Channel now offers streaming directly through their app, Tennis Channel+. So if you really want to catch every early morning match (without the help of a VPN), you may want to check out Tennis Channel+.

All times Eastern.

Wednesday, June 5: Quarterfinals

Thursday, June 6: Women's Semis

6 a.m.-2 p.m. - Tennis Channel

11 a.m.-2 p.m. - NBC, Peacock

Friday, June 7: Men's Semis

8 a.m.-4 p.m. - Tennis Channel

11 a.m.-3 p.m. - NBC, Peacock

Saturday, June 8: Women's Final

Sunday, June 9: Men's Final

Men's singles seeds

Novak Djokovic

Jannik Sinner

Carlos Alcaraz

Alexander Zverev

Daniil Medvedev

Andrey Rublev

Casper Ruud

Hubert Hurkacz

Stefanos Tsitsipas

Grigor Dimitrov

Alex de Minaur

Taylor Fritz

Holger Rune

Tommy Paul

Ben Shelton

Nicolas Jarry

Ugo Humbert

Karen Khachanov

Alexander Bublik

Sebastian Baez

Felix Auger-Aliassime

Adrian Mannarino

Francisco Cerundolo

Alejandro Tabilo

Frances Tiafoe

Tallon Griekspoor

Sebastian Korda

Tomas Martin Etcheverry

Arthur Fils

Lorenzo Musetti

Mariano Navone

Cam Norrie

Women's singles seeds

Iga Swiatek

Aryna Sabalenka

Coco Gauff

Elena Rybakina

Marketa Vondrousova

Maria Sakkari

Qinwen Zheng

Ons Jabeur

Jelena Ostapenko

Daria Kasatkina

Danielle Collins

Jasmine Paolini

Beatriz Haddad Maia

Madison Keys

Elina Svitolina

Ekaterina Alexandrova

Liudmila Samsonova

Marta Kostyuk

Victoria Azarenka

Anastasia Pavlyuchenkova

Caroline Garcia

Emma Navarro

Anna Kalinskaya

Barbora Krejcikova

Elise Mertens

Katie Boulter

Linda Noskova

Sorana Cirstea

Veronika Kudermetova

Dayana Yastremska

Leylah Fernandez

Katerina Siniakova

US viewers can tune into NBC's French Open coverage live on NBCSports.com or the NBC Sports app if they have a cable or satellite subscription to log in with.



Why the Caitlin Clark-Chennedy Carter incident has struck such a chord with the public – Yahoo Sports

Posted: at 8:49 am

At a certain point, with all the tinder lying about, a spark was bound to set fire.

Chennedy Carter was the flint. Caitlin Clark was the stone. And days later, the landscape is still raging with ever-growing flames.

It's true that Carter's shoulder check on Clark was not a basketball play. It's also true that this type of competitive physicality happens in basketball, and especially in the WNBA, quite a bit. If things were different, if history had been kinder to women's athletic and professional endeavors, it could have stayed a moment in the season's timeline. A video to put in the bucket for a rivalry feature.

Instead, it gave fuel to the growing discourse around Clark and the WNBA. In the same way leaders have described the rising tide in women's basketball that resulted in another sold-out crowd watching Clark's Indiana Fever defeat Carter's Chicago Sky on Saturday afternoon, the play in question prompted the collision of too many atoms that were already active.

Clark is almost undeniably the most well-known name ever to enter the 28-year-old WNBA. There have been plenty of other superstars, but none came into the league already in national TV commercials and on window stickers at the grocery store. Only the inaugural 1996 players (Sheryl Swoopes, Rebecca Lobo, Lisa Leslie, Cynthia Cooper, etc.) might come close. Because of that, many people are watching the WNBA for the first time. There are media personalities talking about it for the first time, and their takes aren't always rooted in historical knowledge. Players are faced with media coverage and criticism they've rarely received at this level.

Carter's shoulder check was cheap, even within the accepted reality of physical W basketball. It was clear it wasn't a basketball play, nor was it necessary. Referees often go to the monitors to review for possible upgrades on lesser, more incidental acts, and befuddling technical calls could be considered a WNBA hallmark. The flagrant should have been assessed for unnecessary contact, and we should have all moved on.

Before Saturday's game, three players received fouls that were upgraded during in-game reviews. Mercury guard Kahleah Copper received a flagrant 1 on opening night when her shooting hand came down on Kelsey Plum's face. Sparks guard Aari McDonald's foul on Clark was upgraded to a flagrant 1 for a reckless closeout while defending a deep transition 3-pointer. Alyssa Thomas drew the previously most high-profile flagrant this season when she threw Angel Reese to the ground on a rebound opportunity and was ejected with a flagrant 2.

That Clark's most recent incident wasn't reviewed or upgraded in the moment, as it clearly should have been, set the initial spark. Even in the hours after the incident, fans and personalities on social media continued to insist it wasn't a big deal because the play, in fact, was called a technical. Add in the TV angle and slo-mo replays making the hit look worse, with Carter yelling something at Clark as she hit her and Angel Reese jumping up in celebration on the bench, and we had all the ingredients of a good, old-fashioned disagreement. And the talk continues no matter how misguided, with Pat McAfee being another bold, ill-advised example.

Clark followed up a grueling college schedule with 11 games in 20 days for the Fever. That's about one-third of a collegiate season crammed into three weeks, and she's the No. 1 target in defensive game plans for the league's best teams. A lot of the physicality she's facing is part of the game and part of being a star rookie by whom veterans don't want to get embarrassed. Clark herself has said repeatedly she understands the nature of the league, and with a full offseason, she'll have time to bulk up and compete better, just as guards Sabrina Ionescu and Kelsey Plum have done in recent years. She doesn't need anyone to protect her from that reality. She's an actual fan of the game, having grown up attending Minnesota Lynx games with her dad during their dynasty run. She knows.

Other stuff, like that hit from Carter, is borderline and shouldn't be let go without repercussion. And while on paper that meant merely one more free throw and the ball for the Fever, in reality a stronger message should have been sent that that type of play won't be tolerated.

"Physical play, intensity and a competitive spirit are hallmarks of Chicago Sky basketball," Sky head coach Teresa Weatherspoon said in a statement Monday. "Chennedy got caught up in the heat of the moment in an effort to win the game. She and I have discussed what happened and that it was not appropriate, nor is it what we do or who we are."

Weatherspoon is right. Sports are a competitive atmosphere, and the emotions can get away from a player. But the fact that the Hall of Fame player didn't say anything regarding the play after the game, other than her blunt "all they're doing is competing," only dropped more brush on the fire.

Weatherspoon cut off postgame media questions directed to Carter that offered the player an opportunity to explain the incident in her own words. In the Fever's room down the hall, Clark took the high road in answering multiple questions about the interaction and didn't place blame on anyone. "It is what it is," she said a few times. She sits for 10-15 minutes at a time, three times a day on game days, of which she's already had 11 to the Sky's seven, and answers easy, tough and sometimes repetitive questions. Fans see that and react to it.

Meanwhile, Reese, one-half of the headlining rookie duo alongside Clark, declined Saturday to speak with the media. That lit anew the charred branch of media access and player accountability in a league known to ask for more coverage. WNBA media protocol requires teams to make two players and a head coach available in a news conference after a maximum 10-minute cooling-off period. Every other healthy player is required to be made available should they be requested by a media member via written and verbal communication. The arrangement was agreed upon by the WNBA Players Association and the league to replace open locker room access, which was closed ahead of the 2023 season.

Multiple media members requested to speak with Reese on Saturday. It doesn't help that the Sky franchise has a history of not abiding by the rules and often makes access to players difficult. The WNBA fined Reese and the franchise, as it has done with the 2023 Finals runner-up Liberty and 2021 runner-up Mercury.

In the absence of context from the players themselves, the controversy spread further. It opened up room for people, some of whom have never watched women's basketball but saw a clip on their social-media timeline, to fill in their own assumptions and misguided claims about intent.

Carter's only significant postgame comment, "I ain't answering no Caitlin Clark questions," just added to it all. And she kept her feet away from the heat, because while she might not have wanted to answer questions about Clark, she clearly had things to say about the star rookie. She shut off replies and bounced wherever she wanted on social media after the game.

Carter, a 2020 lottery pick who has a rocky history in the league, can talk whatever trash she wants. Anyone who has followed Clark, a well-known talker, to the WNBA should appreciate that. But if you're going to talk trash, stand on it when it matters.

Cheap shot aside, though, the league could use the beef. It used to market itself as "the 144," a nod to the number of roster spots. It now wants to lean into rivalries and marketing superstars, because that's how sports work. More people saw Carter's dustup with Clark live because they tuned in to see Clark, Reese and Kamilla Cardoso. The number of people now eyeing the Sky-Fever rematch is growing.

Years of flagrant fouls predated Carter's and drew significantly less attention. An iconic clip of Diana Taurasi bumping Seimone Augustus and giving her a peck on the cheek in the 2013 playoffs made the rounds this weekend as an example of W drama. Taurasi was issued a technical. They each answered for it in postgame media availability (again, iconic).

Yet, that was a different time on a smaller platform. The game is growing now, and the players need to grow along with it.



Celebrini interviews with Sharks, unsure about going pro yet – Yahoo Sports

Posted: at 8:49 am

Celebrini interviews with Sharks, unsure about going pro yet originally appeared on NBC Sports Bay Area

The San Jose Sharks interviewed Macklin Celebrini on Monday.

The Sharks and Celebrini are both in Buffalo in preparation for the 2024 Draft Combine. San Jose is expected to make Celebrini the first overall pick of the 2024 Draft on June 28 in Las Vegas.

"The interview was good, it went really well," Celebrini told NHL.com. "It was great to kind of meet everyone. It was a great experience. I've heard a lot about (Sharks) general manager Mike Grier while at Boston University, so it was good to finally meet him."

According to NHL.com, the Sharks are one of Celebrini's seven interviews at the Combine. The Boston University center says he hasn't decided if he's going to turn pro yet.

"I haven't made up my mind yet," Celebrini told NHL.com. "That's a decision that I'm going to make a little bit later. I wish I could tell you I've made up my mind, because that'd be a lot easier."

That may well be the case, but it's still San Jose Hockey Now's expectation, after speaking with a variety of sources in and around the Celebrini circle, that he will forgo his college eligibility to play for the San Jose Sharks next year.




Rams rookie Blake Corum hits the ground running during offseason workouts – Yahoo Sports

Posted: at 8:49 am

He was a two-time All-American at Michigan and the offensive most valuable player in the Wolverines' national championship game victory over Washington.

So running back Blake Corum had pedigree when the Rams selected him in the third round of the NFL draft.

Yet Corum has gone through the Rams offseason program with an attitude that tilted more neophyte than seasoned performer.

"You just have to go in with a humble heart, and the mindset of, 'I don't know anything,'" Corum said Tuesday after practice, adding, "I'm going to grow from the good I do and whatever bad I do. I'm never going to stop growing, so it was easy for me to come in and basically start over."

The Rams selected the 5-foot-8, 210-pound Corum to complement third-year pro Kyren Williams, who made the Pro Bowl last season after rushing for 1,144 yards and scoring 15 touchdowns for a team that finished 10-7 and made the playoffs.

With Williams sidelined for voluntary offseason workouts because of a foot injury, Corum is taking advantage of increased reps as he learns the offense.


"I've seen a very mature rookie," coach Sean McVay said, adding, "I love his mental makeup, love the way he handles himself as a person and how locked in and focused he is."

Corum is part of a position group that includes Williams, Ronnie Rivers, second-year pro Zach Evans and recently signed veteran Boston Scott. Each has displayed talent and growth in the meeting room and on the field during the offseason program, running backs coach Ron Gould said.

Upon Corum's arrival in Thousand Oaks, Gould sat down with the rookie and urged him not to be too hard on himself if he committed mistakes. The worst thing Corum could do, Gould said, was put pressure on himself.

"He hasn't done that up to this point," Gould said. "He's taken everything in stride. He's a learner, and I see him growing every single day."

Corum used a tough running style to rush for 3,737 yards and 58 touchdowns during four seasons at Michigan.

After selecting Florida State edge rusher Jared Verse in the first round of the draft, and trading up to pick Seminoles defensive tackle Braden Fiske in the second, the Rams chose Corum with the 83rd pick.

"I went into the draft just planning on going on Day 2," Corum said. "I didn't know if it would be the second or third [round] for me. I just wanted to go to the right situation."

"When I got the call from the Rams I was like, 'You know what? This is the right situation.'"

McVay, Gould, offensive coordinator Mike LaFleur and the other running backs have eased his transition to the pros, Corum said.

"I thought I had a great family at Michigan," Corum said. "We built a great culture there, and I didn't know what to expect coming to the NFL, where it is a business now. But it feels like family."

After the Rams conclude workouts next week, Corum said he would leave Southern California briefly to conduct football camps in Michigan and home-state Virginia. He will return quickly to prepare for training camp, which begins in late July.

"Maybe go down to Malibu, check out a beach or two," he said. "But other than that, I'm going to be spacing my time wisely, taking care of my body, training every day to make sure I'm in shape for camp and staying in the playbook."

McVay provided no update on quarterback Matthew Stafford's contract situation. Stafford is signed through 2026, but during the NFL draft McVay acknowledged a report that the 15-year veteran wants the deal adjusted. Stafford has participated in the offseason program and will participate in final workouts next week, McVay said. Linebacker Ernest Jones IV has participated in the offseason program but not on-field team workouts because of a knee issue, McVay said.


This story originally appeared in Los Angeles Times.



Employees claim OpenAI, Google ignoring risks of AI and should give them ‘right to warn’ public – New York Post

Posted: at 8:48 am

A group of AI whistleblowers claim tech giants like Google and ChatGPT creator OpenAI are locked in a reckless race to develop technology that could endanger humanity, and demanded a "right to warn" the public in an open letter Tuesday.

Signed by current and former employees of OpenAI, Google DeepMind and Anthropic, the open letter cautioned that AI companies have strong financial incentives to avoid effective oversight and cited a lack of federal rules on developing advanced AI.

The workers point to potential risks including the spread of misinformation, worsening inequality and even loss of control of autonomous AI systems, potentially resulting in human extinction, especially as OpenAI and other firms pursue so-called artificial general intelligence, with capacities on par with or surpassing the human mind.

"Companies are racing to develop and deploy ever more powerful artificial intelligence, disregarding the risks and impact of AI," former OpenAI employee Daniel Kokotajlo, one of the letter's organizers, said in a statement. "I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence."

"They and others have bought into the 'move fast and break things' approach, and that is the opposite of what is needed for technology this powerful and this poorly understood," Kokotajlo added.

Kokotajlo, who joined OpenAI in 2022 as a researcher focused on charting AI advancements before leaving in April, has placed the probability that advanced AI will destroy or severely harm humanity in the future at a whopping 70%, according to The New York Times, which first reported on the letter.

He believes there's a 50% chance that researchers will achieve artificial general intelligence by 2027.

The letter drew endorsements from two prominent experts known as the "Godfathers of AI": Geoffrey Hinton, who warned last year that the threat of rogue AI was more urgent to humanity than climate change, and Canadian computer scientist Yoshua Bengio. Famed British AI researcher Stuart Russell also backed the letter.

The letter asks AI giants to commit to four principles designed to boost transparency and protect whistleblowers who speak out publicly.

Those include an agreement not to retaliate against employees who speak out about safety concerns and to support an anonymous system for whistleblowers to alert the public and regulators about risks.

The AI firms are also asked to allow a culture of open criticism so long as no trade secrets are disclosed, and pledge not to enter into or enforce non-disparagement agreements or non-disclosure agreements.

As of Tuesday morning, the letter's signers include a total of 13 AI workers. Of that total, 11 are formerly or currently employed by OpenAI, including Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler.

"There should be ways to share information about risks with independent experts, governments, and the public," said Saunders. "Today, the people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak because of possible retaliation and overly broad confidentiality agreements."

Other signers included former Google DeepMind employee Ramana Kumar and current employee Neel Nanda, who formerly worked at Anthropic.

When reached for comment, an OpenAI spokesperson said the company has a proven track record of not releasing AI products until necessary safeguards were in place.

"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," OpenAI said in a statement.

"We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world," the company added.

Google and Anthropic did not immediately return requests for comment.

The letter was published just days after revelations that OpenAI had dissolved its "Superalignment" safety team, whose responsibilities included creating safety measures for artificial general intelligence (AGI) systems that could lead to "the disempowerment of humanity or even human extinction."


Two OpenAI executives who led the team, co-founder Ilya Sutskever and Jan Leike, have since resigned from the company. Leike blasted the firm on his way out the door, claiming that safety had "taken a backseat to shiny products."

Elsewhere, former OpenAI board member Helen Toner, who was part of the group that briefly succeeded in ousting Sam Altman as the firm's CEO last year, alleged that he had repeatedly lied during her tenure.

Toner claimed that she and other board members did not learn about ChatGPT's launch in November 2022 from Altman, and instead found out about its debut on Twitter.

OpenAI has since established a new safety oversight committee that includes Altman as it begins training the new version of the AI model that powers ChatGPT.

The company pushed back on Toner's allegations, noting that an outside review had determined that safety concerns were not a factor in Altman's removal.



Former OpenAI researcher foresees AGI reality in 2027 – Cointelegraph

Posted: at 8:48 am

Leopold Aschenbrenner, a former safety researcher at ChatGPT creator OpenAI, has doubled down on artificial general intelligence (AGI) in his newest essay series on artificial intelligence.

Dubbed "Situational Awareness," the series offers a glimpse of the state of AI systems and their promising potential in the next decade. The full series of essays is collected in a 165-page PDF file updated on June 4.

In the essays, the researcher paid specific attention to AGI, a type of AI that matches or surpasses human capabilities across a wide range of cognitive tasks. AGI is one of many different types of artificial intelligence, including artificial narrow intelligence (ANI) and artificial superintelligence (ASI).

"AGI by 2027 is strikingly plausible," Aschenbrenner declared, predicting that AGI machines will outpace college graduates by 2025 or 2026.

According to Aschenbrenner, AI systems could potentially possess intellectual capabilities comparable to those of a professional computer scientist. He also made another bold prediction: that AI labs would be able to train general-purpose language models within minutes.

Predicting the success of AGI, Aschenbrenner called on the community to face its reality. According to the researcher, the smartest people in the AI industry have converged on a perspective he calls "AGI realism," which is based on three foundational principles tied to the national security and AI development of the United States.


Aschenbrenner's AGI series comes after he was reportedly fired for allegedly leaking information from OpenAI. Aschenbrenner was reportedly an ally of OpenAI chief scientist Ilya Sutskever, who participated in a failed effort to oust OpenAI CEO Sam Altman in 2023. Aschenbrenner's latest series is dedicated to Sutskever.

Aschenbrenner also recently founded an investment firm focused on AGI, with anchor investments from figures like Stripe CEO Patrick Collison, according to his blog.




What aren’t the OpenAI whistleblowers saying? – Platformer

Posted: at 8:48 am

Eleven current and former employees of OpenAI, along with two more from Google DeepMind, posted an open letter today stating that they are unable to voice concerns about risks created by their employers due to confidentiality agreements. Today let's talk about what they said, what they left out, and why lately the AI safety conversation feels like it's going nowhere.

Here's a dynamic we've seen play out a few times now at companies including Meta, Google, and Twitter. First, in a bid to address potential harms created by their platforms, companies hire idealistic workers and charge them with building safeguards into their systems. For a while, the work of these teams gets prioritized. But over time, executives' enthusiasm wanes, commercial incentives take over, and the team is gradually de-funded.

When those roadblocks go up, some of the idealistic employees will speak out, either to a reporter like me, or via the sort of open letter that the AI workers published today. And the company responds by reorganizing the team out of existence, while putting out a statement saying that whatever that team used to work on is now everyone's responsibility.

At Meta, this process gave us the whistleblower Frances Haugen. On Google's AI ethics team, a slightly different version of the story played out after the firing of researcher Timnit Gebru. And in 2024, the story came to the AI industry.

OpenAI arguably set itself up for this moment more than those other tech giants. After all, it was established not as a traditional for-profit enterprise, but as a nonprofit research lab devoted to safely building an artificial general intelligence.

OpenAI's status as a relatively obscure nonprofit changed forever in November 2022. That's when it released ChatGPT, a chatbot based on the latest version of its large language model, which by some estimates soon became the fastest-growing consumer product in history.

ChatGPT took a technology that had been exclusively the province of nerds and put it in the hands of everyone from elementary school children to state-backed foreign influence operations. And OpenAI soon barely resembled the nonprofit that was founded out of a fear that AI poses an existential risk to humanity.

This OpenAI placed a premium on speed. It pushed the frontier forward with tools like plugins, which connected ChatGPT to the wider internet. It aggressively courted developers. Less than a year after ChatGPT's release, the company, a for-profit subsidiary of its nonprofit parent, was valued at $90 billion.

That transformation, led by CEO Sam Altman, gave many in the company whiplash. And it was at the heart of the tensions that led the nonprofit board to fire Altman last year, for reasons related to governance.

The five-day interregnum between Altman's firing and his return marked a pivotal moment for the company. The board could have recommitted to its original vision of slow, cautious development of powerful AI systems. Or it could endorse the post-ChatGPT version of OpenAI, which closely resembled a traditional Silicon Valley venture-backed startup.

Almost immediately, it became clear that a vast majority of employees preferred working at a more traditional startup. Among other things, that startup's commercial prospects meant that their (unusual) equity in the company would be worth millions of dollars. The vast majority of OpenAI employees threatened to quit if Altman didn't return.

And so Altman returned. Most of the old board left. New, more business-minded board members replaced them. And that board has stood by Altman in the months that followed, even as questions mount about his complex business dealings and conflicts of interest.

Most employees seem content under the new regime; positions at OpenAI are still highly sought after. But like Meta and Google before it, OpenAI had its share of conscientious objectors. And increasingly, we're hearing what they think.

The latest wave began last month when OpenAI co-founder Ilya Sutskever, who initially backed Altman's firing and who had focused on AI safety efforts, quit the company. He was followed out the door by Jan Leike, who led the superalignment team, and a handful of other employees who worked on safety.

Then on Tuesday a new group of whistleblowers came forward to complain. Here's handsome podcaster Kevin Roose in the New York Times:

They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.

"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former researcher in OpenAI's governance division and one of the group's organizers.

Anyone looking for jaw-dropping allegations from the whistleblowers will likely leave disappointed. Kokotajlo's sole specific complaint in the article is that some employees believed Microsoft had released a new version of GPT-4 in Bing without proper testing; Microsoft denies that this happened.

But the accompanying letter offers one possible explanation for why the charges feel so thin: employees are forbidden from saying more by various agreements they signed as a condition of working at the company. (The company has said it is removing some of the more onerous language from its agreements, after Vox reported on them last month.)

"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," an OpenAI spokeswoman told the Times. "We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."

The company also created a whistleblower hotline for employees to anonymously voice their concerns.

So how should we think about this letter?

I imagine that it will be a Rorschach test for whoever reads it, and what they see will depend on what they think of the AI safety movement in general.

For those who believe that AI poses existential risk, I imagine this letter will provide welcome evidence that at least some employees inside the big AI makers are taking those risks seriously. And for those who don't, I imagine it will provide more ammunition for the argument that the AI doomers are once again warning about dire outcomes without providing any compelling evidence for their beliefs.

As a journalist, I find myself naturally sympathetic to people inside companies who warn about problems that haven't happened yet. Journalism often serves a similar purpose, and every once in a while, it can help prevent those problems from occurring. (This can often make the reporter look foolish, since they spent all that time warning about a scenario that never unfolded, but that's a subject for another day.)

At the same time, there's no doubt that the AI safety argument has begun to feel a bit tedious over the past year, when the harms caused by large language models have been funnier than they have been terrifying. Last week, when OpenAI put out the first account of how its products are being used in covert influence operations, there simply wasn't much there to report.

We've seen plenty of problematic misuse of AI, particularly deepfakes in elections and in schools. (And of women in general.) And yet people who sign letters like the one released today fail to connect high-level hand-wringing about their companies to the products and policy decisions that their companies make. Instead, they speak through opaque open letters that have surprisingly little to say about what safe development might actually look like in practice.

For a more complete view of the problem, I preferred another (and much longer) piece of writing that came out Tuesday. Leopold Aschenbrenner, who worked on OpenAI's superalignment team and was reportedly fired for leaking in April, published a 165-page paper today laying out a path from GPT-4 to superintelligence, the dangers it poses, and the challenge of aligning that intelligence with human intentions.

We've heard a lot of this before, and the hypotheses remain as untestable (for now) as they always have. But I find it difficult to read the paper and not come away believing that AI companies ought to prioritize alignment research, and that current and former employees ought to be able to talk about the risks they are seeing.

"Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered," Aschenbrenner concludes. "As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty."

And if those who feel the weight of what is coming work for an AI company, it seems important that they be able to talk about what they're seeing now, and in the open.




What Ever Happened to the AI Apocalypse? – New York Magazine

Posted: at 8:48 am


For a few years now, lots of people have been wondering what Sam Altman thinks about the future, or perhaps what he knows about it, as the CEO of OpenAI, the company that kicked off the recent AI boom. He's been happy to tell them about the end of the world. "If this technology goes wrong, it can go quite wrong," he told a Senate committee in May 2023. "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," he said last June. "A misaligned superintelligent AGI could cause grievous harm to the world," he wrote in a blog post on OpenAI's website that year.

Before the success of ChatGPT thrust him into the spotlight, he was even less circumspect. "AI will probably, like, most likely lead to the end of the world, but in the meantime, there'll be great companies," he cracked during an interview in 2015. "Probably AI will kill us all," he joked at an event in New Zealand around the same time; soon thereafter, he would tell a New Yorker reporter about his plans to flee there with friend Peter Thiel in the event of an apocalyptic event (either there or a big patch of land in Big Sur he could fly to). Then Altman wrote on his personal blog that superhuman machine intelligence is "probably the greatest threat to the continued existence of humanity." Returning, again, to last year: "The bad case, and I think this is important to say, is, like, lights out for all of us." He wasn't alone in expressing such sentiments. In his capacity as CEO of OpenAI, he signed his name to a group statement arguing that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," alongside a range of people in and interested in AI, including notable figures at Google, OpenAI, Microsoft, and xAI.

The tech industry's next big thing might be a doomsday machine, according to the tech industry, and the race is on to summon a technology that might end the world. It's a strange mixed message, to say the least, but it's hard to overstate how thoroughly the apocalypse, invoked as a serious worry or a reflexive aside, has permeated the mainstream discourse around AI. Unorthodox thinkers and philosophers have seen longstanding theories and concerns about superintelligence get mainstream consideration. But the end of the world has also become product-event material, fundraising fodder. In discussions about artificial intelligence, acknowledging the outside chance of ending human civilization has come to resemble a tic. On AI-startup websites, the prospect of human annihilation appears as boilerplate.

In the last few months, though, companies including OpenAI have started telling a slightly different story. After years of warning about infinite downside risk, and acting as though they had no choice but to take it, they're focusing on the positive. The doomsday machine we're working on? Actually, it's a powerful enterprise software platform. From the Financial Times:

The San Francisco-based company said on Tuesday that it had started producing a new AI system "to bring us to the next level of capabilities" and that its development would be overseen by a new safety and security committee.

But while OpenAI is racing ahead with AI development, a senior OpenAI executive seemed to backtrack on previous comments by its chief executive Sam Altman that it was ultimately aiming to build a superintelligence far more advanced than humans.

Anna Makanju, OpenAI's vice-president of global affairs, told the Financial Times in an interview that its mission was to build artificial general intelligence capable of "cognitive tasks that are what a human could do today."

"Our mission is to build AGI; I would not say our mission is to build superintelligence," Makanju said.

The story also notes that in November, in the context of seeking more money from OpenAI partner Microsoft, Altman said he was spending a lot of time thinking about how to build superintelligence, but also, more gently, that his company's core product was, rather than a fearsome self-replicating software organism with unpredictable emergent traits, a form of "magic intelligence in the sky."

Shortly after that statement, Altman would be temporarily ousted from OpenAI by a board that deemed him not sufficiently candid, a move that triggered external speculation that a major AI breakthrough had spooked safety-minded members. (More recent public statements from former board members were forceful, but personal, accusing Altman of a pattern of lying and manipulation.)

After his return, Altman consolidated his control of the company, and some of his internal antagonists left or were pushed out. OpenAI then dissolved the team charged with achieving "superalignment" (in the company's words, managing risks that could lead to "the disempowerment of humanity or even human extinction") and replaced it with a new safety team run by Altman himself, who also stood accused of voice theft by Scarlett Johansson. Its safety announcement was terse and notably lacking in evocative doomsaying. "This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations," the company said. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment." It's the sort of careful, vague corporate language you might expect from a company that's comprehensively dependent on one tech giant (Microsoft) and is closing in on a massive licensing deal with its competitor (Apple).

In other news, longtime AI doomsayer Elon Musk, who co-founded OpenAI but split with the firm and later (incoherently and perhaps disingenuously) sued it for abandoning its nonprofit mission in pursuit of profit, raised $6 billion for his unapologetically for-profit competitor xAI. His grave public warnings about superintelligence now take the form of occasional X posts about memes.

There are a few different ways to process this shift. If you're deeply worried about runaway AI, this is just a short horror story in which a superintelligence is manifesting itself right in front of our eyes, helped along by the few who both knew better and were in any sort of position to stop it, in some sort of short-sighted exchange for wealth. What's happened so far is basically compatible with your broad prediction and well-articulated warnings that far predated the current AI boom: all it took for mankind to summon a vengeful machine god was the promise of ungodly sums of money.

Similarly, if you believe in and are excited about runaway AI, this is all basically great. The system is working, the singularity is effectively already here, and failed attempts to alter or slow AI development were, in fact, near misses with another sort of disaster (this perspective exists among at least a few people at OpenAI).

If you're more skeptical of AI-doomsday predictions, you might generously credit this shift to a gradual realization among industry leaders that current generative-AI technology, now receiving hundreds of billions of dollars of investment and deployed in the wild at scale, is not careening toward superintelligence, consciousness, or rogue malice. They're simply adjusting their story to fit the facts of what they're seeing.

Or maybe, for at least some in the industry, apocalyptic stories were plausible in the abstract, compelling, attention-grabbing, and interesting to talk about, and turned out to be useful marketing devices. They were stories that dovetailed nicely with the concerns of some of the domain experts the companies needed to hire, but seemed like harmless and ultimately cautious intellectual exercises to domain experts who didn't share them (Altman, it should be noted, is an investor and executive, not a machine-learning engineer or AI researcher). Apocalyptic warnings were an incredible framing device for a class of companies that needed to raise enormous amounts of money to function: a clever and effective way to make an almost cartoonishly brazen proposal to investors (we are the best investment of all time, with infinite upside) in the disarming passive voice, as concerned observers with inside knowledge of an unstoppable trend and an ability to accept capital. Routine acknowledgments of abstract danger were also useful for feigning openness to theoretical regulation (help us help you avoid the end of the world!) while fighting material regulation in private. They raised the stakes to intoxicating heights.

As soon as AI companies made actual contact with users, clients, and the general public, though, this apocalyptic framing flipped into a liability. It suggested risk where risk wasn't immediately evident. In a world where millions of people engage casually with chatbots, where every piece of software suddenly contains an awkward AI assistant, and where Google is pumping AI content into search pages for hundreds of millions of users to see and occasionally laugh at, the AI apocalypse can, somewhat counterintuitively, feel a bit like a non sequitur. Encounters with modern chatbots and LLM-powered software might cause users to wonder about their jobs, or trigger a general sense of wonder or unease about the future; they do not, in their current state, seem to strike fear in users' hearts. Mostly, they're showing up as new features in old software used at work.

The AI industry's sudden disinterest in the end of the world might also be understood as an exaggerated version of corporate America's broader turn away from talking about ESG and DEI: as profit-driven, sure, but also as evidence that initial commitments to mitigating harmful externalities were themselves disingenuous and profit-motivated at the time, and simply outlived their usefulness as marketing stories. It signals a loss of narrative control. In 2022, OpenAI could frame the future however it wanted. In 2024, it's dealing with external expectations about the present, from partners and investors that are less interested in speculating about the future of mankind, or conceptualizing intelligence, than in getting returns on their considerable investments, preferably within the fiscal year.

Again, none of this is particularly comforting if you think that Altman and Musk were right to warn about ending the world, even by accident, even out of craven self-interest, or if you're concerned about the merely very bad externalities, the many small apocalypses, that AI deployment is already producing and is likely to produce.

But AI's sudden rhetorical downgrade might be clarifying, too, at least about the behaviors of the largest firms and their leaders. If OpenAI starts communicating more like a company, it will be less tempting to mistake it for something else, as it argues for the imminence of a benign but barely less speculative variation of AGI, with its softer implication of infinite returns by way of semi-apocalyptic workplace automation. If its current leadership ever believed what they were saying, they're certainly not acting like it, and in hindsight, they never really were. The apocalypse was just another pitch. Let it be a warning about the next one.


