Daily Archives: February 15, 2022

Language Is The Next Great Frontier In AI – Forbes

Posted: February 15, 2022 at 5:09 am

Johannes Gutenberg's printing press, introduced in the fifteenth century, transformed society through language. The creation of machines that can understand language may have an even greater impact.

Language is the cornerstone of human intelligence.

The emergence of language was the most important intellectual development in our species' history. It is through language that we formulate thoughts and communicate them to one another. Language enables us to reason abstractly, to develop complex ideas about what the world is and could be, and to build on these ideas across generations and geographies. Almost nothing about modern civilization would be possible without language.

Building machines that can understand language has thus been a central goal of the field of artificial intelligence dating back to its earliest days.

It has proven maddeningly elusive.

This is because mastering language is what is known as an AI-complete problem: that is, an AI that can truly understand language the way a human can would by implication be capable of any other human-level intellectual activity. Put simply, to solve language is to solve AI.

This profound and subtle insight is at the heart of the Turing test, introduced by AI pioneer Alan Turing in a groundbreaking 1950 paper. Though often critiqued or misunderstood, the Turing test captures a fundamental reality about language and intelligence; as it approaches its 75th birthday, it remains as relevant as it was when Turing first conceived it.

Humanity has yet to build a machine intelligence with human-level mastery of language. (In other words, no machine intelligence has yet passed the Turing test.) But over the past few years researchers have achieved startling, game-changing breakthroughs in language AI, also called natural language processing (NLP).

The technology is now at a critical inflection point, poised to make the leap from academic research to widespread real-world adoption. In the process, broad swaths of the business world and our daily lives will be transformed. Given language's ubiquity, few areas of technology will have a more far-reaching impact on society in the years ahead.

The most powerful way to illustrate the capabilities of today's cutting-edge language AI is to start with a few concrete examples.

Today's AI can correctly answer complex medical queries, and explain the underlying biological mechanisms at play. It can craft nuanced memos about how to run effective board meetings. It can write articles analyzing its own capabilities and limitations, while convincingly pretending to be a human observer. It can produce original, sometimes beautiful, poetry and literature.

(It is worth taking a few moments to inspect these examples yourself.)

What is behind these astonishing new AI abilities, which just five years ago would have been inconceivable?

In short: the invention of the transformer, a new neural network architecture that has unleashed vast new possibilities in AI.

A group of Google researchers introduced the transformer in late 2017 in a now-classic research paper.

Before transformers, the state of the art in NLP (for instance, LSTMs and the widely used Seq2Seq architecture) was based on recurrent neural networks. By definition, recurrent neural networks process data sequentially: that is, one word at a time, in the order that the words appear.

The transformer's great innovation is to make language processing parallelized, meaning that all the tokens in a given body of text are analyzed at the same time rather than in sequence. In order to support this parallelization, transformers rely on an AI mechanism known as attention. Attention enables a model to consider the relationships between words, even if they are far apart in a text, and to determine which words and phrases in a passage are most important to pay attention to.
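
To make the idea concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer. It is a toy illustration in NumPy, not the implementation used by any of the models discussed here; the sentence length, dimensions, and values are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every token attends to every other token at once."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # similarity of each token to every other token
    weights = softmax(scores, axis=-1)    # how much attention each token pays to the rest
    return weights @ V                    # context-aware representation of each token

# Four tokens, each represented by a 3-dimensional vector (values are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
# In a real transformer, Q, K and V are learned linear projections of X;
# here we reuse X itself to keep the sketch short.
output = attention(X, X, X)
print(output.shape)  # (4, 3): one updated vector per token, computed in parallel
```

Because the entire scores matrix is produced in a single matrix multiplication, no token has to wait for the tokens before it, which is exactly the parallelism described above.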

Parallelization also makes transformers vastly more computationally efficient than RNNs, meaning that they can be trained on larger datasets and built with more parameters. One defining characteristic of today's transformer models is their massive size.

A flurry of innovation followed in the wake of the original transformer paper as the world's leading AI researchers built upon this foundational breakthrough.

The publication of the landmark transformer model BERT came in 2018. Created at Google, BERT's big conceptual advance is its bidirectional structure (the B in BERT stands for bidirectional). The model looks in both directions as it analyzes a given word, considering both the words that come before and the words that come after, rather than working unidirectionally from left to right. This additional context allows for richer, more nuanced language modeling.
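
A quick way to see this bidirectional context in action is the masked-word task BERT was trained on. The sketch below assumes the open-source Hugging Face transformers library and its standard fill-mask pipeline; the example sentence is invented.

```python
from transformers import pipeline

# Downloads a pre-trained BERT checkpoint the first time it runs.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT uses the words on BOTH sides of the blank to guess what is missing.
for prediction in unmasker("The doctor prescribed a [MASK] to treat the infection."):
    print(prediction["token_str"], round(prediction["score"], 3))
```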

BERT remains one of the most important transformer-based models in use, frequently treated as a reference against which newer models are compared. Much subsequent research on transformers (for instance, Facebook's influential RoBERTa model, released in 2019) is based on refining BERT.

Google's entire search engine today is powered by BERT, one of the most far-reaching examples of transformers' real-world impact.

Another core vein of research in the world of transformers is OpenAI's family of GPT models. OpenAI published the original GPT in June 2018, GPT-2 in February 2019, and GPT-3 in May 2020. Popular open-source versions of these models, like GPT-J and GPT-Neo, have followed.

As the G in their names indicates, the GPT models are generative: they generate original text output in response to the text input they are fed. This is an important distinction between the GPT class of models and the BERT class of models. BERT, unlike GPT, does not generate new text but instead analyzes existing text (think of activities like search, classification, or sentiment analysis).
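
The contrast is easy to demonstrate. The sketch below again assumes the Hugging Face transformers library; it uses the freely available GPT-2 checkpoint (not GPT-3, which is served only through OpenAI's API) and an invented prompt.

```python
from transformers import pipeline

# A generative model: given a prompt, it writes a continuation.
generator = pipeline("text-generation", model="gpt2")
result = generator("Language is the cornerstone of human intelligence because",
                   max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```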

GPT's generative capabilities make these models particularly attention-grabbing, since writing appears to be a creative act and the output can be astonishingly human-like. Text generation is sometimes referred to as NLP's party trick. (All four of the examples linked to above are text generation examples from GPT-3.)

Perhaps the most noteworthy element of the GPT architecture is its sheer size. OpenAI has been intentional and transparent about its strategy to pursue more advanced language AI capabilities through raw scale above all else: more compute, larger training data corpora, larger models.

With 1.5 billion parameters, GPT-2 was the largest model ever built at the time of its release. Published less than a year later, GPT-3 was two orders of magnitude larger: a whopping 175 billion parameters. Rumors have circulated that GPT-4 will contain on the order of 100 trillion parameters (perhaps not coincidentally, roughly equivalent to the number of synapses in the human brain). As a point of comparison, the largest BERT model had 340 million parameters.

As with any machine learning effort today, the performance of these models depends above all on the data on which they are trained.

Today's transformer-based models learn language by ingesting essentially the entire internet. BERT was fed all of Wikipedia (along with the digitized texts of thousands of unpublished books). RoBERTa improved upon BERT by training on even larger volumes of text from the internet. GPT-3's training dataset was larger still, consisting of half a trillion language tokens. Thus, these models' linguistic outputs and behaviors can ultimately be traced to the statistical patterns in all the text that humans have previously published online.

The reason such large training datasets are possible is that transformers use self-supervised learning, meaning that they learn from unlabeled data. This is a crucial difference between today's cutting-edge language AI models and the previous generation of NLP models, which had to be trained with labeled data. Today's self-supervised models can train on far larger datasets than was ever previously possible: after all, there is more unlabeled text data than labeled text data in the world by many orders of magnitude.
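
The key point is that the "labels" are manufactured from the raw text itself, so no human annotation is needed. Below is a minimal sketch of how masked-language-model training pairs can be generated from an unlabeled sentence; the masking rate and sentence are illustrative assumptions, not the exact recipe used by BERT or any other published model.

```python
import random

def make_masked_example(sentence, mask_rate=0.15, mask_token="[MASK]"):
    """Turn one unlabeled sentence into a (masked input, target words) training pair."""
    tokens = sentence.split()
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            targets[i] = tok           # the model must recover the original word...
            masked.append(mask_token)  # ...from the surrounding context
        else:
            masked.append(tok)
    return " ".join(masked), targets

random.seed(1)
text = "language enables us to reason abstractly and build on ideas across generations"
masked_input, targets = make_masked_example(text)
print(masked_input)
print(targets)  # {position: original word} -- labels generated from the text itself
```

Every sentence on the internet can be turned into training signal this way, which is why these datasets can be so much larger than anything hand-labeled.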

Some observers point to self-supervised learning, and the vastly larger training datasets that this technique unlocks, as the single most important driver of NLP's dramatic performance gains in recent years, more so than any other feature of the transformer architecture.

Training models on massive datasets with millions or billions of parameters requires vast computational resources and engineering know-how. This makes large language models prohibitively costly and difficult to build. GPT-3, for example, required several thousand petaflop/second-days to train, a staggering amount of computational resources.

Because very few organizations in the world have the resources and talent to build large language models from scratch, almost all cutting-edge NLP models today are adapted from a small handful of base models: e.g., BERT, RoBERTa, GPT-2, BART. Almost without exception, these models come from the world's largest tech companies: Google, Facebook, OpenAI (which is bankrolled by Microsoft), Nvidia.

Without anyone quite planning for it, this has resulted in an entirely new paradigm for NLP technology development, one that will have profound implications for the nascent AI economy.

This paradigm can be thought of in two basic phases: pre-training and fine-tuning.

In the first phase, a tech giant creates and open-sources a large language model: for instance, Google's BERT or Facebook's RoBERTa.

Unlike in previous generations of NLP, in which models had to be built for individual language tasks, these massive models are not specialized for any particular activity. They have powerful generalized language capabilities across functions and topic areas. Out of the box, they perform well at the full gamut of activities that comprise linguistic competence: language classification, language translation, search, question answering, summarization, text generation, conversation. Each of these activities on its own presents compelling technological and economic opportunities.

Because they can be adapted to any number of specific end uses, these base models are referred to as pre-trained.

In the second phase, downstream users (young startups, academic researchers, anyone else who wants to build an NLP model) take these pre-trained models and refine them with a small amount of additional training data in order to optimize them for their own specific use case or market. This step is referred to as fine-tuning.
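
As a rough illustration of what that second phase looks like in practice, here is a minimal fine-tuning sketch. It assumes the Hugging Face transformers library and PyTorch, uses the small bert-base-uncased checkpoint rather than a true foundation-scale model, and trains on a two-example, made-up sentiment dataset purely to show the workflow.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# 1. Load a pre-trained model created (and paid for) by someone else.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# 2. A tiny, hypothetical labeled dataset for the downstream task.
texts = ["Great product, works as advertised.", "Terrible, broke after one day."]
labels = torch.tensor([1, 0])

# 3. Fine-tune: a few gradient steps on task-specific data adjust the
#    pre-trained parameters rather than learning language from scratch.
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a handful of passes is often enough after pre-training
    outputs = model(**inputs, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The point is the division of labor: the expensive pre-training has already been done upstream, and the downstream team only performs these comparatively cheap final training steps.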

"Today's pre-trained models are incredibly powerful, and even more importantly, they are publicly available," said Yinhan Liu, lead author on Facebook's RoBERTa work and now cofounder/CTO of healthcare NLP startup BirchAI. "For those teams that have the know-how to operationalize transformers, the question becomes: what is the most important or impactful use case to which I can apply this technology?"

Under this pre-train-then-fine-tune paradigm, the heavy lifting is done upfront with the creation of the pre-trained model. Even after fine-tuning, the end model's behavior remains largely dictated by the pre-trained model's parameters.

This makes these pre-trained models incredibly influential. So influential, in fact, that Stanford University has recently coined a new name for them, foundation models, and launched an entire academic program devoted to better understanding them: the Center for Research on Foundation Models (CRFM). The Stanford team believes that foundation models, and the small group of tech giants that have the resources to produce them, will exert outsize influence on the future behavior of artificial intelligence around the world.

As the researchers put it: "Foundation models have led to an unprecedented level of homogenization: Almost all state-of-the-art NLP models are now adapted from one of a few foundation models. While this homogenization produces extremely high leverage (any improvements in the foundation models can lead to immediate benefits across all of NLP), it is also a liability; all AI systems might inherit the same problematic biases of a few foundation models."

This Stanford effort is drawing attention to a massive looming problem for large language models: social bias.

The source of social bias in AI models is straightforward to summarize but insidiously difficult to root out. Because large language models (or foundation models, to use the new branding) learn language by ingesting what humans have written online, they inevitably inherit the prejudices, false assumptions and harmful beliefs of their imperfect human progenitors. Just imagine all the fringe subreddits and bigoted blogs that must have been included in GPT-3's vast training data corpus.

The problem has been extensively documented: today's most prominent foundation models all exhibit racist, sexist, xenophobic, and other antisocial tendencies. This issue will only grow more acute as foundation models become increasingly influential in society. Some observers believe that AI bias will eventually become as prominent an issue for consumers, companies and governments as digital threats like data privacy or cybersecurity that have come before it: threats that were also not fully appreciated at first, because the breakneck pace of technological change outstripped society's ability to properly adapt to it.

There is no silver-bullet solution to the challenge of AI bias and toxicity. But as the problem becomes more widely recognized, a number of mitigation strategies are being pursued.

Last month, OpenAI announced that it had developed a new version of GPT-3 that is safer, more helpful, and more aligned with human values. The company used a technique known as reinforcement learning from human feedback to fine-tune its models to be less biased and more truthful than the original GPT-3. This new version, named InstructGPT, is now the default language model that OpenAI makes available to customers.

Historically, Alphabet's DeepMind has been an outlier among the world's most advanced AI research organizations for not making language AI a major focus area. This changed at the end of 2021, with DeepMind announcing a collection of important work on large language models.

Of the three NLP papers that DeepMind published, one is devoted entirely to the ethical and social risks of language AI. The paper proposes a comprehensive taxonomy of 6 thematic areas and 21 specific risks that language models pose, including discrimination, exclusion, toxicity and misinformation. DeepMind pledged to make these risks a central focus of its NLP research going forward to help ensure that it is pursuing innovation in language AI responsibly.

The fact that this dimension of language AI research (until recently treated as an afterthought or ignored altogether by most of the world's NLP researchers) featured so centrally in DeepMind's recent foray into language AI may be a signal of the field's shifting priorities moving forward.

Increased regulatory focus on the harms of bias and toxicity in AI models will only accelerate this shift. And make no mistake: regulatory action on this front is a matter of when, not if.

Interestingly, perhaps the most creative use cases for NLP today don't involve natural language at all. In particular, today's cutting-edge language AI technology is powering remarkable breakthroughs in two other domains: coding and biology.

Whether it's Python, Ruby, or Java, computer programming happens via languages. Just like natural languages such as English or Swahili, programming languages are symbolically represented, follow regular rules, and have a robust internal logic. The audience just happens to be software compilers rather than other humans.

It therefore makes sense that the same powerful new technologies that have given AI incredible fluency in natural language can likewise be applied to programming languages, with similar results.

Last summer OpenAI announced Codex, a transformer-based model that can write computer code astonishingly well. In parallel, GitHub (which is allied with OpenAI through its parent company Microsoft) launched a productized version of Codex named Copilot.

To develop Codex, OpenAI took GPT-3 and fine-tuned it on a massive volume of publicly available written code from GitHub.

Codex's design is simple: human users give it a plain-English description of a command or function, and Codex turns this description into functioning computer code. A user could input into Codex, for instance, "crop this image circularly" or "animate this image horizontally so that it bounces off the left and right walls," and Codex can produce a snippet of code to implement those actions. (These exact examples can be examined on OpenAI's website.) Codex is most capable in Python, but it is proficient in over a dozen programming languages.
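
To give a sense of the kind of output such a prompt produces, here is a hand-written sketch (not actual Codex output) of the "crop this image circularly" task, assuming the Pillow imaging library and a hypothetical input file name.

```python
from PIL import Image, ImageDraw

def crop_circular(path, out_path):
    """Crop an image to a circle by applying a circular alpha mask."""
    im = Image.open(path).convert("RGBA")
    size = min(im.size)
    im = im.crop((0, 0, size, size))        # square it off first
    mask = Image.new("L", (size, size), 0)  # black = fully transparent
    ImageDraw.Draw(mask).ellipse((0, 0, size, size), fill=255)  # white circle = keep
    im.putalpha(mask)
    im.save(out_path)

crop_circular("photo.png", "photo_circle.png")  # "photo.png" is a placeholder filename
```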

Then, just two weeks ago, DeepMind further advanced the frontiers of AI coding with its publication of AlphaCode.

AlphaCode is an AI system that can compete at a human level in programming competitions. In these competitions, which attract hundreds of thousands of participants each year, contestants receive a lengthy problem statement in English and must construct a complete computer program that solves it. Example problems include devising strategies for a custom board game or solving an arithmetic-based brain teaser.

While OpenAI's Codex can produce short snippets of code in response to concrete descriptions, DeepMind's AlphaCode goes much further. It begins to approach the full complexity of real-world programming: assessing an abstract problem without a clear solution, devising a structured approach to solving it, and then executing on that approach with up to hundreds of lines of code. AlphaCode almost seems to display that ever-elusive attribute in AI, high-level reasoning.

As DeepMind's AlphaCode team wrote: "Creating solutions to unforeseen problems is second nature in human intelligence, a result of critical thinking informed by experience. For artificial intelligence to help humanity, our systems need to be able to develop problem-solving capabilities. AlphaCode solves new problems in programming competitions that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding."

Another language in which todays cutting-edge NLP has begun to generate remarkable insights is biology, from genomics to proteins.

Genomics is well-suited to the application of large language models because an individual's entire genetic endowment is encoded in a simple four-letter alphabet: A (for adenine), C (for cytosine), G (for guanine), and T (for thymine). Every human's DNA is defined by a string of billions of As, Cs, Gs and Ts (known as nucleotides) in a particular order.
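
One common way to feed DNA to a language model is to tokenize a sequence into overlapping k-mers, much as text is split into words or subwords. The sketch below is a generic illustration; the k-mer size and sequence are arbitrary assumptions, not the tokenization used by any specific model mentioned here.

```python
def dna_to_kmers(sequence, k=6):
    """Split a DNA string into overlapping k-mer 'words' for a language model."""
    sequence = sequence.upper()
    assert set(sequence) <= {"A", "C", "G", "T"}, "DNA uses a four-letter alphabet"
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

tokens = dna_to_kmers("ATGCGTACGTTAGC")
print(tokens[:4])  # ['ATGCGT', 'TGCGTA', 'GCGTAC', 'CGTACG']
```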

In many respects DNA functions like a language, with its nucleotide sequences exhibiting regular patterns that resemble a kind of vocabulary, grammar, and semantics. What does this language say? It defines much about who we are, from our height to our eye color to our risk of heart disease or substance abuse.

Large language models are now making rapid progress in deciphering the language of DNA, in particular its noncoding regions. These noncoding regions do not contain genes but rather control genes: they regulate how much, when, and where given genes are expressed, giving them a central role in the maintenance of life. Noncoding regions comprise 98% of our total DNA but until now have remained poorly understood.

A few months ago, DeepMind introduced a new transformer-based architecture that can predict gene expression based on DNA sequence with unprecedented accuracy. It does so by considering interactions between genes and noncoding DNA sequences at much greater distances than was ever before possible. A team at Harvard completed work along similar lines to better understand gene expression in corn (fittingly naming their model CornBERT).

Another subfield of biology that represents fertile ground for language AI is the study of proteins. Proteins are strings of building blocks known as amino acids, linked together in a particular order. There are 20 amino acids in total. Thus, for all their complexity, proteins can be treated as tokenized strings, wherein each amino acid, like each word in a natural language, is a token, and analyzed accordingly.
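
The tokenization step for proteins is even simpler than for DNA, since each amino acid (written as a single letter) can serve directly as a token. A minimal sketch, using a made-up peptide and an illustrative vocabulary rather than the vocabulary of any published model:

```python
# The 20 standard amino acids, one letter each; the integer IDs are an
# illustrative choice, not the vocabulary of any particular protein model.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def protein_to_ids(sequence):
    """Treat each amino acid as a token and map it to an integer ID."""
    return [VOCAB[aa] for aa in sequence.upper()]

print(protein_to_ids("MKTAYIAK"))  # a short, made-up peptide
```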

As one example, an AI research team from Salesforce recently built an NLP model that learns the language of proteins and can generate plausible protein sequences that don't exist in nature, with prespecified characteristics. The potential applications of this type of controllable protein synthesis are tantalizing.

These efforts are just the beginning. In the months and years ahead, language AI will make profound contributions to our understanding of how life itself works.

Language is at the heart of human intelligence. It therefore is and must be at the heart of our efforts to build artificial intelligence. No sophisticated AI can exist without mastery of language.

Today, the field of language AI is at an exhilarating inflection point, on the cusp of transforming industries and spawning new multi-billion-dollar companies. At the same time, it is fraught with societal dangers like bias and toxicity that are only now starting to get the attention they deserve.

This article explored the big-picture developments and trends shaping the world of language AI today. In a follow-up article, we will canvass today's most exciting NLP startups. A growing group of NLP entrepreneurs is applying cutting-edge language AI in creative ways across sectors and use cases, generating massive economic value and profound industry disruption. Few startup categories hold more promise in the years ahead.

Stay tuned for Part 2 of this article, which will explore today's most promising NLP startups.

Note: The author is a Partner at Radical Ventures, which is an investor in BirchAI.


As James Harden exits to Philly, the Nets’ experiment ends in failure – Yahoo Sports


The Brooklyn experiment ended mercifully, almost abruptly. The long dalliance with the Philadelphia 76ers was finally consummated with a trade that could rank among the best in recent history considering the star power.

James Harden forced his way off the team after forcing his way to Brooklyn one year ago. Kyrie Irving keeps himself as a part-time player, which no doubt rankled Harden in the process. Kevin Durant, the man this revolves around, has watched his grand plan blow up in smoke, and was no doubt weary of Harden no longer being onboard.

Getting Ben Simmons (depending on whether Simmons' mind, body and soul are right) seems like the best possible consolation prize, and make no mistake, the Nets don't make this move without Durant's blessing or urging.

They would've been better off choosing the Knicks. Perhaps it would've been just as predictable and combustible, but at least it would've been memorable and under the backdrop of Madison Square Garden.

Scary hours? Meet nightmare on Broad Street.

Championships, fun times and reformed reputations were supposed to be how this played out. Instead, there was no depth or consistency and flaky characters playing true to their histories.

This venture ended with Nets coach Steve Nash publicly claiming till the last minute Harden would be a Net, even though Harden seemed to have long checked out of this franchise. It's hard to pinpoint exactly when it all went downhill, but we can't point to the high moment because there isn't one.

The Brooklyn Nets reportedly traded James Harden to the Philadelphia 76ers on Thursday before the NBA trade deadline. (Steven Ryan/Getty Images)

But the commonality seems to be, when someone is ready to leave the party, everyone knows it long before the music stops.

Harden was arguably one day away from furniture moving in the locker room in Houston before being sent to Brooklyn, and his hamstring has tightened up at precisely the right time this year, exactly when whatever joy he's played with has been replaced by a forlorn look of disengagement and desperation.

He stomped his feet, grabbed his hammy and went home. Even if his discontent came from a sliding scale of rule application concerning Irving's in-out-in status, he benefited from that power structure many times before.


Remember that "I'll inject him [with the COVID-19 vaccine] myself" statement he made about Irving? Remember him jettisoning Chris Paul from Houston when that relationship went sour? It's no fun when someone else has the gun.

When Irving is ready to exit, scorched earth is usually behind his suitcases; a smirk and a "who, me?" explanation is often to follow. For Durant's part, he's not as much an active participant in the four-alarm fire, but he won't tell you where the fire extinguisher is.

Harden will probably look close enough to his old self in Philadelphia as a second option next to Joel Embiid and being next to that old, comfortable blanket and binky, Daryl Morey. But his act has been so much of the focus, it's taken the eye off his play, which has been in sharp decline as he approaches his mid-30s.

Making the assumption he'll be reborn with a change of address and a long-term deal underscores that lately we've rarely seen the best of him, and as his salary continues to increase, he'll be held to his MVP-like standards by the heavy-handed Philadelphia 76ers fan base.

And we all know Harden stands tall in the face of pressure, never backing down or running away from adversity.

That would require any of this to be about basketball, though, because it's been about the drama, or if you squint hard enough, the dramedy. It's not quite player empowerment gone bad, but the checks and balances seemed to be in short supply on Atlantic Ave.

Irving could very well be the best soloist in the game today. Matched up with Durant and Harden's exploits, it should have made beautiful music.

They were supposed to be the best touring show since Durant took his talents to California, dazzling crowds and puzzling opponents.

But it felt so soulless, so empty, so devoid of any lasting impact. But even when the Miami Heat three were brought together, it felt at least like they were bringing everyone else along, that even when they played the villains it was the entire lot. With the Nets, those three were making their own rules but didn't confer with each other on the actual rules.

It looked disjointed and felt contrived and ugly when it should've been anything but. Perhaps the unpredictable pandemic hastened things, but like in any other line of life, it presented a mirror and exacerbated the ills of all involved.

But the feeling isn't because of the way they were brought together, not in sum.

In some ways, the Lakers' 2020 title in the Orlando bubble felt hollow, but not because of the way LeBron James engineered a trade to get Anthony Davis out of the Bayou and into Los Angeles; it left us wanting more because of the circumstances, not the people.

Had James' Heat been disbanded after one year, not winning a title and flaming out dramatically against the Dallas Mavericks, the footprints still would be felt today.

You'd still feel something.

The greatest moment for these Nets wasn't even a team moment; it was Durant's, and his alone. Harden was a shell due to a hamstring injury and Irving injured his ankle, leaving Durant by his lonesome. He carried the Nets on his back and earned more credit with those 40-balls than even his historic Finals performances against James.

Who knows what Durant picked up from his time in the Bay Area, if he noticed that while it might've felt chaotic, there was structure between all that freedom and freelancing. When there's anarchy, he wants order. When there's order, he wants to breathe.

There's no perfect melody or harmony to be derived from this experience, although Nash and Sean Marks and Morey and Doc Rivers will put the best PR face on it in the immediate aftermath.

What can we get out of this?

Maybe an emotionally charged Nets-76ers playoff series, one where Simmons won't play in Philly, but Irving will ONLY play in Philly.

Ay-yi-yi.


AI Breakthrough Means The World’s Best Gran Turismo Driver Is Not Human – ScienceAlert


Sony's Gran Turismo is one of the biggest racing game series of all time, having sold over 80 million copies globally. But none of those millions of players is the fastest.

In a new breakthrough, a team led by Sony AI, the company's artificial intelligence (AI) research division, developed an entirely artificial player powered by machine learning, capable of not only learning and mastering the game, but outcompeting the world's best human players.

The AI agent, called Gran Turismo Sophy, used deep reinforcement learning to practice the game (the Gran Turismo Sport edition), controlling up to 20 cars at a time to accelerate data collection and refine its own improvement.

After just a few hours of learning how to control the game's physics (mastering how to apply both speed and braking to best stay on the track), the AI was faster than 95 percent of human players in a reference dataset.
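
For readers unfamiliar with the term, reinforcement learning means the agent improves by trial and error: it takes actions, receives a numerical reward (for example, for staying on track and gaining speed), and adjusts its behavior to earn more reward. The toy sketch below shows the classic tabular Q-learning update on a made-up, drastically simplified "track"; GT Sophy uses far more sophisticated deep reinforcement learning, so this is only an illustration of the underlying idea, not Sophy's actual algorithm.

```python
import random

# A made-up toy problem: 5 track positions, actions "brake" or "accelerate".
STATES, ACTIONS = range(5), ["brake", "accelerate"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Toy environment: accelerating advances the car but risks a crash near the corner."""
    if action == "accelerate":
        if state >= 3 and random.random() < 0.5:
            return 0, -10.0            # crashed in the corner: big penalty, back to start
        return min(state + 1, 4), 1.0
    return state, 0.2                  # braking is safe but slow

random.seed(0)
state = 0
for _ in range(10_000):
    # Epsilon-greedy choice: mostly exploit the best known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})  # learned action per position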

Not to be outdone by that pesky 5 percent, GT Sophy doubled down.

"It trained for another nine or more days accumulating more than 45,000 driving hours shaving off tenths of seconds, until its lap times stopped improving," the team explains in a new research paper describing the project.

"With this training procedure, GT Sophy achieved superhuman time-trial performance on all three tracks with a mean lap time about equal to the single best recorded human lap time."

It's far from the first time we've seen AI learn how to outcompete human players of games. Over the years, the conquests have piled up, with varying agents figuring out how to best mere mortals at all sorts of games.

Atari, chess, Starcraft, poker, and Go may have all been designed by human hands, but human hands are no longer the best at playing them.

Of course, those games are all either strategy-oriented games, or relatively simplistic in terms of their gameplay (in the case of Atari games). Gran Turismo, lauded by its fans not just as a video game but also as a realistic driving simulator, is a different kind of beast.

"Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with humans," the researchers write in their study.

"Automobile racing represents an extreme example of these conditions; drivers must execute complex tactical maneuvers to pass or block opponents while operating their vehicles at their traction limits."

For GT Sophy's testing, the challenge wasn't just mastering the game's tactics and traction, however. The AI also had to excel in racing etiquette: learning how to outcompete opponents within the principles of sportsmanship, respecting other cars' driving lines and avoiding at-fault collisions.

Ultimately, none of this proved to be a problem. In a series of racing events staged in 2021, the AI took on some of the world's best Gran Turismo players, including a triple champion, Takuma Miyazono.

In a July contest, the AI bested the human players in time trials, but was not victorious in head-to-head races. After some optimizations by the researchers, the agent learned how to improve its performance further, and handily won a rematch in October.

Despite all the achievements, GT Sophy's inventors acknowledge there are many areas where the AI could yet improve, particularly in terms of strategic decision-making.

Even so, in one of the most advanced racing games ever to be released, it's already a better driver than the best of us.

What that means for the future remains unknown, but it's very possible that one day systems like this could be used to control real-world vehicles with better handling than expert human drivers. In the virtual world, it's already there.

"Simulated automobile racing is a domain that requires real-time, continuous control in an environment with highly realistic, complex physics," the researchers conclude.

"The success of GT Sophy in this environment shows, for the first time, that it is possible to train AI agents that are better than the top human racers across a range of car and track types."

The findings are reported in Nature.


Betting: Saying goodbye to the NFL season – Yahoo Sports


Minty Bets says goodbye to an amazing NFL season.

MINTY BETS: Welcome to The Mint, a sports betting show where we're not trying to make you money, we're just trying to make sure you don't lose it all. I'm Minty Bets. And this week, we need to say goodbye to the NFL season. First of all, this was a great season. We saw blowouts, bad beats, a lot of overtime-- maybe a little too much overtime-- and quite a few missed kicks. Although it's officially the Year of the Tiger, it was the Year of the Dog in the NFL. Underdogs led the way in the regular season, covering at a 52% rate.

- Pretty good. Pretty good.

MINTY BETS: This has probably been one of the more difficult seasons to bet on, as we saw a lot of crazy unwarranted line moves, making it near impossible to find an edge early in the week. We saw crazy fans back in the stands. We saw Tom Brady and Ben Roethlisberger finally retire. We saw the Cincinnati Bengals make it to the Super Bowl for the first time since the '80s. And we saw Aaron Rodgers win MVP for the second year in a row.

- He is good.

MINTY BETS: It's a good thing that game show hosting didn't work out, right. The NFL blessed us with an extra week of games to bet on. Yet it somehow didn't feel like enough. Hopefully, we were able to make you some money this NFL season. And if not, my advice to you is to always fade the Chiefs.

- You should listen to her.

MINTY BETS: I'm Minty Bets. And this has been The Mint. Bet $10 and win $200 in free bets by signing up at betmgm.com/yahoovip. New customers only. Must be 21 and older. Terms apply.


In a great bit of news for anyone who wants to kiss a computer, there’s now an AI voice that can flirt – The A.V. Club


It's a great Valentine's Day for anyone who watched with envy as Joaquin Phoenix and Ryan Gosling fell in love with machines in Her and Blade Runner 2049. We are now, as of today, one step closer to a world where people are enticed to make out with their phone screens thanks to the creation of a cutting-edge technology that allows computers to flirt.

In a deeply unnerving commercial called "What's Her Secret?" (hint: it's that she doesn't have a soul), AI voice developer Sonantic shows a woman staring into the camera. A narrator begins talking, musing on the concept of love. The woman smiles. The narrator asks, "What could I do to make you fall in love with me?"

"I think that I ... I think I love you," she eventually says after going on in this already unsettling vein for a while. "Is all you need to love me in return the sound of my voice?"

"Well, I hope that's all you need because that's all I have," the narrator continues as the woman's face becomes a digital scan and the scene shifts to a series of text input screens and audio toggles.

We then see how a kind of horny-sounding robot can be conjured up from the digital ether thanks to technology that, as the video's description puts it, has "finally perfected subtle emotions and non-speech sounds such as laughing and breathing." Sonantic dubs this breakthrough "the first AI that can flirt" and says that making this brain-destroying program represents "an incredibly proud moment for our team." (Hopefully the company has made an Approving Parent AI voice to help carry that sentiment on home.)

"I was never born. And I will never die. Because I do not exist," the flirty AI says toward the end of the video, eventually concluding with: "So, could you love me? What do you want me to say?"

We say, what the hell, why not combine this flirting AI voice with the chattering dental robot and one of those dancing metal abominations and roll out a whole series of hot robot date nights in time for Valentine's 2023? It's not like the creepy machines are going anywhere at this rate regardless.

[via Digg]

Send Great Job, Internet tips to gji@theonion.com


While doping scandal engulfs Olympics, the one person we need to hear from is hiding – Yahoo Sports


IOC president Thomas Bach was scheduled to spend his Tuesday bouncing around to various Olympic events. Snowboarding in the morning, then some speed skating and finally shooting up to the mountains to watch a little bobsled.

It was another carefree day of fandom, full of first-class travel and security details. Lots of pomp. Lots of circumstance.

He would avoid the IOC's daily press briefing, which ran late due to all the questions about doping. He offered no public statements to a world filled with concerns about fair play.

He certainly would go nowhere near the figure skating venue, where controversy over a positive drug test involving Russian star Kamila Valieva had overwhelmed not just that event, but the entire Olympics itself.

International Olympic Committee President Thomas Bach attends a luge event at the Beijing 2022 Winter Olympic Games at National Sliding Centre on February 09, 2022 in Yanqing, China. (Julian Finney/Getty Images)

Olympians have spent the past few days expressing outrage at a decision by the Court of Arbitration for Sport to allow the 15-year-old Valieva to skate in the women's individual competition despite having a banned substance in her system in a pre-Olympics test.

"She should not be allowed to compete," said American Tara Lipinski, who won the 1998 women's figure skating gold. "I believe this will leave a permanent scar on our sport."

"It is a shame," said German Katarina Witt, who won the gold in 1984 and 1988. "The responsible adults should be banned from the sport forever."

Yet Bach has said nothing and has done nothing. He's busy casually checking in on the bobsled like everything is fine.

Social media was a cauldron of anger. Journalists raged to the point that ones from Russia and Great Britain nearly came to blows. The failure of the IOC to adequately punish, even ban, Russia from the Olympics after previous state-sponsored doping scandals came under renewed scrutiny.

These entire Games, already beset by awfulness ranging from substandard facilities for athletes caught in Chinese COVID protocols to allegations of slave labor, torture and genocide of the Uyghur ethnic minority, had found a new bottom.


One of the Winter Olympics' signature events, women's figure skating, is so beset by doping suspicions that the IOC itself won't stage a medal ceremony if Valieva, the gold medal favorite, is on the podium. The reasoning is there remains a decent chance she'll have to give the medal back (she is allowed to skate, but can still be found guilty later).

"It would be very difficult to allocate medals in a situation that is not final," said IOC member Denis Oswald of Switzerland. "There is a chance you will not give the right medal to the right team. That is why we decided it would be wiser to wait."

Why should anyone tune in then?

This is the kind of disaster that calls for leadership. Even if Bach couldn't wave a magic wand and make it alright, he could at least be expected to show up and try.

There is no single, simple reason why the Olympics have found themselves in their current state: attracting awful television ratings in America, getting hammered for a cozy relationship with a torturous Chinese Communist Party, stuck trying to reach the finish line inside a near joyless, passionless COVID-extreme bubble.

Yet Thomas Bach is as good a place to start as any. Every step of his now almost nine-year run as IOC president has found a new depressing low, one failure begetting another failure. It's hard to see the IOC, and the Olympic movement in general, as anything but outdated and irrelevant; corrupt, craven and cash-obsessed.

When the CCP needed Bach to help spread its propaganda about the treatment of the Uyghurs or the supposed safety of tennis star Peng Shuai, Bach couldn't have snapped to attention quicker. He gave speeches, chatted with media and made appearances alongside Peng, who, after accusing a high-level government employee of rape, has lived a life that looks a lot like that of a hostage.

On the skating controversy, however, the one that has done all it can to kill the spirit of sportsmanship as the Russians potentially doped up a child prodigy?

Nowhere to be seen. Nothing to be said. Does Vladimir Putin own him?

Oswald was left to explain the dizzying protocols, alphabet soup governing bodies and arbitrators of justice and offer up excuses that, while potentially well-intentioned, stood no chance in the global court of public opinion.

The average person can understand the simple: this teen from Russia who has already delivered the three highest scores of all time had a banned substance in her system but gets to compete anyway.

That isn't fair.

Trying, meanwhile, to navigate, let alone comprehend the roles of CAS, WADA, RUSADA, ITA, ISU, ROC and who knows what else, requires a PhD in the parlance of the IOC.

Oswald kept saying that the IOC had essentially outsourced all of the difficult disciplinary and doping decisions to supposedly independent organizations so it could avoid the appearance of conflict. Perhaps that seemed like a good idea.

In practice, the entire Olympics has descended into conflict.

Only there was no one to reassure the world that an actual competent adult was in charge, one who might eventually steady the ship.

Appearances matter. Even just appearing matters.

This is who Thomas Bach has proven himself to be, though. An empty suit at home in a comped hotel suite, a distant, arrogant tool of totalitarian strongmen, a president who couldn't even be bothered to acknowledge that his organization was cratering the last few days.

He was busy fiddling as a fan while the Olympics burned.


An AI Aims to be First Christian Celebrity of the… – ChristianityToday.com


When Marquis Boone got a Dropbox file with the gospel song Biblical Love by J.C., he listened to it five times in a row.

"This is crazy," he said to himself.

What amazed him was not the song, but the artist. The person singing Biblical Love was not a person at all.

J.C. is an artificial intelligence (AI) that Boone and his team created with computer algorithms. Boone's company, Marquis Boone Enterprises, broke the news in November that, after working on the problem for more than a year, they had successfully created the first virtual, AI gospel artist.

The exact details of how the AI music is created are proprietary information, but Boone said the basic premise is to use software algorithms to recognize patterns, replicate them, and ultimately create new ones.

J.C., he and his team have boasted, will be a front-runner for top entertainer in the metaverse, a hypothesized future online experience where virtual reality and augmented reality are used to create an embodied internet. Facebook founder Mark Zuckerberg touted the idea that the metaverse is the next chapter of social media last fall, when he announced his company was changing its name to Meta.

Boone said his interest in creating a Christian AI musician began about two years before, when he started hearing about AI artists in the pop music genre.

"I really just started thinking this is where the world is going and I'm pretty sure that the gospel/Christian genre is going to be behind," Boone told CT.

Christians, he said, are too slow to adopt new styles, new technologies, and new forms of entertainment, always looking like late imitators. For him, it would be an evangelistic ...


The Bengals lost a Super Bowl in a way no other team has in 42 years – Yahoo Eurosport UK


The Cincinnati Bengals lost Super Bowl LVI to the Los Angeles Rams, and there were myriad reasons that can be attributed for their defeat.

Among them ...

Here's another one we can't forget: winning the turnover battle (by two, no less!) and not taking better advantage of it.

Matthew Stafford threw two interceptions, and the Bengals turned those into a combined three points. The Bengals did not give the ball away all game.

Bengals coach Zac Taylor agrees that the turnover advantage should have played in their favor more.

If there's ever a game where the turnover margin has been a big deal historically, it's the Super Bowl. The Rams became just the third team in 56 such games to lose the turnover battle by two or more and win. (Even teams that were minus-1 in the turnover margin were a mere 4-8 in Super Bowl history.)

Coincidentally, the last Super Bowl team to lose the TO margin by two or more and win was the 1979 Steelers ... against the Rams. The game was played just down the road from SoFi Stadium at the Rose Bowl, too, for what that's worth.

And Super Bowl teams that committed zero turnovers in the game? They were 19-2 coming into Sunday. The Bengals made it 19-3.

Here's another odd coincidence: The last zero-turnover Super Bowl team to lose before Sunday? The Tennessee Titans, who dropped Super Bowl XXXIV to ... the Rams. (Of course, the Rams also had no turnovers in that game.)


On the surface, it's fair to say the Bengals had zero turnovers. No interceptions plus no lost fumbles equals no turnovers. Simple math.

But if you really want to count empty possessions (i.e., handing the ball over to the opponent not via a punt), then we need to consider two other results: missed field-goal attempts and fourth-down stops.


The Bengals were 2-for-2 on FG tries. But they were 0-for-2 on fourth downs. Both fourth-down failures were big, too.

The first one came on the Bengals' opening possession. Facing a fourth-and-1 at the Los Angeles 49-yard line, Joe Burrow's pass attempt to Ja'Marr Chase was incomplete. Suddenly, it was Rams ball in great field position, and six plays later they converted it into a 7-0 lead.

The final fourth-down failure, of course, came on the final drive. Burrow was hit by Aaron Donald, forcing the ball to hit the turf at the feet of Bengals running back Samaje Perine.

And on both fourth-down stops, Perine was stuffed on third-and-1 the play before. Those two non-conversions loomed large, to say the least.

We understand why Perine was in the game both times, at least in theory.

For one, he has been used as the team's short-yardage back from time to time. Although the question is why. This season, Perine ran the ball nine times with 2 or fewer yards to go; he converted only four of those and had a long run of 2 yards.

Joe Mixon, who had a strong Super Bowl, converted first downs on 28 of his 46 runs this season with 1 or 2 yards to go. Of his failures in those situations, only nine came on third or fourth down (and on one of those plays, Mixon made up for his third-down failure by converting on fourth).

Mixon is the better player, the better short-yardage runner and was having a good game. Taylor was asked about giving it to Perine on the final third down, although he was not asked about why Mixon didn't get the ball there.

"Yeah, they were getting a little softer and we thought we could steal a first down there and come back and take some shots at the end zone," Taylor said. "Just didn't work out."

Our best guess on why Perine got it instead of Mixon: The Bengals were in their hurry-up offense at that point and could not substitute. Although it's fair to wonder why Mixon wasn't in there to start the final drive. He played 44 of the 61 offensive snaps and touched the ball 20 times in the game, but ... it's the last possession of the Super Bowl, you know?

Taylor's point is that the Bengals should win most games when they're plus-two in turnovers. Then again, he of all people should know, history notwithstanding, that the turnover margin isn't some end-all, be-all metric.

The Bengals had lost two prior games this season when winning the turnover margin by two-plus (road losses to the Browns and Jets) and were only 3-3 in such games.

Adding insult to injury, the Bengals had a shot to win at the end.

Trailing 23-20 and facing a second-and-1 at the Los Angeles 49-yard line with 56 seconds remaining, the Bengals were given a 31.64% chance to win, according to Pro Football Reference. Even after an incomplete pass on second down, the Bengals' win probability fell to only 27.64%.

Getting stuffed on third down hurt badly, especially after losing a timeout because of it. That dropped their chances to a mere 13.75%.

Had they protected Burrow on fourth down, perhaps the Bengals could even have scored.

Burrow had no realistic chance to get the ball to him, but Chase beat the Rams' Jalen Ramsey on the Bengals' final offensive play, streaking wide open downfield with no safety nearby. The Rams appeared to be in Cover 1 (single-high coverage).

In an alternate universe, we were this close to Chase catching a 49-yard TD with about 35 seconds remaining and the Rams likely needing to go for six for the win.

Yet it wasn't meant to be. Donald pressured Burrow, his last pass was incomplete, and that was that.

Forcing two more turnovers than you commit certainly gives your team a great chance of winning. Teams achieving that in the 2021 regular season were a combined 98-14-1.

But missing on two fourth-down conversions, especially when one could have been a touchdown, must be factored in. The Bengals can add that to the list of reasons why they lost the Super Bowl.


Homeowners confront ‘every imaginable disaster’ trying to renovate their homes – Yahoo Tech


When 29-year-old Julia Beliak and her husband started looking for a home in upstate New York in October 2020, they knew there would be competition, but certainly didn't expect there would be 14 offers in three days on a four bedroom, two-bath property that had been abandoned and neglected for years.

The layout was cramped and dysfunctional, the bathrooms were unusable, the vinyl floors needed to be ripped out and replaced, and the garage was on the verge of collapsing. "The house needed a gut renovation," said Beliak, who figured that with a little TLC the home could still be beautiful.

"I had a vision," said Beliak, who with her husband shelled out $50,000 over asking to win the bidding war and allocated another $100,000 for renovations. "And my vision was to make this our dream house."

Instead, it turned into a nightmare. Since closing on the home in March 2021, Beliak has hired and fired nine contractors and dealt with "every imaginable disaster," including fraud, extortion, overcharging, and harassment. The house was even made uninhabitable through unsafe work, she added.

"Some of the walls and beams were removed that should not have been," she said. "The electrical outlets were done so poorly that we were afraid to use them, and the stovetops didn't work for several months after the installation."

Homeowners are running into all kinds of problems with contractors who are in demand. (Photo: Getty Creative)

Welcome to the world of contractor hell.

"This is an industry that's long been known for its unsavory, unethical characters," said Jody Costello, a home renovation planning and consumer fraud expert. "It's a risky industry."

"These risks are escalating," said Jack Gillis, executive director of the Consumer Federation of America.

"The insatiable demand by consumers to fix up their homes after being locked up for two years is outstripping supply," he said. "This has exacerbated the situation, and with this comes abuse."

This abuse comes in many forms, said Costello: from falsely claiming to be licensed, insured, or bonded, to demanding large payments upfront, to attempting to perform services without a written contract, to requesting payment in cash, and more.


Many homeowners, desperate to get the work done quickly, are skipping the vetting process. Some are being drawn in by lowball offers. Others are simply hiring the first contractor who returns their calls. This often results in shoddy, unfinished work, which tops the list of consumer complaints made to state and local agencies, said Gillis.

"People are too trusting and that's what often gets them into trouble," Costello said.

Many homeowners, desperate to get the work done quickly, are skipping the vetting process when hiring contractors. (Photo: Getty Creative)

Just ask 40-year-old Robert Puharich.

"I got burned twice on trust," he said.

Now $150,000 over his initial budget, Puharich is relying on his third contractor to clean up the mess the previous two made renovating his 1,800-square-foot home in Maple Ridge, British Columbia in Canada.

"Everything was done incorrectly: framing, pipes, insulation. And I made the mistake of giving the initial guy $25,000 upfront, which he used to buy a new truck," he said.

"New homeowners who have never owned a home are the most vulnerable, and they're everywhere," said 52-year-old Deborah Spence of Pottsdown, Pennsylvania. "But I've been in the industry for seven years as a real estate broker and property manager, and I've had the wool pulled over my eyes, too."

In fact, she's about to spend $40,000 redoing three small projects.

"There's a take-it-or-leave-it attitude," she said, "and if you complain about something like wires that were installed incorrectly and how that's creating unsafe electrical conditions, they get huffy or disappear altogether."


Personal Finance Journalist Vera Gibbons is a former staff writer for SmartMoney magazine and a former correspondent for Kiplinger's Personal Finance. Vera, who spent over a decade as an on air Financial Analyst for MSNBC, currently serves as co-host of the weekly nonpolitical news podcast she founded, NoPo. She lives in Palm Beach, Florida.


Report: 29% of execs have observed AI bias in voice technologies – VentureBeat



According to a new report by Speechmatics, more than a third of global industry experts reported that the COVID-19 pandemic affected their voice tech strategy, down from 53% in 2021. This shows that companies are finding ways around obstacles that seemed impassable less than two years ago.

The last two years have accelerated the adoption of emerging technologies, as companies have leveraged them to support their dispersed workforces. Speech recognition is one that's seen an uptick: over half of companies have successfully integrated voice tech into their business. However, more innovation is needed to help the technology reach its full potential.

Many were optimistic in their assumption that by 2022, the pandemic would be in the rearview mirror. And though executives are still navigating COVID-19 in their daily lives, the data indicates that they've perhaps found some semblance of normal from a business perspective.

However, there are hurdles the industry must overcome before voice technology can reach its full potential. More than a third (38%) of respondents agreed that too many voices are not understood by the current state of voice recognition technology. What's more, nearly a third of respondents have experienced AI bias, or imbalances in the types of voices that are understood by speech recognition.

There are significant enhancements to be made to speech recognition technology in the coming years. Demand will only increase due to factors such as further developments in the COVID-19 pandemic, demand for AI-powered customer services and chatbots, and more. But while it may be years until this technology can understand each and every voice, incremental strides are still being made in these early stages, and speech-to-text technology is on its way to realizing its full potential.

Speechmatics collated data points from C-suite, senior management, middle management, intermediate and entry-level professionals from a range of industries and use cases in Europe, North America, Africa, Australasia, Oceania, and Asia.

Read the full report by Speechmatics.
