Will Generative AI Supplant or Supplement Hollywood's Workforce? – Variety


Note: This article is based on Variety Intelligence Platform's special report "Generative AI & Entertainment," available only to subscribers.

The rapidly advancing creative capabilities of generative AI have raised questions about whether artificial intelligence will replace creative workers across film and TV production, game development and music creation.

Talent may instead come to view and use generative AI in a more straightforward way: as simply a new creative tool in their belt, just as other disruptive technologies have entered and changed how people make and distribute their creative work.

In effect, there will always be a need for people to be the primary agents in the creative development process.

"Talent will incorporate AI tools into their existing processes, or use them to make certain aspects of their process more efficient and scalable," said Brent Weinstein, chief development officer at Candle Media, who has worked extensively with content companies and creators in developing next-gen digital-media strategies and pioneering new businesses and models at the intersection of content and technology.

The disruptive impact of generative AI will certainly be felt in numerous creative roles, but fears of a total machine takeover of creative professions are most likely overblown. Experts believe generative AI won't be a direct substitute for artists, but it can be a tool that augments their capabilities.

"For the type of premium content that has always defined the entertainment industry, the starting point will continue to be extraordinarily and uniquely talented artists," Weinstein continued. "Actors, writers, directors, producers, musicians, visual effects supervisors, editors, game creators and more, along with a new generation of artists that, similar to the creators who figured out YouTube early on, learns to master these innovative new tools."

Joanna Popper, chief metaverse officer at CAA, brings expertise in emerging technologies relevant to creative talent and their potential to impact content creation, distribution and community engagement.

"Ideally, creatives use AI tools to collaborate and enhance our abilities, similar to creatives using technical tools since the beginning of filmmaking," Popper said. "We've seen technology used throughout history to help filmmakers and content creators produce stories in innovative ways, enable stories to reach new audiences and/or enable audiences to interact with those stories in different ways."

A Goldman Sachs study released last month on how AI would impact economic growth estimated that 26% of work tasks would be automated within the arts, design, sports, entertainment and media industries, roughly in line with the average across all industries.

In February, Netflix received backlash after releasing a short anime film that partly used AI-driven animation. Voice actors in Latin America who were replaced by automated software have also spoken out.

Julian Togelius, associate professor of computer science and engineering and director of the Game Innovation Lab at the NYU Tandon School of Engineering, has done extensive research in artificial intelligence and games. "Generative AI is more like a new toolset that people need to master within existing professions in the game industry," he said. "In the end, someone still needs to use the tool. People will always supervise and initiate the process, so there's no true replacement. Game developers now just have more powerful tools."


How artificial intelligence is matching drugs to patients – BBC

17 April 2023


Dr Talia Cohen Solal, left, is using AI to help her and her team find the best antidepressants for patients

Dr Talia Cohen Solal sits down at a microscope to look closely at human brain cells grown in a petri dish.

"The brain is very subtle, complex and beautiful," she says.

A neuroscientist, Dr Cohen Solal is the co-founder and chief executive of Israeli health-tech firm Genetika+.

Established in 2018, the company says its technology can best match antidepressants to patients, to avoid unwanted side effects, and make sure that the prescribed drug works as well as possible.

"We can characterise the right medication for each patient the first time," adds Dr Cohen Solal.

Genetika+ does this by combining the latest in stem cell technology - the growing of specific human cells - with artificial intelligence (AI) software.

From a patient's blood sample its technicians can generate brain cells. These are then exposed to several antidepressants and monitored for cellular changes called "biomarkers".

This information, taken with a patient's medical history and genetic data, is then processed by an AI system to determine the best drug for a doctor to prescribe and the dosage.
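Genetika+ has not published its model, but the step described above, combining per-drug cellular readouts with patient-history data to pick a best candidate, can be sketched as a simple weighted ranking. Everything below is a hypothetical illustration: the drug names are real drug classes, but the scores, weights and function names are invented, not the company's method.

```python
# Hypothetical sketch: rank candidate antidepressants for one patient by
# blending a cellular biomarker response score (0-1, from lab readouts)
# with a patient-history suitability score (0-1). The weighting and all
# numbers are invented for illustration only.

def rank_drugs(biomarker_scores, history_scores, weight=0.7):
    """Return drug names sorted best-first by a weighted blend of
    cellular response and patient-history suitability."""
    return sorted(
        biomarker_scores,
        key=lambda drug: weight * biomarker_scores[drug]
        + (1 - weight) * history_scores[drug],
        reverse=True,
    )

# Toy example: strong cellular response to sertraline outweighs a
# middling history score under this weighting.
ranked = rank_drugs(
    {"sertraline": 0.82, "fluoxetine": 0.55, "venlafaxine": 0.64},
    {"sertraline": 0.40, "fluoxetine": 0.90, "venlafaxine": 0.50},
)
print(ranked[0])  # "sertraline" ranks first under these toy numbers
```

A real system would replace the hand-set weight with a model trained on outcomes, but the shape of the decision, many candidate drugs scored against one patient's data, is the same.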

Although the technology is currently still in the development stage, Tel Aviv-based Genetika+ intends to launch commercially next year.


The global pharmaceutical sector had revenues of $1.4 trillion in 2021

In an example of how AI is increasingly being used in the pharmaceutical sector, the company has secured funding from the European Union's European Research Council and European Innovation Council. Genetika+ is also working with pharmaceutical firms to develop new precision drugs.

"We are in the right time to be able to marry the latest computer technology and biological technology advances," says Dr Cohen Solal.

Dr Sailem, a senior lecturer of biomedical AI and data science at King's College London, says that AI has so far helped with everything "from identifying a potential target gene for treating a certain disease, and discovering a new drug, to improving patient treatment by predicting the best treatment strategy, discovering biomarkers for personalised patient treatment, or even prevention of the disease through early detection of signs for its occurrence".

New Tech Economy is a series exploring how technological innovation is set to shape the new emerging economic landscape.

Yet fellow AI expert Calum Chace says that the take-up of AI across the pharmaceutical sector remains "a slow process".

"Pharma companies are huge, and any significant change in the way they do research and development will affect many people in different divisions," says Mr Chace, who is the author of a number of books about AI.

"Getting all these people to agree to a dramatically new way of doing things is hard, partly because senior people got to where they are by doing things the old way.

"They are familiar with that, and they trust it. And they may fear becoming less valuable to the firm if what they know how to do suddenly becomes less valued."

However, Dr Sailem emphasises that the pharmaceutical sector shouldn't be tempted to race ahead with AI, and should employ strict measures before relying on its predictions.

"An AI model can learn the right answer for the wrong reasons, and it is the researchers' and developers' responsibility to ensure that various measures are employed to avoid biases, especially when trained on patients' data," she says.

Hong Kong-based Insilico Medicine is using AI to accelerate drug discovery.

"Our AI platform is capable of identifying existing drugs that can be re-purposed, designing new drugs for known disease targets, or finding brand new targets and designing brand new molecules," says co-founder and chief executive Alex Zhavoronkov.


Alex Zhavoronkov says that using AI is helping his firm to develop new drugs more quickly than would otherwise be the case

Its most developed drug, a treatment for a lung condition called idiopathic pulmonary fibrosis, is now being clinically trialled.

Mr Zhavoronkov says it typically takes four years for a new drug to get to that stage, but that thanks to AI, Insilico Medicine achieved it "in under 18 months, for a fraction of the cost".

He adds that the firm has another 31 drugs in various stages of development.

Back in Israel, Dr Cohen Solal says AI can help "solve the mystery" of which drugs work.


Marrying Human Interaction and AI with Navid Alipour – Healio

April 20, 2023

43 min listen

Disclosures: Jain reports no relevant financial disclosures. Alipour reports he is the founder of CureMatch.


In this episode, host Shikha Jain, MD, speaks with CureMatch CEO Navid Alipour about the rise of AI in the health care space, how technology can elevate the ways in which we diagnose and deliver health care and more.

Navid Alipour is the co-founder and CEO of CureMatch, a company focused on artificial intelligence (AI) technology. He is also a founder of an AI-focused VC fund, Analytics Ventures.

We'd love to hear from you! Send your comments/questions to Dr. Jain at oncologyoverdrive@healio.com. Follow us on Twitter @HemOncToday and @ShikhaJainMD. Alipour can be reached at curematch.com and curemetrix.com, or on Twitter @CureMatch and @CureMetrix.



These are the tech jobs most threatened by ChatGPT and A.I. – CNBC

As if there weren't already enough layoff fears in the tech industry, add ChatGPT to the list of things workers are worrying about, as the artificial intelligence-based chatbot trickles its way into the workplace.

So far this year, the tech industry already has cut 5% more jobs than it did in all of 2022, according to Challenger, Gray & Christmas.

The rate of layoffs is on track to pass the job loss numbers of 2001, the worst year for tech layoffs due to the dot-com bust.

As layoffs continue to mount, workers are not only scared of being laid off, they're scared of being replaced altogether. A recent Goldman Sachs report found 300 million jobs around the world stand to be impacted by AI and automation.

But ChatGPT and AI shouldn't ignite fear among employees because these tools will help people and companies work more efficiently, according to Sultan Saidov, co-founder and president of Beamery, a global human capital management software-as-a-service company, which has its own GPT, or generative pretrained transformer, called TalentGPT.

"It's already being estimated that 300 million jobs are going to be impacted by AI and automation," Saidov said. "The question is: Does that mean that those people will change jobs or lose their jobs? I think, in many cases, it's going to be changed rather than lose."

ChatGPT is one type of GPT tool that uses learning models to generate human-like responses, and Saidov says GPT technology can help workers do more than just have conversations. Especially in the tech industry, specific jobs stand to be impacted more than others.

Saidov points to creatives in the tech industry, like designers, video game creators, photographers, and those who create digital images, as examples of jobs that will likely not be completely eradicated. Instead, the technology will help these roles create more and do their jobs quicker, he said.

"If you look back to the industrial revolution, when you suddenly had automation in farming, did it mean fewer people were going to be doing certain jobs in farming?" Saidov said. "Definitely, because you're not going to need as many people in that area, but it just means the same number of people are going to different jobs."

Just like similar trends in history, creative jobs will be in demand after the widespread inclusion of generative AI and other AI tech in the workplace.

"With video game creators, if the number of games made globally doesn't change year over year, you'll probably need fewer game designers," Saidov said. "But if you can create more as a company, then this technology will just increase the number of games you'll be able to get made."

Due to ChatGPT buzz, many software developers and engineers are apprehensive about their job security, prompting some to learn new skills, such as generative AI engineering, to add to their resumes.

"It's unfair to say that GPT will completely eliminate jobs, like developers and engineers," says Sameer Penakalapati, chief executive officer at Ceipal, an AI-driven talent acquisition platform.

But even though these jobs will still exist, their tasks and responsibilities could likely be diminished by GPT and generative AI.

There's an important distinction to be made between GPT specifically and generative AI more broadly when it comes to the job market, according to Penakalapati. GPT is a mathematical or statistical model designed to learn patterns and provide outcomes. But other forms of generative AI can go further, reconstructing different outcomes based on patterns and learnings, and almost mirroring a human brain, he said.

As an example, Penakalapati says if you look at software developers, engineers, and testers, GPT can generate code in a matter of seconds, giving software users and customers exactly what they need without the back and forth of relaying needs, adaptations, and fixes to the development team. GPT can do the job of a coder or tester instantly, rather than the days or weeks it may take a human to generate the same thing, he said.

Generative AI can more broadly impact software engineers, and specifically devops (development and operations) engineers, Penakalapati said, from the development of code to deployment, conducting maintenance, and making updates in software development. In this broader set of tasks, generative AI can mimic what an engineer would do through the development cycle.

While development and engineering roles are quickly adapting to these tools in the workplace, Penakalapati said it'll be impossible for the tools to totally replace humans. More likely we'll see a decrease in the number of developers and engineers needed to create a piece of software.

"Whether it's a piece of code you're writing, whether you're testing how users interact with your software, or whether you're designing software and choosing certain colors from a color palette, you'll always need somebody, a human, to help in the process," Penakalapati said.

While GPT and AI will impact some roles more heavily than others, the incorporation of these tools will affect every knowledge worker, commonly defined as anyone who uses or handles information in their job, according to Michael Chui, a partner at the McKinsey Global Institute.

"These technologies enable the ability to create first drafts very quickly, of all kinds of different things, whether it's writing, generating computer code, creating images, video, and music," Chui said. "You can imagine almost any knowledge worker being able to benefit from this technology and certainly the technology provides speed with these types of capabilities."

A recent study by OpenAI, the creator of ChatGPT, found that roughly 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of learning models in GPT tech, while roughly 19% of workers might see 50% of their tasks impacted.

Chui said workers today can't remember a time when they didn't have tools like Microsoft Excel or Microsoft Word, so, in some ways, we can predict that workers in the future won't be able to imagine a world of work without AI and GPT tools.

"Even technologies that greatly increased productivity, in the past, didn't necessarily lead to having fewer people doing work," Chui said. "Bottom line is the world will always need more software."


Deepfake porn could be a growing problem amid AI race – The Associated Press

NEW YORK (AP) – Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns.

But experts fear the darker side of the easily accessible tools could worsen something that primarily harms women: nonconsensual deepfake pornography.

Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology first began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors.

Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites. And some sites have been offering users the opportunity to create their own images, essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners.

The problem, experts say, grew as it became easier to make sophisticated and visually compelling deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out novel content using existing data.

"The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button," said Adam Dodge, the founder of EndTAB, a group that provides training on technology-enabled abuse. "And as long as that happens, people will undoubtedly ... continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images."

Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when out of curiosity one day she used Google to search an image of herself. To this day, Martin says she doesn't know who created the fake images, or videos of her engaging in sexual intercourse that she would later find. She suspects someone likely took a picture posted on her social media page or elsewhere and doctored it into porn.

Horrified, Martin contacted different websites for a number of years in an effort to get the images taken down. Some didn't respond. Others took them down, but she soon found them up again.

"You cannot win," Martin said. "This is something that is always going to be out there. It's just like it's forever ruined you."

The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment, essentially blaming her for the images instead of the creators.

Eventually, Martin turned her attention towards legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they don't comply with removal notices for such content from online safety regulators.

But governing the internet is next to impossible when countries have their own laws for content that's sometimes made halfway around the world. Martin, currently an attorney and legal researcher at the University of Western Australia, says she believes the problem has to be controlled through some sort of global solution.

In the meantime, some AI companies say they're already curbing access to explicit images.

OpenAI says it removed explicit content from data used to train the image generating tool DALL-E, which limits the ability of users to create those types of images. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.

Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. Those changes came following reports that some users were creating celebrity inspired nude pictures using the technology.

Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques like image recognition to detect nudity and returns a blurred image. But it's possible for users to manipulate the software and generate what they want, since the company releases its code to the public. Bishara said Stability AI's license extends to third-party applications built on Stable Diffusion and strictly prohibits any misuse for illegal or immoral purposes.
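The two-stage approach Bishara describes, a keyword check on the request plus image recognition on the output, can be sketched in a few lines. This is an illustration only, not Stability AI's actual filter: the keyword list is a tiny stand-in, and the image classifier here is a placeholder where a real system would run a trained nudity-detection model.

```python
# Illustrative sketch of a two-stage content filter: a fast keyword check
# on the prompt, then an image-level check on the generated output,
# returning nothing (or a blurred image) when a stage flags the request.
# The keyword set and the detector below are placeholders for
# illustration, not any vendor's real implementation.

BLOCKED_KEYWORDS = {"nude", "nsfw"}  # hypothetical; real lists are far larger

def looks_unsafe(image_bytes):
    """Stand-in for an image-recognition classifier that detects nudity."""
    return False  # a real system would run a trained model here

def generate_filtered(prompt, generate, blur):
    """Run `generate` only if the prompt passes the keyword check,
    then blur the result if the image-level check flags it."""
    if set(prompt.lower().split()) & BLOCKED_KEYWORDS:
        return None  # refuse at the prompt stage
    image = generate(prompt)
    return blur(image) if looks_unsafe(image) else image
```

As the article notes, open-sourcing the code means users can strip such checks out, which is exactly why the filter is a policy measure rather than a hard guarantee.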

Some social media companies have also been tightening up their rules to better protect their platforms against harmful materials.

TikTok said last month all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they're fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.

The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open on his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.

Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content, even if it's intended to express outrage, will be removed and will result in an enforcement, the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.

Other companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.

Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn is not prevalent, but one report released in 2019 by the AI firm DeepTrace Labs found it was almost entirely weaponized against women and the most targeted individuals were western actresses, followed by South Korean K-pop singers.

The same app removed by Google and Apple had run ads on Meta's platform, which includes Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement the company's policy restricts both AI-generated and non-AI adult content, and it has restricted the app's page from advertising on its platforms.

In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool called Take It Down that allows teens to report explicit images and videos of themselves from the internet. The reporting site works for regular images and AI-generated content, which has become a growing concern for child safety groups.

"When people ask our senior leadership, what are the boulders coming down the hill that we're worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes," said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which operates the Take It Down tool.

"We have not ... been able to formulate a direct response yet to it," Portnoy said.


AI cameras: More than 2 on two-wheelers, even if children, will invite fine – Onmanorama

Two-wheelers ferrying more than two people, including children, will be penalised, as the Artificial Intelligence (AI) cameras that become operational in Kerala from April 20 will treat this as a traffic violation.

A total of 726 AI cameras have been installed on state and national highways in Kerala.

Transport Commissioner S Sreejith said the fines will be imposed on five types of violations. The fine for more than two persons travelling on a two-wheeler is Rs 2,000.

"More than two persons travelling on a two-wheeler is a traffic violation even now. But such loose execution of the law will not be allowed once the AI cameras are operational," said S Sreejith.

"Helmetless travel, mobile phone usage, not using seat-belts, red light violation and more than two persons riding a two-wheeler are the violations that will be penalised in the first phase. The visuals of those who follow the rules will not be captured by the cameras," he said.

'Change the culture'
The Transport Commissioner said it was about time two-wheeler users adopted safe practices on the road. "We have to stop the culture of ferrying a whole middle-class family in a two-wheeler. If four people need to travel, arrange an appropriate vehicle or use two two-wheelers," said Sreejith.

'Bluetooth calls ok'
Seat-belt usage among front-seat passengers will be checked in the first phase, said S Sreejith. "Using the Bluetooth system to make calls will not be a violation. But other practices will be penalised."


Microsoft reportedly working on its own AI chips that may rival Nvidia’s – The Verge

Microsoft is reportedly working on its own AI chips that can be used to train large language models and avoid a costly reliance on Nvidia. The Information reports that Microsoft has been developing the chips in secret since 2019, and some Microsoft and OpenAI employees already have access to them to test how well they perform for the latest large language models like GPT-4.

Nvidia is the key supplier of AI server chips right now, with companies racing to buy up these chips and estimates suggesting OpenAI will need more than 30,000 of Nvidia's A100 GPUs for the commercialization of ChatGPT. Nvidia's latest H100 GPUs are selling for more than $40,000 on eBay, illustrating the demand for high-end chips that can help deploy AI software.

While Nvidia races to build as many as possible to meet demand, Microsoft is reportedly looking in-house and hoping it can save money on its AI push. Microsoft has reportedly accelerated its work on Athena, the codename for a project to build its own AI chips. While it's not clear if Microsoft will ever make these chips available to its Azure cloud customers, the software maker is reportedly planning to make its AI chips available more broadly inside Microsoft and OpenAI as early as next year. Microsoft also reportedly has a road map for the chips that includes multiple future generations.

Microsoft's custom SQ1 processor. Photo by Amelia Holowaty Krales / The Verge

Microsoft's own AI chips aren't said to be direct replacements for Nvidia's, but the in-house efforts could cut costs significantly as Microsoft continues its push to roll out AI-powered features in Bing, Office apps, GitHub, and elsewhere.

Microsoft has also been working on its own ARM-based chips for several years. Bloomberg reported in late 2020 that Microsoft was looking at designing its own ARM-based processors for servers and possibly even a future Surface device. We haven't seen those ARM chips emerge yet, but Microsoft has worked with AMD and Qualcomm on custom chips for its Surface Laptop and Surface Pro X devices.

If Microsoft is working on its own AI chips, it would be the latest in a line of tech giants. Amazon, Google, and Meta also have their own in-house chips for AI, but many companies are still relying on Nvidia chips to power the latest large language models.
