Get ready for the next generation of AI – MIT Technology Review

Posted: October 4, 2022 at 1:21 pm

Researchers from Google also submitted a paper to the conference about their new model called DreamFusion, which generates 3D images based on text prompts. The 3D models can be viewed from any angle, the lighting can be changed, and the model can be plonked into any 3D environment.

Don't expect that you'll get to play with these models anytime soon. Meta isn't releasing Make-A-Video to the public yet. That's a good thing. Meta's model is trained using the same open-source image-data set that was behind Stable Diffusion. The company says it filtered out toxic language and NSFW images, but that's no guarantee that they will have caught all the nuances of human unpleasantness when data sets consist of millions and millions of samples. And the company doesn't exactly have a stellar track record when it comes to curbing the harm caused by the systems it builds, to put it lightly.

The creators of Phenaki write in their paper that while the videos their model produces are not yet indistinguishable in quality from real ones, "it is within the realm of possibility, even today." The model's creators say that before releasing their model, they want to get a better understanding of data, prompts, and filtering outputs, and measure biases in order to mitigate harms.

It's only going to become harder and harder to know what's real online, and video AI opens up a slew of unique dangers that audio and images don't, such as the prospect of turbo-charged deepfakes. Platforms like TikTok and Instagram are already warping our sense of reality through augmented facial filters. AI-generated video could be a powerful tool for misinformation, because people have a greater tendency to believe and share fake videos than fake audio and text versions of the same content, according to researchers at Penn State University.

In conclusion, we haven't come even close to figuring out what to do about the toxic elements of language models. We've only just started examining the harms around text-to-image AI systems. Video? Good luck with that.

The EU wants to put companies on the hook for harmful AI

The EU is creating new rules to make it easier to sue AI companies for harm. A new bill published last week, which is likely to become law in a couple of years, is part of a push from Europe to force AI developers not to release dangerous systems.

The bill, called the AI Liability Directive, will add teeth to the EU's AI Act, which is set to become law around a similar time. The AI Act would require extra checks for high-risk uses of AI that have the most potential to harm people. This could include AI systems used for policing, recruitment, or health care.
