Artists astound with AI-generated film stills from a parallel universe – Ars Technica

An AI-generated image from an #aicinema still series called "Vinyl Vengeance" by Julie Wieland, created using Midjourney.

Since last year, a group of artists have been using an AI image generator called Midjourney to create still photos of films that don't exist. They call the trend "AI cinema." We spoke to one of its practitioners, Julie Wieland, and asked her about her technique, which she calls "synthography," short for synthetic photography.

Last year, image synthesis models like DALL-E 2, Stable Diffusion, and Midjourney began allowing anyone with a text description (called a "prompt") to generate a still image in many different styles. The technique has been controversial among some artists, but other artists have embraced the new tools and run with them.
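For readers curious what prompt-driven generation looks like from a scripting point of view, here is a minimal sketch using the open source Stable Diffusion model through Hugging Face's diffusers library. Midjourney and DALL-E 2 are driven through their own interfaces, so the model checkpoint, prompt wording, and settings below are illustrative assumptions rather than what the artists quoted in this article actually use.

```python
# Minimal sketch: text prompt -> still image with Stable Diffusion via diffusers.
# Illustrative only; Midjourney and DALL-E 2 are accessed through their own interfaces.
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (weights download on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A film-still style prompt; the wording and settings are assumptions for illustration.
prompt = (
    "cinematic film still, 35mm photo, neon-lit record store at night, "
    "moody lighting, shallow depth of field"
)

image = pipe(
    prompt,
    width=768,            # wide, film-like framing (dimensions must be multiples of 8)
    height=448,
    num_inference_steps=30,
    guidance_scale=7.5,   # how strongly the output follows the prompt
).images[0]

image.save("film_still.png")
```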

While anyone with a prompt can make an AI-generated image, it soon became clear that some people possessed a special talent for finessing these new AI tools to produce better content. As with painting or photography, the human creative spark is still necessary to produce notable results consistently.

Not long after the wonder of generating solo images emerged, some artists began creating multiple AI-generated images with the same theme, and they did it using a wide, film-like aspect ratio. They strung them together to tell a story and posted them on Twitter with the hashtag #aicinema. Due to technological limitations, the images didn't move (yet), but the group of pictures gave the aesthetic impression that they all came from the same film.

The fun part is that these films don't exist.

The first tweet we could find that included the #aicinema tag and the familiar four film-style images with a related theme came from Jon Finger on September 28, 2022. Wieland, a graphic designer by day who has been practicing AI cinema for several months now, acknowledges Finger's pioneering role in the art form, along with another artist. "I probably saw it first from John Meta and Jon Finger," she says.

It's worth noting that the AI cinema movement in its current still-image form may be short-lived once text-to-video models such as Runway's Gen-2 become more capable and widespread. But for now, we'll attempt to capture the zeitgeist of this brief moment in AI time.

To get more of an inside look at the #aicinema movement, we spoke to Wieland, who's based in Germany and has racked up a sizable following on Twitter by posting eye-catching works of art generated by Midjourney. We've previously featured her work in an article about Midjourney v5, a recent upgrade to the model that added more realism.

AI art has been a fruitful field for Wieland, who feels that Midjourney not only gives her a creative outlet but speeds up her professional workflow. This interview was conducted via Twitter direct messages, and her answers have been edited for clarity and length.

Images from an AI cinema still image series called "la dolce vita" by Julie Wieland, generated with Midjourney v5 and refined with Photoshop.

Ars: What inspired you to create AI-generated film stills?

Wieland: It started out with dabbling in DALL-E when I finally got access after a few weeks on the waitlist. To be honest, I don't much like the "painted astronaut dog in space" aesthetic that was very popular in the summer of 2022, so I wanted to test what else is out there in the AI universe. I thought that photography and movie stills would be really hard to nail, but I found ways to get good results, and I used them pretty quickly in my day-to-day job as a graphic designer for mood boards and pitches.

With Midjourney, I cut the time I spend looking for inspiration on Pinterest and stock sites from two days of work to maybe 24 hours, because I can generate the exact feeling I need and get it across so clients know how a project will "feel." Onboarding illustrators, photographers, and videographers has never been easier.

A photo of graphic designer Julie Wieland.

Julie Wieland

Ars: You often call yourself a "synthographer" and your artform "synthography." Can you explain why?

Wieland: In my current exploration of AI-based works, I find "synthographer" to be the most logical term to apply to me personally. While photographers are able to capture real moments in time, synthographers are able to capture moments that never happened and never will.

When asked, I usually refer to Stephan Ango's words on synthography: "This new kind of camera replicates what your imagination does. It receives words and then synthesizes a picture from its experience seeing millions of other pictures. The output doesn't have a name yet, but I'll call it a synthograph (meaning synthetic drawing)."

Ars: What process do you use to create your AI cinema images?

Wieland: My process right now looks like this. I use Midjourney for the "original" or "raw" images, then do outpainting (and small inpainting) in DALL-E 2. Finally, I do editing and color correction in Adobe Photoshop or Adobe Lightroom.
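Her tools are driven through their own apps, but the outpainting step she describes, extending a generated frame beyond its original borders, can be approximated in code with an open inpainting model. The sketch below pads a still onto a wider canvas and asks the model to fill the masked border; the model checkpoint, file names, padding amount, and prompt are assumptions for illustration, not her actual workflow.

```python
# Rough scriptable analogue of the outpainting step: pad a still onto a wider
# canvas and let an inpainting model fill the masked border region.
# Assumptions: the input file name, padding width, and prompt are illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

still = Image.open("raw_still.png").convert("RGB")   # e.g. a 512x512 generation
pad = 128                                            # pixels to add on each side

# Place the original in the center of a wider canvas.
canvas = Image.new("RGB", (still.width + 2 * pad, still.height), "black")
canvas.paste(still, (pad, 0))

# White = area for the model to fill in, black = keep the original pixels.
mask = Image.new("L", canvas.size, 255)
mask.paste(Image.new("L", still.size, 0), (pad, 0))

result = pipe(
    prompt="cinematic film still, same scene continuing to the left and right",
    image=canvas,
    mask_image=mask,
    width=canvas.width,      # keep the widened, film-like framing
    height=canvas.height,
    num_inference_steps=30,
).images[0]

result.save("outpainted_still.png")
```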

Images from an AI cinema still image series by Julie Wieland, generated with Midjourney v5 and refined with Photoshop.

Ars: Do you encounter any particular challenges with the tools or with prompting?

Wieland: I've never run into a challenge I couldn't solve. Being pretty fluent in photo editing, I always find a way to get the images to look the way I need or want them to. For me, it's become a tool, just like Photoshop, that speeds up my process and helps me realize my visions. I skip the image search on stock sites and have replaced that process with prompts. The results are usually better, more accurate, and unique.

Ars: What has the reaction been like to your art?

Wieland: Quite mixed, I would say. My Twitter following grew only because of posting AI content. On Instagram and TikTok, I haven't found "my crowd" just yet, and the content feels more like it's getting ignored or brushed over. Maybe that's because my following there is built more around graphic design and tutorials than photography or AI tools.

In the first months, I had a hard time seeing my content as "art." Coming from a designer's perspective, I approach my work in a really calculated way. But in 2023, I embraced the process of creating a bit more freely, and I'm also exploring fields in the industry other than just my day-to-day job in graphic design.

The community surrounding AI photography, AI cinema, and synthography has grown quite a bit over the past few weeks and months, and I really appreciate the positive feedback on Twitter. I also appreciate seeing others get inspired by my posts, and vice versa, of course.

Images from an AI cinema still image series called "when we all fall asleep, where do we go?" by Julie Wieland, generated with Midjourney v4 and refined with Photoshop.

Ars: What would you say to someone who might say you are not the artist behind your works because Midjourney creates them for you?

Wieland: The people who say Midjourney is just "writing three words and pushing a button" are the same ones who stand in front of a Rothko painting or Duchamp's readymades and go, "Well, I could've done this too." It's about the story you're telling, not the tool you're using.
