Magic to Pixels: AI is Rewriting the Rules of Video Creation

While video remains far and away the most engaging form of content for audience retention, both cost and technical skill requirements can be prohibitive, particularly for small businesses. In an ongoing effort to stay abreast of constantly evolving AI technology and tools, our team at Anchor Line is frequently analyzing how AI can help us optimize our video production and editing processes, making video creation more accessible to businesses of all sizes.

We’re all familiar with the impressive (and sometimes downright creepy) results that AI can generate when it comes to still images, but video generated by AI has, to date, not quite hit the mark. However, platforms like Runway and Pika, and companies like Meta and OpenAI, have made huge strides toward making high-quality, realistic video content a reality much sooner than expected.

Here are a few of my favorite AI tools we’ve been using at Anchor Line to generate full motion video:

#1: Stable Diffusion

There are many services online that offer different ways to interact with AI, whether through chatbots like ChatGPT or image diffusion like Midjourney. However, these all run on servers filled to the brim with powerful GPUs: you need the internet to access them, and sometimes they can be weirdly restrictive (looking at you, Google). These platforms are great for people who want access to the technology but don’t have a stack of dedicated graphics cards at their disposal. When you work for a video production company, though, you sometimes do conveniently have a stack of GPUs at the ready, normally there to speed up the tools we use every day like Adobe Creative Suite or Cinema 4D.

For those creating video at home, you do need an Nvidia GPU and a local installation of Stable Diffusion. There’s a model of Stable Diffusion that focuses on video called “Stable Video Diffusion,” and personally I found it worked great with the node-based workflow tool ComfyUI. The great thing about a node-based workflow for video is that I can feed in various styles to influence the output, or input wireframes of how I want a generated character to move. There’s clearly a massive amount of potential in these node sheets, but they can be pretty intimidating for new users and take some time to learn.

The cloud-based options for generating video are great in that they don’t require the know-how that local video generation does, but they can be quite restrictive. For example, they usually only allow generation via text or image input, and you can’t mix in further elements or influences.

Additionally, they’ll argue with you about what you’re asking for. Recently, I asked Google’s Gemini AI to create an image of a clown driving a small car for my son. (He has a weird fascination with clowns… don’t ask.) Gemini told me that an image of a clown might be scary for some, and determined that since I could potentially be trying to cause harm online, it would not produce the image. Kudos to Google for trying to keep the internet a safer place, but that’s an annoying judgment call! These kinds of restrictions and this questioning of my intent happen more often than I’d like, which is another reason I’ve always preferred using AI locally.

The other major downside to generating AI video locally is that if I’m looking for something very detailed, it often generates a sort of dreamy-looking mess. I’m confident this will continue to improve over time, but for now it means I predominantly use this local tool for generating motion design elements, backgrounds, and other less prominent assets as opposed to full video content.

#2: Runway and Pika

Runway and Pika are great options for anyone who wants to generate video without a powerful computer, or who may not fully understand how all of this works. You can provide a reference image and ask Runway or Pika to bring it to life, or provide a text description of what you’re looking for, and within a couple of minutes, in most cases, you have a video. Pika is something we’ve used quite a bit at Anchor Line for generating elements for our 3D scenes or as fun motion design backgrounds for kinetic text. Often I find the best results come from giving Pika an image to start with, and I’ve been very impressed with its fast, easy, no-nonsense approach to video generation.

If you need a bit more customization or flexibility in how you work, but really don’t want to learn all about those scary nodes of local generation, then Runway may be your cup of tea. Runway provides various tools such as StyleGAN, Pix2Pix, and BigGAN, all of which you can use to generate, edit, or remix videos. Have a background you want to remove but don’t want to roto it yourself? Runway does a pretty bang-up job. Have an existing video but want it to look like claymation? Runway is your tool. Runway can also do text-based generation like Pika can.

While all this sounds amazing on paper, these platforms still have the same limitations we often see with Stable Diffusion: a lack of clarity, or that “dreamy” effect I described above. Still, this is a technology that is rapidly evolving, and it’s clear the future is very bright for generative video.

In Conclusion

AI has come a long (long) way in generating still images since DALL-E was introduced back in 2021. In only a few short years, we’re already seeing images that blur the line between what is real and what is not. Without a doubt, it’s only a matter of time until video is in the same place. While I personally hope to see local options remain accessible, my hunch is that the biggest advances will happen with more closed models like OpenAI’s Sora or Google’s Imagen.

There are still (and will likely always be) ethical and legal concerns around AI, but the genie is out of the bottle with this technology, so we’ll continue to use it in ways that we feel are safe and responsible, and our clients will benefit from that. We remain confident that even the best AI tools available cannot replace human creativity, judgment, and overall responsibility.

I’m always down for chatting about AI. Have questions or thoughts?
Get in touch using the form below.