Runway vs Pika Labs — which is the best AI video tool?

There has been a flurry of new artificial intelligence video generators in the past few months. Even so, Runway’s Gen-2 and Pika Labs’ Pika 1.0 remain among the most rounded and high-profile, largely because they got there early and have kept innovating since.

In addition to impressive video generation, both offer fine-tuned control over the motion in a video and custom functionality such as Motion Brush in Runway or Modify Region in Pika Labs.

Both services have similar price points and commercial agreements. They also produce clips of around 2-3 seconds, offer the option to extend clips, and take text, image or video inputs when generating a new clip. I’ve put them head to head to see how they compare.

Comparing Runway vs Pika Labs

Comparing two AI video models involves picking a series of prompts and having both generate output from each one. I’ve tried to come up with a range of ideas testing camera motion, the motion of a single item and motion among more than one item.

With all of the tests there are ways to get better output through custom instructions, better prompting or the native features within each tool. As Runway and Pika Labs each take a slightly different approach, I made as few changes as possible.
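If you want to run a similar head-to-head yourself, the structure of the experiment is simple to script. Here is a minimal sketch in Python; the two generate_* functions are hypothetical placeholders standing in for however you submit a prompt to each service (web app, Discord bot or API), since neither tool's actual interface is shown here.

```python
# Minimal head-to-head harness for comparing two AI video generators.
# generate_with_runway and generate_with_pika are hypothetical
# placeholders, not real APIs: fill them in with however you submit
# prompts to each service.

PROMPTS = [
    "Drone footage flying over a forest fire. Photorealistic. "
    "Flames leaping towards the camera.",
    # ...add the rest of the test prompts here
]

def generate_with_runway(prompt: str) -> str:
    """Placeholder: submit the prompt to Runway Gen-2, return a clip path."""
    raise NotImplementedError

def generate_with_pika(prompt: str) -> str:
    """Placeholder: submit the prompt to Pika 1.0, return a clip path."""
    raise NotImplementedError

results = {}
for prompt in PROMPTS:
    results[prompt] = {
        "runway": generate_with_runway(prompt),
        "pika": generate_with_pika(prompt),
    }

# Judging stays manual: watch each pair of clips side by side and score them.
```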

Test 1: Drone footage over a forest fire

(Image: © Runway vs Pika Labs)

This was a simple text-to-video test on both models, using default settings and no custom camera motion. It presents a chance to test the raw output of the underlying model.

The prompt for this test was: “Drone footage flying over a forest fire. Photorealistic. Flames leaping towards the camera.” It tests several aspects of each model, including how it handles the camera motion from the drone, the movement of the flames and the visuals of trees and fire.

Runway was the clear winner of this test. Its flames were more natural and the result felt closest to a real video. Pika seemed to struggle with realistic flames, which threw the whole video off.

Test 2: A Yeti walking in snow

(Image: © Runway vs Pika Labs)

This was a chance to see how well each of the two AI video tools handled animated character movement within a scene. Again it was text-to-video with default settings, but with a 4-second extension to see how the models handle consistency.

The prompt was created by ChatGPT as I wanted to see how well one AI could instruct another, suggesting they visualize: “An imposing, majestic Yeti, towering at eight feet tall with a thick, shaggy coat of white fur that glistens in the sun, is captured mid-stride as it traverses a narrow, snow-covered mountain pass in the heart of the Himalayas during mid-winter.”
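If you want to script that prompt-writing step, here is a minimal sketch using OpenAI’s Python SDK. The model name and the instruction wording are my assumptions for illustration, not what was used for the article.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask one AI to write a prompt for another. The model name and the
# instruction wording are assumptions for illustration only.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write a single vivid text-to-video prompt describing a "
                   "Yeti walking through a snowy Himalayan mountain pass.",
    }],
)

video_prompt = response.choices[0].message.content
print(video_prompt)  # paste this into Runway and Pika Labs
```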

The overall scenery in the Runway creation was more photorealistic, but the character animation from Pika Labs was better. Overall, I think Pika Labs wins this round.

Test 3: A future city from an image

(Image: © Runway vs Pika Labs)

For this test, I pulled in an image as the prompt rather than using text alone. It was a text-and-image-to-video test, with all other settings kept on default. The image was generated in Night Cafe Studio using the SDXL 1.0 model from StabilityAI.
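If you’d rather generate a similar starting image locally than through a web service, here is a minimal sketch using Hugging Face’s diffusers library with the same SDXL 1.0 model. The prompt text is my own stand-in, as the article’s exact image prompt isn’t given.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load Stability AI's SDXL 1.0 base model from the Hugging Face Hub.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # assumes a CUDA-capable GPU

# Illustrative stand-in prompt; the article's exact image prompt isn't given.
image = pipe(
    prompt="A futuristic city of towering skyscrapers with holographic "
           "advertisements and flying vehicles, dusk light, photorealistic"
).images[0]
image.save("future_city.png")  # use as the image input for both video tools
```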

The prompt for the video: “The camera moves through the city, showcasing towering skyscrapers with holographic advertisements, flying vehicles zooming past, and a bustling, technologically advanced urban landscape.”

Neither delivered an outright victory in this round, but I’m giving it to Pika Labs as it produced animation closer to the description, and the point of this test was motion. It was close, though, as Runway generated a crisper video closer to the original image.

Test 4: Multiple characters in a frame

(Image: © Runway vs Pika Labs)

One of the areas all generative video models struggle with is multiple characters moving within a single view. For this experiment, I once again used both text and image but ramped up the motion level on both by two points. I didn’t change any other settings.

The prompt for both image and video: “Norman knights charge against Saxon shield wall. The camera pans over clashing swords and spears, focusing on William the Conqueror leading the charge, with Harold Godwinson defending.”

Neither won this round, and not because the output was similar: it was surprisingly different given both came from the same source image and text. The issue is that, as with all AI video models, neither coped well with multiple characters. Runway would have won if I had used its Multi Motion Brush feature, which lets you set motion by region.

Test 5: Fish swimming in a clear sea

(Image: © Runway vs Pika Labs)

This test was all about the text: a well-formed prompt designed to see how well each of the models handled complex, but less cluttered, motion against a simple environment.

The prompt: “A vibrant coral reef teeming with marine life. The scene is from the perspective of slowly gliding through the water, with colorful fish darting in and out of the coral, and a gentle current causing the sea plants to sway.”

I liked both approaches here, and they were distinct from one another. Both also did a great job on diversity of motion, with Pika Labs opting for a simpler scene with more varied movement and Runway adding considerably more fish.

It was a close call, but I gave it to Pika Labs because the Runway video was less consistent in its character motion. Some of the fish merged and others seemed to move backwards.

Test 6: Unique feature test

(Image: © Runway vs Pika Labs)

For the sixth test, I once again turned to image-to-video. Specifically, I had Leonardo.ai generate an image of an alien creature overlooking a vast expanse.

The goal of this test wasn’t so much to see how each tool handled the motion, as this type of image is normally relatively easy for AI video tools: minimal movement, a large background and few characters.

Instead, the test focused on the feature unique to each model, and which proved more useful. For Runway that was Multi Motion Brush; for Pika Labs, I turned to Modify Region.

While I think Modify Region is an impressive feature, especially if you only want to change a small aspect of a source image, Runway’s Multi Motion Brush is a game changer, and that is why I’ve given this round to Runway.

Test 7: Video-to-video

(Image: © Runway vs Pika Labs)

This final test covers the video-to-video features available in each model. Runway has a slight edge going in, as it offers a dedicated video-to-video model with a range of style options driven by a text, image or preset prompt.

Both tools work by overlaying the uploaded source video with the style from your prompt. For Pika Labs this is only available from a text prompt, although you could use Modify Region to change a specific part of the frame.

As I had the Yeti on my mind from an earlier test, I gave each the prompt: “Yeti as a YouTuber,” with a video of me talking to the camera as the source material. Neither did a great job so, to be fair to the test, I tried again with the prompt: “Futuristic android character as a YouTuber.”

I’ve used video-to-video models before that managed to make the mouth move in time with the words being spoken, but this time around Runway struggled with that. Visually, however, it fit the brief well and mirrored the movement of the source video.

Pika Labs failed to create a well-formed character, kept flipping me upside down and didn’t change the overall view by much. Runway was the easy winner in this round.

Runway vs Pika Labs: Winner

Neither model won outright across the board: Runway took three of the tests, Pika Labs took another three, and in one category neither model won.

However, the round they both lost would have gone to Runway under normal circumstances, thanks to its Multi Motion Brush. The issue there was that characters merged into each other, which the brush can solve by letting you define individual motion for each.

While I’m a big fan of Pika Labs and its approach, the final victory goes to Runway, and particularly its Gen-2 model, largely because of its work on creating new features and refining pre-generation settings.
