Video inputs

This section deals with video inputs from sources other than regular capture cards (which are typically known as “live inputs”, although they also carry video). The most obvious example would be a video file on disk (say, to play in a loop during pauses), but video inputs are quite flexible and can also be used for other things.

Before trying to use video inputs, you should read and understand how themes work in general (see The theme). You can get audio from video inputs like any other input. (Be advised, though, that making a general video player that can maintain A/V sync on all kinds of video files is a hard problem, so there may still be bugs in this support.)

If a file contains multiple video streams, like different angles or resolutions, only the first will be used. Similarly, only the first audio stream is used, and it’s always converted to 48 kHz stereo.

Basic video inputs

Adding video to an existing chain happens in two phases: first, the video must be loaded, giving it an identifier, and then that video can be used as an input in a chain, much like images or regular live inputs. Anything FFmpeg accepts, including network streams, can be used (probably even V4L input cards, although this is untested). Video hardware acceleration is used if available, although the decoded data currently takes a round trip through system memory.

When loading a video, you need to decide which format to use: Y’CbCr or BGRA. (Whichever you choose, if it doesn’t match the format the video is actually encoded in, FFmpeg will convert it on the CPU with no warning.) Most video is Y’CbCr, so this should be your default unless you know the video is encoded as RGB/BGR, and/or it has an alpha channel you want to use. Getting the format right makes for better efficiency; you not only save a conversion step on the CPU, but sometimes also on the GPU.

Videos are loaded like this:

local video = VideoInput.new("filename.mp4", Nageru.VIDEO_FORMAT_YCBCR)

or, for a network stream, perhaps:

local video = VideoInput.new("http://localhost/file.nut", Nageru.VIDEO_FORMAT_BGRA)

It can then be displayed on an input as usual:

input:display(video)
Note that interlaced video is currently not supported, not even with deinterlacing.

Videos play at their native frame rate on their own timer (generally the system clock in the computer), and loop when they get to the end or whenever an error occurs. If a video file is changed while Nageru is running, it will be reloaded (just like images) when playback reaches the end. Be aware, though, that unless you move the new file atomically into place (e.g., by renaming it on the same filesystem), you risk corrupting the file Nageru is playing from, causing it to rewind before the end of the segment.
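If the theme itself is responsible for swapping in a new file, the atomic replacement can be sketched like this (the filenames are just examples); Lua’s os.rename() maps to rename(2), which replaces the destination atomically as long as both paths are on the same filesystem:

```lua
-- Write or copy the new video to a temporary name on the same filesystem
-- first, then atomically move it into place. Nageru will pick up the new
-- file the next time the video loops.
os.rename("filename.mp4.new", "filename.mp4")
```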

Videos are assigned an arbitrary signal number when loaded. Whenever you need to refer to this signal number (say, to get its width or height for display), you should use video:get_signal_num(). Like any other signal, videos have a width and height, an interlaced flag (currently always false), a frame rate (which can vary during playback) and has_signal/is_connected member functions. The latter is always true, but the former will be false if the video isn’t currently playing for whatever reason (e.g., the file is corrupted, or a network stream is broken and hasn’t reconnected yet).
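For instance, a theme could switch to a different chain while the video has no signal. This is only a sketch; live_chain, fallback_chain and prepare are hypothetical objects set up elsewhere in the theme, and it assumes the signals object exposes the has_signal flag as get_has_signal():

```lua
function get_chain(num, t, width, height, signals)
	if signals:get_has_signal(video:get_signal_num()) then
		return live_chain, prepare  -- the video is playing; show it
	else
		return fallback_chain, prepare  -- e.g., a static “stream down” image
	end
end
```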

Controlling video playback

Themes have some programmatic control over video playback. In particular, if you want to make a video start from the beginning, you can do:

video:rewind()
which will instantly make it start from the first frame again. This can be useful if you e.g. want the video to start when you’re switching to it, or if you’re not really using it to loop (e.g. as a transition marker).

You can also change its rate, e.g. by:

video:change_rate(2.0)
This will make it play at twice its usual speed. The rate should be neither negative nor exactly zero. If you want to effectively stop the video, you can set the rate to e.g. 1e-6; once you change it back to normal speed, playback will resume with the next frame. Be aware that changing the rate may make the audio behave unpredictably; no attempt is made at time stretching or pitch adjustment.
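The near-zero trick could be wrapped in small helper functions like these (the function names are ours, not part of the API):

```lua
-- Effectively pause playback; exactly 0.0 is not allowed, so use a tiny rate.
function pause_video()
	video:change_rate(1e-6)
end

-- Resume normal-speed playback from the next frame.
function resume_video()
	video:change_rate(1.0)
end
```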

Finally, if you want to forcibly abort the playing of a video, even one that is blocking on I/O, you can use:

video:disconnect()
This is particularly useful when dealing with network streams, as FFmpeg does not always properly detect if the connection has been lost. See Theme menus for a way to expose such functionality to the operator.
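As a sketch of that idea, a theme menu entry could let the operator force a reconnect; this assumes the ThemeMenu.set() interface described in the Theme menus section:

```lua
ThemeMenu.set(
	{ "Reconnect stream", function()
		-- Abort the current playback; since videos loop on error,
		-- the input will then try to reconnect.
		video:disconnect()
	end }
)
```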

Ingesting subtitles

Video streams can contain separate subtitle tracks. This is particularly useful when using Nageru and Futatabi together (see Tally and status talkback).

To get the last subtitle given before the current video frame, call signals:get_last_subtitle(n) from get_chain, where n is the signal number of your video signal. It will return either nil, if there hasn’t been a subtitle, or else the raw subtitle. Note that if the video frame and the subtitle have the exact same timestamp, and the video frame is muxed before the subtitle packet, the subtitle will not make it in time. (Futatabi puts the subtitle slightly ahead of the video frame to avoid this.)
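A minimal sketch of reading the subtitle from get_chain (chain and prepare are hypothetical objects defined elsewhere in the theme):

```lua
function get_chain(num, t, width, height, signals)
	local subtitle = signals:get_last_subtitle(video:get_signal_num())
	if subtitle ~= nil then
		-- e.g., act on status information that Futatabi put in the subtitle
		print("Last subtitle: " .. subtitle)
	end
	return chain, prepare
end
```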