Nageru is designed to be as plug-and-play as possible, but by nature, a software video mixer requires a certain amount of hardware with associated drivers.
Nageru uses your computer’s graphics processing unit (GPU) extensively, through OpenGL. The use of the GPU is the reason why Nageru can deliver high-quality (as in e.g. gamma-correct fades and high-quality scaling) HD video without a monster CPU, but it also comes with certain caveats.
In particular, Nageru’s use of multithreaded OpenGL triggers bugs in some drivers, since most games access the GPU from only one thread; Mesa didn’t work properly at all before version 11.2, and there are still bugs left as of 13.0. However, in general, Intel GPUs from the Haswell generation and newer should work well with Nageru as long as you stick to 720p60 (i.e., no 1080i inputs, which require deinterlacing). NVIDIA’s proprietary drivers (occasionally known as nvidia-glx) are generally excellent and should give few issues in this regard.
If you see Nageru dying with a message about “GL error”, or segfaulting with the stack trace pointing into libGL.so, your first step should be to check that you have the latest drivers for your GPU.
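To see which OpenGL driver and version are actually in use, a quick sketch using the standard glxinfo tool (from the mesa-utils package on Debian/Ubuntu; the package name is an assumption for your distribution):

```shell
# Print the OpenGL vendor, renderer and version strings for the
# GPU/driver combination that Nageru would be using.
glxinfo | grep -E "OpenGL (vendor|renderer|version) string"
```

If the version shown is older than Mesa 11.2, upgrading the driver stack is the first thing to try.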
VA-API H.264 encoding (optional)
Even on modern networks and with today’s large SSDs, uncompressed HD video is a bit unwieldy to send around (uncompressed 720p60 4:2:0 is about 79 MB/sec, or 663 Mbit/sec). Nageru creates a high-bitrate H.264 stream of the finished output as a sort of “digital intermediate” that can much more easily be stored to disk (for future editing or re-streaming) or sent to an encoder on another machine for final streaming.
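The bandwidth figure follows directly from the frame geometry; a minimal sketch of the arithmetic, assuming 8-bit 4:2:0 (1.5 bytes per pixel, since Cb and Cr are stored at quarter resolution):

```shell
# Uncompressed 720p60 in 8-bit 4:2:0: 1280x720 pixels, 1.5 bytes/pixel,
# 60 frames per second.
bytes_per_frame=$(( 1280 * 720 * 3 / 2 ))
bytes_per_sec=$(( bytes_per_frame * 60 ))
echo "$(( bytes_per_sec / 1048576 )) MB/sec"        # -> 79 MB/sec (MiB-based)
echo "$(( bytes_per_sec * 8 / 1000000 )) Mbit/sec"  # -> 663 Mbit/sec
```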
Although you can (since Nageru 1.5.0) use software encoding through x264 for the digital intermediate, it is generally preferred to use a hardware encoder if it is available. Currently, VA-API is the only hardware encoding method supported for encoding the digital intermediate, although Nageru might support NVIDIA’s NVENC at some point in the future. In particular, this means that Intel Quick Sync Video (QSV), the hardware H.264 encoder present on all modern Intel GPUs, is supported.
QSV is more than fast enough to keep up with 720p60 in realtime without eating appreciably into the power budget, but it is not competitive with the top H.264 encoders in terms of quality per bit. Also, the stream is encoded using constant quality (fixed quantizer), not constant bitrate, which means the bitrate will vary strongly with content. (For practical material, the quantizer chosen by Nageru tends to yield around 25 Mbit/sec for 720p60, and the stream will be nearly visually lossless, so as to allow further editing or transcoding without strong generational loss.) Thus, the QSV stream is not intended for streaming to end users over the Internet; it will need to be reencoded by some external means, or you can use Nageru’s x264 support to produce a user-facing stream in addition to the digital intermediate (see Streaming and recording).
By default, Nageru uses zerocopy from the GPU to the VA-API buffers in order to reduce memory transfer bandwidth, but this depends on EGL support (as opposed to the older GLX standard), and on the GPU you are rendering to also supporting VA-API. NVIDIA’s proprietary drivers support neither. Unfortunately, this is somewhat cumbersome to detect automatically before it’s too late to do anything about it (Qt has already initialized using EGL), so on NVIDIA systems, Nageru will exit with an error message asking you to set --va-display to your Intel GPU manually. Simply follow the instructions printed to the terminal to select what looks like your Intel GPU, and Nageru will fall back to using GLX, transferring the memory data between the two GPUs via the CPU. (Some BIOSes automatically disable the Intel GPU if you have a discrete GPU installed; you will need to reenable it to get access to QSV, or Nageru can’t run.)
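On a machine with both an NVIDIA and an Intel GPU, the resulting invocation might look like the following sketch; the DRM render node path shown is an assumption, as the actual candidates are printed by Nageru’s error message:

```shell
# Point VA-API encoding at the Intel GPU's DRM render node (example path;
# use whichever node Nageru's terminal output identifies as the Intel GPU).
nageru --va-display /dev/dri/renderD128
```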
Video capture cards
If you do not have enough cards to satisfy your theme when you start up Nageru, fake cards will be instantiated. They produce a simple color (depending on the card) and no audio (unless you give the --fake-cards-audio command-line flag, in which case they will produce a tone). USB hotplug is supported; once you insert a new card, it will automatically be detected and take the place of one of the fake cards.
Currently, Nageru supports only Blackmagic’s capture cards; specifically, it does not support Video4Linux. This may change in the future if cards come along that significantly improve upon Blackmagic’s lineup in terms of features, price or stability. (Most other cards fail on all three counts.)
There are separate drivers for the USB and PCI cards. (Thunderbolt cards, although rare, count as PCI cards in this respect.) The USB cards are handled by a driver called bmusb that is built into Nageru; they require working USB3 on your machine, but nothing else. (Kernel versions prior to 4.6 are not recommended, though. If you get USB issues, upgrade your kernel.) The cards autodetect their input, but unfortunately have no 1080p60 support, which means that most laptops plugged in will default to 1080i60, which probably is not what you want. (In particular, the YADIF deinterlacer employed by Nageru puts a lot of strain on the GPU; too much for most Intel GPUs.)
The PCI cards (known as DeckLink) require Blackmagic’s proprietary driver (Desktop Video) installed and working. It is non-free and thus not included in most Linux distributions. However, the SDK is not needed for building Nageru; the required headers are free and included. Most of the PCI cards autodetect, but for some older versions, you will need to right-click on the input to set the right mode.
Video format conversion
If you have an input source with a different resolution than the native mode (720p by default, but you can change this using the -w and -h command line parameters), Nageru will scale transparently for you using a Lanczos3 filter (or rather, the theme will). This requires some extra GPU power, so if you can avoid it, use the native mode. Similarly, if you connect an interlaced input, Nageru will automatically deinterlace for you.
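For instance, if all of your sources are 1080p, it may be cheaper to change the native mode than to scale every input; a sketch of such an invocation, using the -w and -h parameters mentioned above:

```shell
# Run Nageru with a 1080p native mode instead of the default 720p,
# so 1080p inputs need no scaling (at the cost of more GPU work overall).
nageru -w 1920 -h 1080
```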
Frame rates are automatically converted; one input is designated as the master clock (right-click on an input to select it as such), and gets to dictate the frame rate of the output. Inputs with differing frame rates will get frames duplicated or dropped as needed (with adaptive queuing to account for clock drift and jitter).
Nageru works in 16-bit floating-point RGBA internally. High-quality conversion to and from subsampled Y’CbCr (typically 4:2:2 for inputs and 4:2:0 for outputs) is done transparently on the GPU. Input and output are 8-bit Y’CbCr by default, but be aware that 8-bit Y’CbCr, however common, cannot capture the full color fidelity of 8-bit RGB (not to mention 10-bit RGB). If you have spare GPU power, you can enable 10-bit Y’CbCr input and output with --10-bit-input and --10-bit-output, respectively, although you should be aware that client support for 10-bit H.264 is very limited. Also, Quick Sync Video does not support 10-bit H.264 encoding, so in this case, the digital intermediate needs to be encoded in software.
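A sketch of enabling the 10-bit pipeline, using the two flags described above (remember that the digital intermediate will then be software-encoded, since QSV cannot produce 10-bit H.264):

```shell
# Enable 10-bit Y'CbCr on both input and output; requires spare GPU power
# and an audience whose players can decode 10-bit H.264.
nageru --10-bit-input --10-bit-output
```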
It is strongly recommended to have the rights to run at real-time priority; this lets the USB3 threads do so, which makes them a lot more stable. (A reasonable hack for testing is to run Nageru as root using sudo, but in production, you should instead grant your regular user the permission in /etc/security/limits.conf.) Note also that if you are running a desktop compositor, it will steal significant amounts of GPU performance. The same goes for PulseAudio.
Nageru tries to lock itself into RAM if it has the permissions to do so, for better realtime behavior. (Writing the stream to disk tends to fill the buffer cache, eventually paging less-used parts of Nageru out.) Again, this is something you can set in limits.conf.
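A minimal limits.conf sketch covering both the real-time priority and memory locking permissions discussed above; the username and the rtprio value are assumptions to adjust for your setup:

```
# /etc/security/limits.conf (example entries; "videouser" is hypothetical)
videouser  -  rtprio   95
videouser  -  memlock  unlimited
```

You will need to log out and back in (or restart the session) for pam_limits to pick up the new values.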