Streaming and recording

One of the primary purposes of Nageru is to prepare a stream that is suitable for live broadcast. However, Nageru itself is not suitable for serving that stream to end users directly; other software is much better suited for that task.

Depending on the amount of CPU power and bandwidth on your production machine (the one you run Nageru on) and any other machines you may have available, you can choose between two different approaches to streaming: transcoded or direct.

Transcoded streaming

Transcoded streaming is in many ways the conceptually simplest from Nageru’s point of view. In this mode, Nageru outputs its “digital intermediate” H.264 stream (see VA-API H.264 encoding (optional)), and you are responsible for transcoding it into a format that is suitable for streaming to clients. Thus, you can run Nageru on a regular laptop and then use e.g. a short-term virtual machine in the cloud to do the heavy lifting for video compression.

Nageru has a built-in HTTP server that listens on port 9095. You can get the intermediate stream by requesting any filename; by default, the NUT mux is used, so e.g. “http://yourserver.example.org:9095/stream.nut” would be appropriate. The NUT mux is chosen because it is among the few that can easily carry uncompressed audio. It is private to FFmpeg, and thus understood by almost everything in the free software world, but perhaps not by all other services. You can change the mux with the “--http-mux” parameter, although you might then also want to use “--http-audio-codec” and “--http-audio-bitrate” to change the audio codec to something your mux can transport (see below for more information about audio transcoding).
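As a quick sanity check, you can inspect or play the intermediate stream with the FFmpeg tools (the hostname below is a placeholder for your own production machine):

```shell
# List the streams carried by the NUT mux (video, uncompressed audio):
ffprobe http://yourserver.example.org:9095/stream.nut

# ...or watch the intermediate stream directly:
ffplay http://yourserver.example.org:9095/stream.nut
```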

The stream can be transcoded by a number of programs, such as VLC, but Nageru also has its own transcoder called Kaeru, named after the Japanese verb kaeru (換える), meaning roughly to replace or exchange. Kaeru is a command-line tool that is designed to transcode Nageru’s streams. Since it reuses Nageru’s decoding and encoding code, it can do almost everything you can do with direct encoding, including x264 speed control and Metacube output (see the section on Cubemap integration below).

Using Kaeru is similar to launching Nageru, e.g. to rescale a stream to 848x480 and output it to a 1.5 Mbit/sec H.264 stream suitable for most browsers in a <video> tag:

./kaeru -w 848 -h 480 --http-mux mp4 --http-audio-codec aac --http-audio-bitrate 128 \
  --x264-bitrate 1500 http://yourserver.example.org:9095/stream.nut

1.5 Mbit/sec is in the lower end of the spectrum for good 720p60 conference video (most TV channels use 12-15 Mbit/sec for the same format).

Another variation popular today is to stream using segmented HTTP; you can use e.g. the ffmpeg command-line tool to segment the MP4 stream created by Kaeru into an HLS stream that will be accepted by most smartphones:

ffmpeg -i http://127.0.0.1:1994/ -codec copy -f hls \
  -hls_time 2 -hls_wrap 100 -bsf:v h264_mp4toannexb \
  -hls_segment_filename $NAME-hls-%03d.ts stream.m3u8

Or, of course, you can use FFmpeg to do the transcoding if you wish.
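A minimal sketch of such an FFmpeg-only transcode, pulling the intermediate stream directly from Nageru (hostname, resolution and bitrates are placeholders, mirroring the Kaeru example above):

```shell
# Pull Nageru's NUT intermediate, rescale, and encode to H.264/AAC in MP4:
ffmpeg -i http://yourserver.example.org:9095/stream.nut \
  -vf scale=848:480 -c:v libx264 -preset veryfast -b:v 1500k \
  -c:a aac -b:a 128k out.mp4
```

Note that unlike Kaeru, this gives you neither x264 speed control nor Metacube output.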

Direct encoding

If you do not wish to run an external transcoder, Nageru can encode high-quality H.264 video directly using x264. This requires much more CPU power (a fast quad-core is recommended for a 720p60 stream), since it does not have any assistance from the GPU, but produces significantly better quality per bit, and also has much better bitrate control. Even if you use x264, the stream stored to disk is still the full-quality VA-API stream.

Using Nageru’s built-in x264 support is strongly preferable to running an external transcoder on the same machine, since it saves one H.264 decoding step, and also uses speed control. Speed control automatically turns x264’s quality setting up and down to use up all remaining CPU power after Nageru itself has taken what it needs (but no more); it can use a range of settings from about x264’s “superfast” preset to about the equivalent of “veryslow”. Note that this comes at the cost of about one second of extra delay.

Built-in x264 is also useful if you don’t have a lot of bandwidth to your external encoding or distribution machine. You can, however, still transcode externally to get e.g. a lower-resolution stream for low-bandwidth users, or segmented HLS; see the previous section for examples.

The built-in x264 encoder is activated using the “--http-x264-video” flag; e.g.:

./nageru --http-x264-video --x264-preset veryfast --x264-tune film \
  --http-mux mp4 --http-audio-codec aac --http-audio-bitrate 128

Note the use here of the MP4 mux and AAC audio. For speed control, you can use:

./nageru --x264-speedcontrol --x264-tune film \
  --http-mux mp4 --http-audio-codec libfdk_aac --http-audio-bitrate 128

There are many more parameters, in particular “--x264-bitrate” to control the nominal bitrate (4500 kbit/sec by default, plus audio). Most of them are usually fine at their defaults, though. Note that you can change the x264 bitrate on the fly from the Video menu; this is primarily useful if your network conditions change abruptly.

Cubemap integration

Even with built-in x264 support, Nageru is not particularly efficient for delivering streams to end users. For this, a natural choice is Cubemap; Cubemap scales without problems to multiple 10 Gbit/sec NICs on a quite normal machine, and you can easily add multiple Cubemap servers if needed. Nageru has native support for Cubemap’s Metacube2 transport encoding; to use it, add “.metacube” to the end of the URL, e.g. with a cubemap.config fragment like this:

stream /stream.mp4 src=http://yourserver.example.org:9095/stream.mp4.metacube pacing_rate_kbit=3000 force_prebuffer=1500000

Note that you will want a pacing rate of about 2:1 relative to your real average bitrate, in order to provide some headroom for temporary spikes (the default allows spikes of 2x the nominal bitrate, but only on a one-second basis) and TCP retransmits. See the Cubemap documentation for more information about how to set up pacing.

Single-camera stream

In addition to the regular mixed stream, you can siphon out MJPEG streams consisting of a single camera only. This is useful either for running a very cheap secondary stream (say, a static overview camera that you would like to show on a separate screen somewhere), or for simple monitoring during debugging.

The URL for said stream is “http://yourserver.example.org:9095/feeds/N.mp4”, where N is the card index (starting from zero). The feed is in MJPEG format and MP4 mux, regardless of other settings, just like the multicamera mux for Futatabi. (You are allowed to use a card that is not part of the multicamera mux, if you have limited the number of such cards.) For more technical details, see Internal video format details. Kaeru can transcode such streams to a more manageable bitrate/format, if you wish.
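For instance, a hedged sketch of such a transcode, using Kaeru with flags mirroring the earlier example (card index and output settings are placeholders; the MJPEG feed may not carry audio, so no audio flags are given):

```shell
# Transcode the single-camera MJPEG feed from card 0 into a
# browser-friendly 1.5 Mbit/sec H.264 stream in an MP4 mux:
./kaeru -w 848 -h 480 --http-mux mp4 --x264-bitrate 1500 \
  http://yourserver.example.org:9095/feeds/0.mp4
```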