mjpeg-howto: SMP and distributed Encoding

 
 11 SMP and distributed Encoding
 *******************************
 
 The degree to which mpeg2enc tries to split work between concurrently
 executing threads is controlled by the -M or --multi-thread [0..32]
 option, which optimizes mpeg2enc for the specified number of CPUs. By
 default (-M 1), mpeg2enc runs with just a little multi-threading:
 frames are read concurrently with compression. This allows encoding
 pipelines that are split across several machines (see below) to work
 efficiently without the need for special buffering programs.  If you
 are encoding on a single-CPU machine where RAM is tight, you may find
 that turning off multi-threading altogether with -M 0 works slightly
 more efficiently.
 
    For SMP machines with two or more processors you can speed up
 mpeg2enc by setting the number of concurrently executing encoding
 threads you wish to utilize (e.g. -M 2). Setting -M 2 or -M 3 on a
 2-way machine should speed up encoding by around 80%.  Values above 3
 are accepted but have very little effect even on 4-CPU systems.
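    As a sketch of the settings above (hypothetical filenames; the
 guard lets the snippet exit cleanly on machines where mjpegtools is
 not installed):

```shell
# Hypothetical single-machine encodes illustrating the -M settings.
if command -v mpeg2enc >/dev/null 2>&1; then
    # Single-CPU machine with tight RAM: no multi-threading at all.
    lav2yuv capture.eli | mpeg2enc -M 0 -o capture_1cpu.m2v
    # 2-way SMP machine: two encoder threads for roughly 80% faster encoding.
    lav2yuv capture.eli | mpeg2enc -M 2 -o capture_smp.m2v
else
    echo "mjpegtools not installed; commands shown for illustration" >&2
fi
```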
 
    If you have a really fast SMP machine (as of August 2003, something
 like a dual Athlon MP 2600), then even -M 2 plus the filtering stages
 might not keep both (or all) CPUs busy. Inserting the buffer or bfr
 program with a 10-20MB buffer between the filtering and encoding
 stages helps to keep both CPUs busy.
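    For instance (hypothetical filenames; this assumes buffer's -m
 size flag, while bfr takes different options), the buffer sits between
 the filters and the encoder:

```shell
# Decouple filtering from encoding with a ~16MB in-memory buffer so
# that neither side stalls waiting for the other. Guarded so the
# snippet exits cleanly where the tools are absent.
if command -v mpeg2enc >/dev/null 2>&1 && command -v buffer >/dev/null 2>&1; then
    lav2yuv capture.eli | yuvdenoise \
        | buffer -m 16m \
        | mpeg2enc -f 4 -M 2 -o capture.m2v
else
    echo "mjpegtools and/or buffer not installed; shown for illustration" >&2
fi
```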
 
    Obviously, if your encoding pipeline contains several filtering
 stages it is likely that you can keep two or more CPUs busy
 simultaneously even without using -M. Denoising using yuvdenoise or
 yuvmedianfilter is particularly demanding and uses almost as much
 processing power as MPEG encoding.
 
    If you have more than one computer you can also split the encoding
 pipeline between computers using the standard 'rsh' or 'rcmd' remote
 shell execution commands. For example, if you have two computers:

    `> rsh machine1 "lav2yuv mycapture.eli | yuvscaler -O SVCD |
 yuvdenoise" | mpeg2enc -f 4 -o mycapture.m2v'
 
    Here the computer where you execute the command is doing the MPEG
 encoding and "machine1" is the machine that is decoding, scaling, and
 denoising the captured video.
 
    Obviously, for this to work "machine1" has to be able to access the
 video and the computer where the command is executed has to have space
 for the encoded video. In practice, it is usually well worth setting up
 network file-storage using "NFS" or other packages if you are going to
 do stuff like this.  If you have three computers you can take this a
 stage further, one computer could do the decoding and scaling, the next
 could do denoising and the third could do MPEG encoding:
 
    `> rsh machine1 "lav2yuv mycapture.eli | yuvscaler -O SVCD" |
 yuvdenoise | rsh machine3 mpeg2enc -f 4 -o mycapture.m2v'
 
    `NOTE:' Observe how the remote command executions are set up so
 that the data is sent directly from the machine that produces it to
 the machine that consumes it.
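    A minimal NFS setup for sharing the capture files might look like
 the following (hypothetical paths and subnet; assumes a standard Linux
 NFS server running on machine1):

```shell
# /etc/exports on machine1: export the capture directory read-only
# to the local subnet (hypothetical path and address range):
#
#   /video  192.168.0.0/24(ro,no_subtree_check)
#
# On the encoding machine, mount it:
#
#   mount -t nfs machine1:/video /mnt/video
```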
 
    In practice, for this to be worthwhile the network you are using
 must be fast enough to avoid becoming a bottleneck. For Pentium-III
 class machines or above you will need 100Mbps Ethernet.
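    A back-of-the-envelope calculation shows why (assuming the stream
 crossing the network is PAL SVCD-sized raw YUV 4:2:0 video, i.e.
 480x576 pixels at 25 frames/sec with 1.5 bytes per pixel):

```shell
# Raw YUV 4:2:0 bandwidth for a PAL SVCD-sized stream:
# 480x576 pixels, 1.5 bytes/pixel, 25 frames/sec.
bytes_per_frame=$((480 * 576 * 3 / 2))      # 414720 bytes
bytes_per_sec=$((bytes_per_frame * 25))     # 10368000 bytes/sec
mbps=$((bytes_per_sec * 8 / 1000000))       # ~82 Mbps
echo "raw stream needs about ${mbps} Mbps"
```

 At roughly 82Mbps a single raw stream already comes close to
 saturating 100Mbps Ethernet, which is why a switched network (or
 better) matters as soon as more than one stream crosses the wire.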
 
    For really fast machines a switched 100Mbps Ethernet (or better!)
 may be needed.

    Setting up the rshd ("Remote Shell Daemon") that rsh needs to do
 its work, and configuring rsh itself, is beyond the scope of this
 document, but it is a standard package and should be easy to install
 and activate on any Linux or BSD distribution.
 
    Be aware that this is potentially a security issue so use with care
 on machines that are visible to outside networks!