OpenVCX
    
Open Source Video Conferencing and Streaming Server

OpenVSX v1.8.0 User Guide

Table Of Contents

1 - INTRODUCTION
2 - SOFTWARE PACKAGE CONTENTS
3 - COMMAND LINE REFERENCE
    3.1 - GENERAL PURPOSE COMMAND LINE ARGUMENTS
    3.2 - CONTROLLING STREAM OUTPUT
        3.2.1 - STREAMING MEDIA OUTPUT
        3.2.2 - DYNAMIC UPDATE / CONFIG SERVER
        3.2.3 - MPEG-DASH
        3.2.4 - DTLS
        3.2.5 - FLV STREAMING
        3.2.6 - HTTPLIVE STREAMING
        3.2.7 - AUTO-FORMAT ADAPTATION SERVER
        3.2.8 - MATROSKA / WEBM
        3.2.9 - RTP / SRTP
        3.2.10 - PICTURE IN PICTURE (PIP)
        3.2.11 - STUN
        3.2.12 - RTMP
        3.2.13 - RTSP
        3.2.14 - MPEG-2 TS
    3.3 - TRANSCODING
        3.3.1 - TRANSCODING CONFIGURATION
        3.3.2 - ENCODING MULTIPLE OUTPUTS
    3.4 - LIVE INPUT CAPTURE
    3.5 - VIDEO CONFERENCING
    3.6 - COMMAND LINE EXAMPLES
        3.6.1 - STREAMING EXAMPLES
        3.6.2 - VIDEO CONFERENCE EXAMPLES
        3.6.3 - MISCELLANEOUS EXAMPLES
4 - STREAMING TRANSPORT PROTOCOLS
    4.1 - RTP
    4.2 - RTSP
    4.3 - RTMP
    4.4 - FLV ENCAPSULATION OVER HTTP
    4.5 - MATROSKA / WEBM
    4.6 - MPEG-DASH
    4.7 - MPEG-2 TS OVER HTTP
    4.8 - HTTPLIVE STREAMING
5 - STREAMING WEB SERVER
    5.1 - VIRTUAL URIS
6 - TROUBLESHOOTING

 

1. Introduction

OpenVSX Video Streaming Processor is a real-time video streaming server with capabilities to stream live content, record, and host a complete media library. OpenVSX can be used as part of a live broadcasting system, cloud based media distribution infrastructure, video conferencing server, or directly embedded in a set-top box or mobile device.

OpenVSX can be used in the following modes of operation:

OpenVCX (Video Conferencing Exchange). OpenVCX is a SIP (Session Initiation Protocol) based video conferencing server for Video VoIP (Voice over IP). OpenVCX leverages OpenVSX to provide all media related functionality. OpenVCX is packaged together with the OpenVSX download bundle.

OpenVSX Web Portal (Media Web Portal). OpenVSX Web Portal is a web portal front-end for hosting live video feeds and stored media content. OpenVSX Web Portal selectively launches OpenVSX media processes to handle transcoding and format adaptation requests based on client device type identification. The OpenVSX Web Portal is launched by invoking bin/startvsxwp.sh

Scripting Mode. OpenVSX is started from the command line to process a single streaming task such as transcoding and reformatting one or more live video streams. OpenVSX can be launched by editing bin/examplestart.sh

OpenVSX is also available as a software library for direct integration into a third-party application.

 

2. Software Package Contents

The OpenVSX download bundle contains several components. The key components are listed below.

bin/vsx - OpenVSX wrapper startup script

bin/vsxbin - OpenVSX binary executable program

bin/vsx-wpbin - OpenVSX Web Portal. vsxbin is used to service client media requests and can selectively launch and control OpenVSX child processes.

bin/startvsx.sh - Script used to start / stop the OpenVSX Web Portal Service.

bin/examplestart.sh - Example script used to show how to create an OpenVSX commandline for streaming.

lib/libvsx.so - OpenVSX shared library.

lib/libxcode.so - Transcoder interface library.

etc/vsx.conf - OpenVSX Configuration file.

etc/vsxwp.conf - OpenVSX Web Portal configuration file.

etc/devices.conf - Basic device profile configuration for client lookup based on User-Agent.

etc/pip.conf - Example PIP (Picture In Picture) Configuration script.

etc/xcode.conf - Example transcoder configuration file.

 

3. Command Line Reference

An OpenVSX command line can be constructed using multiple options controlling program behavior. The syntax used here is as follows.

--option_name    The option requires no argument.

--option_name[=]   The option takes an optional argument. For example, --conf or --conf=etc/vsx.conf.

--option_name=   The option requires a mandatory argument. For example, --delay=1.5.

--option_name, -o  The given option syntaxes are synonyms. For example, --verbose or -v.
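
A hypothetical invocation combining these syntax forms (the paths and values below are placeholders only):

```sh
# --conf takes an optional argument, --delay= a mandatory one, and -v none.
bin/vsx --conf=etc/vsx.conf --delay=1.5 -v
```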

 

 

3.1 General Purpose Command Line Arguments

General Purpose Command Line Arguments (-h general)


  --conf[=]

    Configuration input file path. If no file path is given the path etc/vsx.conf is used. The configuration file can contain parameters which may not be configurable using the command line.

  --log[=]

    An optional log file path. The log file can also be specified in the configuration file. If the path is omitted, the default is log/vsx.log. If the parameter is omitted entirely, all log output is sent to stdout.

  --logfilemaxsize=

    Maximum size of each log file in KB before it is rolled over. The default maximum size of a single log file is 1024KB. The number of rolled-over log files retained can be controlled in the etc/vsx.conf configuration file. By default, up to 5 historical log files are retained. For example, --logfilemaxsize=2MB or --logfilemaxsize=2048KB.

  --httplog

    Enable HTTP request logging to log/access.log.

  --pid=

    Output a PID file to the specified path.

  --verbose, -v

    Enable verbose logging. Each additional instance of -v increases the verbosity level. For example, -vv is equivalent to --verbose=3, which increases the log verbosity from the default value of 1.
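
    The general purpose arguments above can be combined as in the following sketch (the file paths are illustrative; streaming arguments are covered in section 3.2):

```sh
# Hypothetical sketch: log to a file rolled over at 2048KB, write a PID file,
# and raise the log verbosity with -vv.
bin/vsx --conf=etc/vsx.conf --log=log/vsx.log --logfilemaxsize=2048KB \
        --pid=/var/run/vsx.pid -vv
```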

 

3.2 Controlling Stream Output

 

    3.2.1 Arguments Controlling Streaming Media Output

    --stream, -s

      Enable stream output mode.

    --abrauto=

      Enables dynamic Adaptive Bitrate Control (ABR). If enabled, encoder output settings such as bitrate and framerate may be adjusted automatically by the streaming server. Adjustment may be upward or downward depending on the network conditions reported by the RTP receiver. Monitored receiver-reported network metrics include RTCP NACKs, RTP loss rate, and RTP latency. When ABR is enabled, the encoder configuration should define maximum and minimum thresholds for bitrate and/or framerate. ABR is disabled by default (--abrauto=0).

      ABR is available only for the first output video stream and not for multiple output destinations, audio streams, or RTSP streaming recipients.

    --auth-basic

      Permit HTTP Basic authentication to be used when authenticating a remote HTTP or RTSP endpoint. If not explicitly enabled, only HTTP Digest authentication is used.

    --auth-basic-force

      Force HTTP Basic authentication instead of HTTP Digest authentication.

    --avsync=

      Audio / Video sync base adjustment (positive or negative) in seconds expressed as a float. This argument is valid when processing both audio and video elementary streams. The audio video sync value is added to audio timestamps. A positive value will delay the audio with respect to the video.

      When using multiple parallel output encodings, each output stream can be given its own unique a/v offset by suffixing this argument with the output encoding index. For example, to assign an a/v offset to stream output 2 use --avsync2=0. Any index-specific value takes precedence over the base --avsync= value.

    --delay=

      Delay buffering factor applied to audio and video stream output. The value should be expressed as a positive float in seconds. This option is useful when processing live input which needs to be buffered to accommodate poor input network conditions.

    --duration=

      Maximum media streaming duration. Value should be expressed as a positive float in seconds.

    --firaccept[=]

      Controls how Full Intra Refresh (FIR) requests are handled by the application. The following behaviors are simultaneously affected by this parameter:

      Controls the reception of RTCP Feedback type Full Intra Refresh messages (RTCP FB FIR), as defined in RFC 5104. The default value is 1, RTCP FB FIR request handling enabled. If set to 0, RTCP FB FIR requests coming from the receiver of the RTP output stream will be ignored. This value can be individually controlled by the configuration file parameter FIRRTCPInputHandler.

      Controls the behavior of IDR request messages for the local encoder. In addition to FIR messages received from RTCP, IDR requests can be internally generated when a client connects to a server published instance of the media output. An example is a connection to the HTTP URLs /tslive, /flvlive, /mkvlive, the RTSP, or the RTMP server listener. The default value is 1, which enables an IDR request to the local encoder if a key-frame is needed to begin the format specific output. If set to 0, no internal IDR request will be dispatched to the local encoder. This value has no effect if the local output is not transcoded, if the configuration file parameter FIREncoder is disabled, or if the encoder does not support IDR requests. This value also does not affect the behavior of the videoForceIDR transcoding configuration parameter. This value can be individually controlled by the configuration file parameter FIREncoderFromRemoteConnect.
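
      The three FIR-related configuration file parameters named above could appear in etc/vsx.conf along the following lines. This is a sketch only; the key=value syntax shown here is an assumption, and only the parameter names come from the text above.

```
FIRRTCPInputHandler=1          # honor RTCP FB FIR requests from RTP receivers
FIREncoder=1                   # allow IDR requests to the local encoder
FIREncoderFromRemoteConnect=1  # request an IDR when a remote client connects
```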

    --framethin[=]

      Enables the frame thinning algorithm which may dynamically reduce output bandwidth during network congestion. When enabled, frame thinning is applied to any client connection which uses either FLV, MKV, or RTMP format encapsulation. Frame thinning selectively eliminates non-key video frames from the data stream thus reducing the output bitrate. This is done automatically upon detecting that the client is failing to retrieve the media contents in real-time. This behavior can result in choppy video playback while maintaining continuous audio playback. Default value is 0 (frame thinning disabled).

    --in=

      Input media file path, SDP file, or capture device of the input media to be processed. If using separate video and audio devices, the video device should be placed in the first index. Up to two devices are permitted.

      Examples include:

      Read video and audio media from a container file stored on disk, such as .mp4, .3gp, .flv, .m2t.

      --in=inputfile.mp4

      Read an image format file such as a .png or .bmp

      --in=inputfile.png

      Read and process an input playlist, such as .m3u

      --in=inputplaylist.m3u

      Read and process an input SDP (Session Description) file. The capture session will be set up to read from live capture according to the SDP syntax.

      --in=input.sdp

      It is recommended that an SDP file be created for any input which uses RTP.

      Read and process a pcap file

      --in=pcapfile.pcap

      Capture packets directly from a local interface

      --in=eth0

      Capture packets by reading directly from a block device, such as a video frame buffer, or audio samples buffer.

      --in=/dev/framebuf0

      Capture packets by reading directly from a dummy video frame buffer. This specific device expects vsxlib_onVidFrame to be called to provide video input frames according to the input filter configuration.

      Capture packets by reading directly from a dummy audio frame buffer. This specific device expects vsxlib_onAudSamples to be called to provide audio input samples according to the input filter configuration.

      --in=/dev/dummyvideo

      The --capture argument should be used for live input stream capture.

      Set up a live MPEG-2 TS RTP listener on port 10000.

      --capture=rtp://0.0.0.0:10000 --filter="type=m2t"

      This is the same as --capture --in=rtp://10000 --filter="type=m2t". Input capture defaults to native packetization format if no --filter argument is specified.

      Set up a live RTP video listener on port 10000 and audio listener on port 10002. The video port should always be specified first.

      --capture=rtp://0.0.0.0:10000,10002

      Set up a live MPEG-2 TS UDP listener on port 41394 bound to the loopback interface only.

      --capture=udp://127.0.0.1:41394 --filter="type=m2t"

      Request a remote MPEG-2 TS encapsulated stream via HTTP on port 8080.

      --capture=http://10.10.10.10:8080/tslive

      To load the media resource via SSL/TLS on port 8443 use --capture=https://10.10.10.10:8443/tslive

      To specify HTTP Digest authentication credentials with the username myuser and password mypassword use --capture=https://myuser:mypassword@10.10.10.10:8443/tslive

      Request a remote FLV encapsulated stream via HTTPS SSL/TLS on port 8443 which requires HTTP Digest authentication credentials.

      --capture=https://myuser:mypassword@10.10.10.10:8443/flvlive --filter="type=flv"

      Request a remote HTTPLive stream via HTTPS SSL/TLS on port 8443 which requires HTTP Digest authentication credentials. The following example loads the HTTPLive playlist.

      --capture=https://myuser:mypassword@10.10.10.10:8443/httplive/out.m3u8

      Request a remote RTMP stream on port 1935.

      --capture=rtmp://10.10.10.10:1935/app_name/stream-name

      Set up an RTMP server listener on the local interface port 1935 to accept a remote RTMP stream publisher. An RTMP publisher such as Flash Media Encoder can be used to provide a live input stream.

      --capture=rtmp://username:password@0.0.0.0:1935

      Request a remote RTSP stream on port 554.

      --capture=rtsp://10.10.10.10:554/stream-name

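
      Combining the capture examples above with an output server listener (covered later in this chapter), a complete invocation might look like the following sketch:

```sh
# Hypothetical sketch: capture a live MPEG-2 TS stream over RTP on port 10000
# and re-serve it as FLV over HTTP on port 8080 (see --flvlive in 3.2.5).
bin/vsx --capture=rtp://0.0.0.0:10000 --filter="type=m2t" \
        --flvlive=http://0.0.0.0:8080
```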

    --loop

      Enable looping of the input file.

    --localhost=

      IP address or hostname which will be used in place of the default IP address obtained from the local system network interface. This value may be used to override the default IP address for SDP, RTP, or RTSP based streaming output. It may be necessary to explicitly set this value to a global machine IP address if the system is used to bridge a private and public network.

    --noaudio

      Disable output of any audio elementary stream.

    --novideo

      Disable output of any video elementary stream.

    --overwrite

      Enable overwriting of any recording output file that already exists. If this option is not set, recording will abort if the output filename already exists.

    --start=

      Input file streaming start offset in seconds expressed as a float.

    --statusmax=

      Maximum number of simultaneous HTTP status connections. The default value is 0 (status server disabled). The status server is used by the OpenVSX-WP (Web Portal) to interrogate OpenVSX Streaming Processor processes for real-time statistics. If enabled, the status server is available at the /status URL on the default HTTP port.

    --streamstats[=]

      Output file path of the stream output statistics file. Statistics include the overall throughput, average burst rate over the past 2 seconds, and past 8 seconds. RTP stream output will contain any RTCP Receiver Report metrics such as reported packet loss. TCP stream output will contain the current state of the output buffer. The value stdout or stderr can be used instead of an output file path. If no file path is given stdout is used.

      By default, stream statistics are printed every 4 seconds. This interval can be adjusted by modifying the parameter outputStatisticsIntervalMs in the configuration file loaded via the --conf= command line parameter.

      Stream statistics are also available for output via the status HTTP server as URI key value pair format parameters. The URL /status?streamstats can be used to access the stream statistics.
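
      For example, once the status server is enabled via --statusmax, the statistics could be polled with any HTTP client (the host and port below are placeholders):

```sh
curl "http://127.0.0.1:8080/status?streamstats"
```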


     
    3.2.2 Arguments Controlling Dynamic Update / Config Server

    --configsrv[=]

      HTTP Dynamic configuration update interface server listening address and port string delimited by a colon. An HTTP client can perform dynamic configuration updates by accessing the server at http[s]://[username:password@][listener-address]:[port]/config

      To enable a listener on port 8080 use --configsrv=8080 or --configsrv=http://0.0.0.0:8080

      To enable a listener on localhost use --configsrv=http://127.0.0.1:8080

      To enable an SSL/TLS listener use https://

      To enable HTTP Digest authentication credentials with the username myuser and password mypassword use --configsrv=https://myuser:mypassword@0.0.0.0:8443

      If the --configsrv parameter is present with no argument then the server listener will use the properties of the previous HTTP server listener present on the command line, or 0.0.0.0:8080 if it is the first HTTP server parameter.

    --configmax=

      Maximum number of simultaneous HTTP dynamic configuration update connections. The default value is 0 (config server disabled). The config server interface can be used to update the running configuration, such as encoder configuration parameters. This can be used to adjust stream output bitrate, GOP size, fps, force I-frame insertion, adjust audio volume, etc. If enabled, the config server is available at the /config URL on the default HTTP port.
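
      Once enabled, the config server could be contacted as sketched below. The host, port, and credentials are placeholders, and the specific update parameters accepted by /config are not documented in this section:

```sh
# HTTP Digest credentials over SSL/TLS; --insecure shown only because the
# bundled demo certificate is self-signed.
curl --digest -u myuser:mypassword --insecure "https://127.0.0.1:8443/config"
```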


     
    3.2.3 Arguments Controlling MPEG-DASH Streaming

    --dash[=]

      MPEG-DASH live content streaming server listening address and port string delimited by a colon. An MPEG-DASH capable client can access the stream at the URL http[s]://[username:password@][listener-address]:[port]/dash. This is the same as the URL of the mpd file http[s]://[username:password@][listener-address]:[port]/dash/segtemplate.mpd

      To enable a listener on port 8080 use --dash=8080 or --dash=http://0.0.0.0:8080

      To enable a listener on localhost use --dash=http://127.0.0.1:8080

      To enable an SSL/TLS listener use https://

      To enable HTTP Digest authentication credentials with the username myuser and password mypassword use --dash=https://myuser:mypassword@0.0.0.0:8443

      If the --dash parameter is present with no argument then the server listener will use the properties of the previous HTTP server listener present on the command line, or 0.0.0.0:8080 if it is the first HTTP server parameter.

    --dashmax=

      Maximum number of simultaneous HTTP MPEG-DASH connections. The default value is 4 if the MPEG-DASH server is enabled.

    --dashdir=

      Output directory path where DASH media stream segments are stored. The default directory is html/dash. It is recommended that a temporary in-memory file system be used to store these temporary segment files. For example, on Linux a temporary in-memory file system can be created with: sudo mount -t tmpfs -o size=102400K tmpfs /usr/local/ram/

    --dashfileprefix=

      Output segment file prefix of any temporary MPEG-DASH segmented media file. The default prefix is out. This is the file prefix which will be written to any MPD file and which will be part of any URL request by an MPEG-DASH client.

    --dashfragduration=

      The movie fragment duration as a float expressed in seconds of each .m4s MOOF box contained within the .m4s media segment. The default value is 0 seconds, which means that the .m4s segment will contain only one MOOF box. If set, this value should generally not exceed 1 second.

    --dashmaxduration=

      The maximum .m4s media segment duration as a float expressed in seconds. The default value is 10.0 seconds.

    --dashminduration=

      The minimum .m4s media segment duration as a float expressed in seconds. The default value is 5.0 seconds. If this value is set to 0, then the media segmentor will produce segments according to the value of dashmaxduration. If this value is greater than 0 then the media segmentor will attempt to chunk segments at key-frame boundaries only after the minimum media duration has elapsed. Each segment should begin on a video key-frame unless dashmaxduration is exceeded without encountering a key-frame.

    --dashmoof[=]

      Enable or disable .m4s segment creation. This value is enabled by default if the MPEG-DASH server listener is enabled. To disable .m4s segment creation use --dashmoof=0.

    --dashmpdtype=

      MPEG-DASH .mpd SegmentTemplate media naming convention. Media segments are referenced using either the index Number or Time. The default value is Time.

    --dashnodelete[=]

      Do not delete MPEG-DASH .m4s media segments.

    --dashsyncsegments[=]

      Sync new .m4s segment creation times for both audio and video to the same timestamp. This is the default behavior when using multiple encoder outputs. Use --dashsyncsegments=0 to use independent encoded stream creation times for multiple encoder outputs. This may cause problems for clients switching bitrates.

    --dashts[=]

      Enable or disable Transport Stream (.ts) segment creation. This value is disabled by default unless both the MPEG-DASH server listener and the HTTPLive segmentor are enabled. To enable Transport Stream segment creation use --dashts or --dashts=1.

    --dashtsduration=

      MPEG-DASH Transport Stream (.ts) segment chunk duration in seconds. This option is effectively the same as --httplivechunk which controls the segment chunk duration for HTTPLive streaming.

    --dashurlhost=

      Output URL hostname which will be written to an MPD file for each media hyperlink. Each hyperlink will be prepended by the specified URL hostname to allow serving of transport media files via an alternate host or virtual URL.

    --dashuseinit[=]

      Enable or disable use of a media stream initializer segment. This value is enabled by default resulting in the use of an initializer segment. To disable the media initializer segment use --dashuseinit=0.
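
      Tying the MPEG-DASH options together, a hypothetical sketch (the input source and directory paths are placeholders):

```sh
# Accept a live RTMP publisher and serve MPEG-DASH on port 8080, storing
# .m4s segments of between 5 and 10 seconds on an in-memory file system.
bin/vsx --capture=rtmp://username:password@0.0.0.0:1935 \
        --dash=http://0.0.0.0:8080 --dashdir=/usr/local/ram \
        --dashminduration=5 --dashmaxduration=10
```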


     
    3.2.4 Arguments Controlling DTLS / SSL Streaming

    --dtls[=]

      Use DTLS (Datagram Transport Layer Security) RTP output. This is the same as specifying the dtls output type with the --out parameter such as --out=dtls://127.0.0.1:2000,2002. The resulting transport description included in any SDP will be UDP/TLS/RTP/SAVP

      DTLS output mode uses X.509 certificates and public-key cryptography to establish a secure connection for each RTP and RTCP media channel. Each RTP and RTCP packet is transported over a secure channel. DTLS can add significant size overhead to each RTP/RTCP packet.

    --dtls-srtp[=]

      Use DTLS/SRTP (Datagram Transport Layer Security / Secure RTP) SRTP output. This is the same as specifying the dtlssrtp output type with the --out parameter such as --out=dtlssrtp://127.0.0.1:2000,2002. The resulting transport description included in any SDP will be RTP/SAVP

      DTLS/SRTP output mode uses X.509 certificates and public-key cryptography to establish a secure connection for each RTP and RTCP media channel. The DTLS connection is used to provide keying material to derive the SRTP key and salt according to RFC 5764. Each RTP and RTCP packet is then protected using SRTP and not DTLS.

    --dtlscert=

      Path of an X.509 PEM encoded certificate. If omitted the default DTLS certificate file etc/dtlscert.pem is used. This certificate is provided only for demo application use and should not be used in a production environment.

      A non-trusted certificate can be created with the following OpenSSL command:

      openssl req -new -x509 -key dtlskey.pem -out dtlscert.pem

    --dtlskey=

      Path of a PEM encoded private key file. If omitted, the default DTLS private key file etc/dtlskey.pem is used. This key is provided only for demo application use and should not be used in a production environment.

      A private key file without a passphrase can be created with the following OpenSSL command:

      openssl genrsa -out dtlskey.pem

    --dtlsserver

      Use DTLS server keying mode instead of DTLS client.
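
      The OpenSSL commands above can be combined with DTLS/SRTP output as in the following sketch (the destination address, ports, and certificate subject are placeholders):

```sh
# Create a non-trusted key and certificate, then stream SRTP keyed over DTLS
# to 127.0.0.1 ports 2000 (RTP) and 2002 (RTCP).
openssl genrsa -out dtlskey.pem
openssl req -new -x509 -key dtlskey.pem -out dtlscert.pem -subj "/CN=example"
bin/vsx --in=inputfile.mp4 --stream --out=dtlssrtp://127.0.0.1:2000,2002 \
        --dtlscert=dtlscert.pem --dtlskey=dtlskey.pem
```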


     
    3.2.5 Arguments Controlling FLV Streaming

    --flvdelay=

      Delay factor applied to audio and video stream output for FLV format encapsulation. Value should be expressed as a positive float in seconds. This setting will cause a client media player to pre-buffer the live content and allow it to be treated like a static file. Without this value set some media players may continue to show a buffering or loading icon for the video. The default value is 1.0 second.

    --flvlive[=]

      HTTP FLV encapsulated live content streaming server listening address and port string delimited by a colon. An HTTP client can access the live stream encapsulated as an FLV file at the URL http[s]://[username:password@][listener-address]:[port]/flvlive

      To enable a listener on port 8080 use --flvlive=8080 or --flvlive=http://0.0.0.0:8080

      To enable a listener on localhost use --flvlive=http://127.0.0.1:8080

      To enable an SSL/TLS listener use https://

      To enable HTTP Digest authentication credentials with the username myuser and password mypassword use --flvlive=https://myuser:mypassword@0.0.0.0:8443

      If the --flvlive parameter is present with no argument then the server listener will use the properties of the previous HTTP server listener present on the command line, or 0.0.0.0:8080 if it is the first HTTP server parameter.

    --flvlivemax=

      Maximum number of simultaneous HTTP FLV Server connections. The default value is 4 if the FLV server is enabled.

    --flvrecord=

      The output filename for recording output content to a file. The recorded media will be encapsulated using the FLV file format. An .flv suffix will be appended to the output file if none is given. The command will fail if an output codec is not supported by the container file format.

      When using multiple parallel output encodings, each individual output stream can be recorded by suffixing this argument with the output encoding index. For example, to record output stream 2 use --flvrecord2=fileoutput2.flv.
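
      A hypothetical sketch combining the FLV options (the SDP file, port, and file name are placeholders):

```sh
# Read a live capture described by an SDP file, serve it as FLV over HTTP
# with 1 second of client pre-buffering, and record a copy to disk.
bin/vsx --capture --in=input.sdp \
        --flvlive=http://0.0.0.0:8080 --flvdelay=1.0 --flvrecord=recording.flv
```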


     
    3.2.6 Arguments Controlling HTTPLive Streaming

    --httplive[=]

      HTTPLive Streaming Server listening address and port string delimited by a colon. An HTTPLive capable client, such as any Apple iOS device (iPhone, iPad) or Safari can access the stream at the URL http[s]://[username:password@][listener-address]:[port]/httplive

      To enable a listener on port 8080 use --httplive=8080 or --httplive=http://0.0.0.0:8080

      To enable a listener on localhost use --httplive=http://127.0.0.1:8080

      To enable an SSL/TLS listener use https://

      To enable HTTP Digest authentication credentials with the username myuser and password mypassword use --httplive=https://myuser:mypassword@0.0.0.0:8443

      If the --httplive parameter is present with no argument then the server listener will use the properties of the previous HTTP server listener present on the command line, or 0.0.0.0:8080 if it is the first HTTP server parameter.

    --httplivemax=

      Maximum number of simultaneous HTTPLive Server connections. The default value is 4 if the HTTPLive server is enabled.

    --httplivebw=

      Sets the HTTPLive published stream output bitrate(s) when using transcoding to produce multiple bitrate-specific streams. The bitrate is published in the master HTTPLive playlist as the BANDWIDTH field and is expressed in Kb/s. If this parameter is omitted, the default bitrate of the output .ts stream is the encoder configuration bitrate multiplied by a factor of 1.15 to account for the audio stream and any packetization overhead. Multiple stream output bitrates are specified as a CSV. For example, --httplivebw="300, 600" denotes 300Kb/s and 600Kb/s.

    --httplivechunk=

      HTTPLive segment chunk duration in seconds. The default value is 10.0 seconds. This value will influence the overall delay of media delivery from the capture source. Generally, a value between 5 and 15 seconds is acceptable.

    --httplivedir=

      Output directory path where MPEG-2 TS stream output chunks are stored. The default directory is html/httplive. It is recommended that a temporary in-memory file system be used to store these temporary chunk files. For example, on Linux a temporary in-memory file system can be created with:

      sudo mount -t tmpfs -o size=102400K tmpfs /usr/local/ram/

    --httpliveprefix=

      Output chunk file prefix of any temporary HTTPLive chunk media file. The default prefix is out. This is the file prefix which will be written to any .m3u playlist file and which will be part of any URL request by an HTTPLive client.

    --httpliveurlhost=

      Output URL media host prefix which gets written to any .m3u8 playlist file for each TS file. Each playlist item will be prepended by the specified URL host prefix to allow serving of transport media files via an alternate host or virtual URL. An example would be https://httplive.cdn.mycompany.com:8080/httplive.
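
      A hypothetical sketch combining the HTTPLive options above (the input file and mount point are placeholders):

```sh
# Serve HTTPLive with 5-second .ts chunks kept on a temporary in-memory
# file system.
sudo mount -t tmpfs -o size=102400K tmpfs /usr/local/ram
bin/vsx --in=inputfile.mp4 --stream --httplive=http://0.0.0.0:8080 \
        --httplivechunk=5 --httplivedir=/usr/local/ram
```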


     
    3.2.7 Arguments Controlling Auto-Format Adaptation

    --live[=]

      HTTP Auto-Format Adaptation Web Server listening address and port string delimited by a colon. The auto-format adaptation server listener provides a single URL to access the different available delivery formats according to the client User-Agent. The server will automatically return the appropriate media format content type according to the client User-Agent header. The User-Agent lookup is performed according to the device type definition found in etc/devices.conf. A client web browser can access the stream at the URL http[s]://[username:password@][listener-address]:[port]/live

      To enable a listener on port 8080 use --live=8080 or --live=http://0.0.0.0:8080

      To enable a listener on localhost use --live=http://127.0.0.1:8080

      To enable an SSL/TLS listener use https://

      To enable HTTP Digest authentication credentials with the username myuser and password mypassword use --live=https://myuser:mypassword@0.0.0.0:8443

      If the --live parameter is present with no argument then the server listener will use the properties of the previous HTTP server listener present on the command line, or 0.0.0.0:8080 if it is the first HTTP server parameter.

      Note: The same HTTP Server resource pool is used to service all HTTP based requests such as /tslive, /httplive, /flvlive, /dash, and /live. The maximum number of HTTP listeners is limited by the maxConn configuration file setting. The default value is 20 and maximum value is 100.

    --livemax=

      Maximum number of simultaneous Auto-Format Adaptation Web Server connections. The default value is 4 if the Adaptation Server is enabled.


     
    3.2.8 Arguments Controlling Matroska / WebM Streaming

    --mkvlive[=]

      HTTP Matroska / WebM encapsulated live content streaming server listening address and port string delimited by a colon. An HTML5 capable client can access the stream at the URL http[s]://[username:password@][listener-address]:[port]/mkvlive

      To enable a listener on port 8080 use --mkvlive=8080 or --mkvlive=http://0.0.0.0:8080

      To enable a listener on localhost use --mkvlive=http://127.0.0.1:8080

      To enable an SSL/TLS listener use https://

      To enable HTTP Digest authentication credentials with the username myuser and password mypassword use --mkvlive=https://myuser:mypassword@0.0.0.0:8443

      If the --mkvlive parameter is present with no argument then the server listener will use the properties of the previous HTTP server listener present on the command line, or 0.0.0.0:8080 if it is the first HTTP server parameter.

      Note: This option is the same as --webmlive.

    3. --mkvlivemax=
    4.  - 

      Maximum number of simultaneous MKV / WebM Server connections. The default value is 4 if the MKV / WebM server is enabled.

    5. --mkvdelay=
    6.  - 

      Delay factor applied to audio and video stream output for Matroska / WebM format encapsulation. The value should be expressed as a positive float in seconds. This setting will cause a client media player to pre-buffer the live content and allow it to be treated like a static file. Without this value set some media players may continue to show a buffering or loading icon for the video. The default value is 1.0 seconds.


    7. --mkvrecord[=]
    8.  - 

      The output filename for recording output content to a file. The recorded media will be encapsulated using the Matroska file format. A .mkv suffix will be appended to the output file if none is given. The command will fail if an output codec is not supported by the container file format.

      When using multiple parallel output encodings, each individual output stream can be recorded by suffixing this argument with the output encoding index. For eg., to record output stream 2 use --mkvrecord2=.


     
    3.2.9 Arguments Controlling RTP / SRTP Streaming

    1. --mtu=
    2.  - 

      Path MTU (Maximum Transmission Unit) of the payload data in bytes. This is in addition to any RTP, UDP, IP and other packet headers.

    3. --noseqhdrs=
    4.  - 

      Disable inclusion of video specific sequence headers within the codec specific transport bit-stream. For an H.264 output stream this option disables including the SPS / PPS preceding each key-frame.

    5. --out=
    6.  - 

      Output destination or file name. If this option is absent then the output string takes the value of the --stream= argument.

      Multiple UDP / RTP recipients can be specified by suffixing this argument with an incrementing output index. For eg., to output to two destinations use --out1= and --out2=. If performing transcoding to produce multiple parallel output encodings, each destination output index will correspond to the same encoder output index. Up to four output destinations can be given.

      Examples include:

      RTP output of video and audio to port 5004 with MPEG-2 TS media encapsulation.

      --out=rtp://127.0.0.1:5004 --rtptransport=m2t

      Direct UDP output of video and audio to port 5004 with MPEG-2 TS media encapsulation.

      --out=udp://127.0.0.1:5004 --rtptransport=m2t

      RTP output using the RTP/AVP profile where the video elementary stream is sent on port 5004 and the audio elementary stream is sent on port 5006. RTP codec specific native encapsulation instead of MPEG-2 TS is used.

      --out=rtp://127.0.0.1:5004,5006 --rtptransport=native

      RTP output using the RTP/AVP profile where the audio elementary stream is sent on port 5006 and the video stream is not output.

      --out=rtp://127.0.0.1:5006 --novideo --rtptransport=native

      Secure RTP output using the RTP/SAVP profile where the video elementary stream is sent on port 5004 and the audio elementary stream is sent on port 5006.

      --out=srtp://127.0.0.1:5004,5006

      For DTLS RTP output using the UDP/TLS/RTP/SAVP profile where the video elementary stream is sent on port 5004 and the audio elementary stream is sent on port 5006 using secure media channels.

      --out=dtls://127.0.0.1:5004,5006

      RTSP ANNOUNCE to a remote RTSP server

      --out=rtsp://username:password@127.0.0.1/stream-name

      RTMP PUBLISH to a remote RTMP server

      --out=rtmp://username:password@127.0.0.1/app-name/stream-name

      Write the output to local storage using MPEG-2 TS encapsulation.

      --out=/path/outputfile.m2t

      Note: Alternatively the --flvrecord or --mkvrecord parameters can be used to save the stream output into an FLV or MKV container file.
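      When sending to several recipients, the suffixed arguments can be generated mechanically. A hypothetical Python helper (the out_args name is illustrative, not part of OpenVSX; the four-destination limit comes from the description above):

```python
def out_args(destinations):
    """Build suffixed --out1=, --out2=, ... arguments for up to four
    UDP / RTP recipients (hypothetical helper; vsx does the parsing)."""
    if len(destinations) > 4:
        raise ValueError("up to four output destinations can be given")
    return ["--out%d=%s" % (i + 1, dest)
            for i, dest in enumerate(destinations)]

print(out_args(["rtp://10.0.0.1:5004,5006", "rtp://10.0.0.2:5004,5006"]))
```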

    7. --rtcpavsync=
    8.  - 

      Audio / Video offset in seconds given as float which is advertised in RTCP sender reports. RTP / RTCP receivers may choose to ignore this value and may instead only respect the timestamp of the first packet received.

    9. --rtcpports=
    10.  - 

      RTCP output port number(s). The default behavior is to use an RTCP UDP port number one greater than the RTP port. This parameter can be used to setup non-default RTCP port number(s). The RTCP ports specified with this parameter apply only to direct RTP output and not to RTSP initiated RTP streams. The following example shows video and audio output to two destinations, the first using default RTCP ports, and the second multiplexing the RTCP channels on the RTP channels.

      --out=rtp://10.10.10.10:2000,2002 --rtcpports=2001,2003 --out2=rtp://10.10.10.10:3000,3002 --rtcpports2=3000,3002
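      The default port arithmetic can be illustrated with a short sketch:

```python
def default_rtcp_ports(rtp_ports):
    """Default RTCP port is one greater than each RTP port; pass the
    same ports instead to multiplex RTCP on the RTP channels."""
    return [p + 1 for p in rtp_ports]

print(default_rtcp_ports([2000, 2002]))  # [2001, 2003]
```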

    11. --rtcp-mux
    12.  - 

      Multiplex both RTCP and RTP channels on the same port. This is a short cut for the following syntax:

      --out=rtp://10.10.10.10:2000,2002 --rtcpports=2000,2002

    13. --rtp-mux
    14.  - 

      Multiplex both audio and video RTP channels on the same port. This is a short cut for the following syntax:

      --out=rtp://10.10.10.10:2000,2000

    15. --rtcpsr=
    16.  - 

      RTCP Sender Report transmission interval in seconds. The default interval is 5 seconds. Set to 0 to disable RTCP Sender Reports. This value overrides the configuration file parameter RTCPSenderReportInterval.

    17. --rtpclock=
    18.  - 

      The RTP output clock rate in Hz. To set the video stream clock to 24KHz and the audio to 48KHz use --rtpclock="v=24000,a=48000" or --rtpclock="24000,48000". This option can be used to override the default output clock obtained from the input media or SDP file.

    19. --rtpmaxptime=
    20.  - 

      Sets the number of audio frames aggregated into a single RTP output packet payload for audio codecs which permit compounding multiple audio frames. The value is expressed as a duration in ms. For an audio codec producing an audio frame every 20ms, --rtpmaxptime=40 can be used to include two audio frames in a single RTP packet payload.
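      The frames-per-packet arithmetic implied above can be sketched as follows (the 20 ms frame duration is codec specific and assumed here):

```python
def frames_per_packet(rtpmaxptime_ms, frame_duration_ms=20):
    """Number of audio frames aggregated into one RTP payload, given
    a maximum packet time in ms and a codec frame duration in ms."""
    return rtpmaxptime_ms // frame_duration_ms

print(frames_per_packet(40))  # 2 frames per RTP packet
```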

    21. --rtppayloadtype=
    22.  - 

      The RTP output payload type(s). To set the video payload type to 97, and the audio to 96 use --rtppayloadtype="v=97,a=96" or --rtppayloadtype="97,96". The default RTP payload type values are codec specific.

    23. --rtppktzmode=
    24.  - 

      The RTP output codec specific packetization mode. This value can be used to control a video packetization mode such as for H.264 (RFC 3984). An H.264 NAL packetization mode can take the values 0, 1, 2, with the default set to 1. When using packetization mode 0 ensure that the encoding parameter videoSliceSizeMax is set to less than the payload MTU.

    25. --rtpretransmit[=]
    26.  - 

      Enable RTP packet retransmissions upon receiving RTCP NACK (negative acknowledgement) for an RTP sequence number that is reported as missing by the receiver. RTP retransmission is only available for a video stream and not the audio stream. Enabling this setting can significantly improve video quality under lossy network conditions.

    27. --rtpssrc=
    28.  - 

      To set the video SSRC to 0x01020304 and the audio to 0x01020305 use --rtpssrc="v=0x01020304,a=0x01020305" or --rtpssrc="0x01020304,0x01020305". Instead of the base 16 notation shown in this example, the SSRC can be expressed in standard base 10 notation. If this option is not provided, the default RTP SSRC values are generated randomly.
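      The two notations name the same 32-bit value, as a quick check shows:

```python
# The same video SSRC expressed in base 16 and base 10.
video_ssrc = int("0x01020304", 16)
print(video_ssrc)             # 16909060 in base 10
print("0x%08x" % video_ssrc)  # 0x01020304
```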

    29. --rtptransport=
    30.  - 

      Output transport protocol. The following values apply:

      m2t - Use MPEG-2 Transport Stream encapsulation resulting in both audio and video elementary streams being multiplexed in one MPEG-2 Transport stream.

      native - Use Video / Audio codec specific native RTP encapsulation type. This is the default setting.

    31. --rtpusebindport[=]
    32.  - 

      Enable use of the same UDP/RTP local capture port(s) as the source UDP port for outgoing RTP packets. This setting is disabled by default and enabled for any video conference endpoints added through the PIP interface.

    33. --sdp=
    34.  - 

      SDP output file path of the published output session description.

    35. --srtpkey=
    36.  - 

      One or more base64 encoded SRTP key(s) used to provide confidentiality and authentication of the outgoing media stream(s). If providing more than one key, the base64 strings should be separated by a comma ",". The key before the comma is used for the video media and the key after the comma for the audio session. For eg.,

      --srtpkey="mLBdSdX031vbI9rLOV5foVs5rbdobPl8/S2x2Xp0,x6N4T8dNTVfjyh1TU6XHUZ5SgbctnSTui8GwQyIC".

      This argument is only applicable if the output streaming method is srtp://. If this argument is omitted, a session key(s) will be created using the local pseudo random number generator.

      When using multiple RTP recipients, multiple RTP recipient specific keys can be specified by suffixing this argument with an incrementing output index. For eg., to output to two destinations use --srtpkey1= and --srtpkey2=.

      If using DTLS-SRTP (dtls-srtp) then the SRTP keying material will be automatically and securely established between the client and the server for each media channel and does not need to be shared prior to opening the media channel.
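      Suitable key strings can be pre-generated offline. A sketch, assuming the 30-byte (16-byte key plus 14-byte salt) master key size used by AES_CM_128_HMAC_SHA1_80, which matches the 40-character example keys above:

```python
import base64
import os

def make_srtp_key():
    """Generate a random 30-byte SRTP master key plus salt and base64
    encode it, yielding a 40-character string for --srtpkey=."""
    return base64.b64encode(os.urandom(30)).decode("ascii")

video_key, audio_key = make_srtp_key(), make_srtp_key()
print('--srtpkey="%s,%s"' % (video_key, audio_key))
```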


     
    3.2.10 Arguments Controlling STUN

    1. --stunrequest[=]
    2.  - 

      Issue STUN binding requests on any RTP / RTCP outbound socket. An optional parameter can follow specifying the STUN policy. The following values apply:

      0 - STUN binding requests disabled. This is the default STUN policy if this argument is missing.

      1 - Always send STUN binding requests. This is the default STUN policy if an optional parameter is omitted. For eg., --stunrequest.

      2 - Send STUN binding requests only if first receiving a STUN binding request on the RTP / RTCP socket. For eg., --stunrequest=2.

    3. --stunrequestuser=
    4.  - 

      Specify the STUN USERNAME attribute value used in STUN binding requests. The STUN username should be a base64 encoded string. The presence of this argument without the --stunrequest argument present will implicitly specify a STUN policy equivalent of --stunrequest=2.

      If providing unique usernames for the video and audio channels, the base64 strings should be separated by a comma ",". The value before the comma is used for the video media and the value after the comma for the audio session. For eg.,

      --stunrequestuser="DfLbHAOOqteRaikppk9FyRFr, eWMFv3WdKONzJprjpbrzRjfY".

    5. --stunrequestpass=
    6.  - 

      Specify the STUN password used for computing a MESSAGE-INTEGRITY HMAC STUN attribute in STUN binding requests. The STUN password should be a base64 encoded string. The presence of this argument without the --stunrequest argument present will implicitly specify a STUN policy equivalent of --stunrequest=2.

      If providing unique passwords for the video and audio channels, the base64 strings should be separated by a comma ",". The value before the comma is used for the video media and the value after the comma for the audio session. For eg.,

      --stunrequestpass="DfLbHAOOqteRaikppk9FyRFr, eWMFv3WdKONzJprjpbrzRjfY".

    7. --stunrequestrealm=
    8.  - 

      Specify the STUN realm used for computing a MESSAGE-INTEGRITY HMAC STUN attribute in STUN binding requests. If the STUN realm, password, and username are all present then STUN long-term credentials are used according to RFC 5389, otherwise this argument is ignored. The presence of this argument without the --stunrequest argument present will implicitly specify a STUN policy equivalent of --stunrequest=2.


     
    3.2.11 Arguments Controlling Picture In Picture

    1. --pip=
    2.  - 

      Picture In Picture media source. Set the input path of the media to be used as the PIP. Any input media format used with the --in parameter can also be used as a PIP input media source, such as a capture from a live source. A .bmp, or .png with Alpha mask can be used for a still image PIP source.

    3. --pipalphamax=
    4.  - 

      (PIP) Picture In Picture maximum alpha masking value. Range is from 0 - 255, with 255 being the default, resulting in no transparency. This value caps the maximum alpha transparency of any pixel if the PIP image contains an alpha mask. A lower value results in greater transparency of the PIP.

    5. --pipalphamin=
    6.  - 

      (PIP) Picture In Picture minimum alpha masking value. Range is from 0 - 255, with 0 being the default. This value caps the minimum alpha transparency of any pixel if the PIP image contains an alpha mask. A greater value results in less transparency of the PIP.
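      The interaction of the two bounds can be sketched as a simple clamp on each pixel's alpha value (assumed behavior for illustration, not the actual OpenVSX implementation):

```python
def clamp_alpha(pixel_alpha, pipalphamin=0, pipalphamax=255):
    """Bound a PIP pixel's alpha (0 = fully transparent,
    255 = fully opaque) by the min / max masking values."""
    return max(pipalphamin, min(pixel_alpha, pipalphamax))

print(clamp_alpha(200, pipalphamax=128))  # capped down to 128
print(clamp_alpha(10, pipalphamin=64))    # raised up to 64
```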

    7. --pipbefore=
    8.  - 

      If this argument is present, the PIP will be overlaid onto the main picture prior to any scaling directive specified in the --xcode argument. By default, PIP placement is done after any scaling of the main picture.

    9. --pipconf=
    10.  - 

      File path of a PIP configuration file defining PIP characteristics. The PIP configuration file can contain directives for picture cropping, padding, PIP motion, and PIP transitioning. For usage examples refer to the file etc/pip.conf

    11. --piphttp=
    12.  - 

      HTTP PIP control interface server listening address and port string delimited by a colon. The PIP control server can be used to dynamically add or remove a PIP. The PIP control interface server is available at the following URL: http[s]://[username:password@][listener-address]:[port]/pip

      To enable a listener on port 8080 use --piphttp=8080 or --piphttp=http://0.0.0.0:8080

      To enable a listener on localhost use --piphttp=http://127.0.0.1:8080

      To enable an SSL/TLS listener use https://

      To enable HTTP Digest authentication credentials with the username myuser and password mypassword use --piphttp=https://myuser:mypassword@0.0.0.0:8443

      If the --piphttp parameter is present with no argument then the server listener will use the properties of the previous HTTP server listener present on the command line, or 0.0.0.0:8080 if it is the first HTTP server parameter.

    13. --piphttpmax=
    14.  - 

      Maximum number of simultaneous HTTP PIP control connections. The default value is 0 (PIP Control interface disabled).

    15. --pipxcode=
    16.  - 

      PIP formatting configuration passed as a quoted comma separated list of key value pairs. These options take the same format as the --xcode parameters documented under Transcoder Configuration. Only options specific to PIP output dimensions, scaling type, cropping, padding, and frame rate (applicable for non static PIP formats) are processed.

    17. --pipx=
    18.  - 

      The horizontal (x axis) placement of the left edge of the PIP relative from the left edge of the main picture.

    19. --pipxright=
    20.  - 

      The horizontal (x axis) placement of the right edge of the PIP relative from the right edge of the main picture. This can be used instead of --pipx to place a PIP with regard to the right edge.

    21. --pipy=
    22.  - 

      The vertical (y) placement of the top edge of the PIP relative from the top edge of the main picture.

    23. --pipybottom=
    24.  - 

      The vertical (y axis) placement of the bottom edge of the PIP relative from the bottom edge of the main picture. This can be used instead of --pipy to place a PIP with regard to the bottom edge.

    25. --pipzorder=
    26.  - 

      The PIP z axis placement order as a signed integer. The default z-order index is 0. A higher z-order will result in the placement of the PIP covering any other PIPs with a lower z-order index.


     
    3.2.12 Arguments Controlling RTMP Streaming

    1. --rtmp[=]
    2.  - 

      RTMP server listening address and port string delimited by a colon. An RTMP client can access the stream at the URL rtmp[s]://[listener-address]:[port]

      To enable a listener on port 1935 use --rtmp=1935 or --rtmp=0.0.0.0:1935

      To enable a listener on localhost use --rtmp=127.0.0.1:1935

      To enable an SSL/TLS listener use rtmps://

      If the --rtmp parameter is present with no argument then the server will bind to 0.0.0.0:1935.

    3. --rtmpmax=
    4.  - 

      Maximum number of simultaneous RTMP Server connections. The default value is 4 if the RTMP server is enabled.

    5. --stream=rtmp://
    6.    

      To connect to a remote RTMP server and PUBLISH an RTMP stream use the stream output syntax.

      --stream=rtmp://username:password@x.x.x.x/app-name/stream-name

      To enable an SSL/TLS stream output use rtmps://


     
    3.2.13 Arguments Controlling RTSP Streaming

    1. --rtsp[=]
    2.  - 

      RTSP server listening address and port string delimited by a colon. An RTSP client can access the stream at the URL rtsp[s]://[username:password@][listener-address]:[port]

      To enable a listener on port 1554 use --rtsp=1554 or --rtsp=0.0.0.0:1554

      To enable a listener on localhost use --rtsp=127.0.0.1:1554

      To enable an SSL/TLS listener use rtsps://

      To enable RTSP Digest authentication credentials with the username myuser and password mypassword use --rtsp=rtsp://myuser:mypassword@0.0.0.0:1554

      If the --rtsp parameter is present with no argument then the server will bind to 0.0.0.0:1554.

      Note: The default listener port 1554 is different from the standard RTSP port 554, which requires root user access privileges.

    3. --rtspmax=
    4.  - 

      Maximum number of simultaneous RTSP Server connections. The default value is 4 if the RTSP server is enabled.

    5. --rtsp-interleaved[=]
    6.  - 

      Force RTSP TCP interleaved mode for all output media streams. Use --rtsp-interleaved=0 to force RTSP UDP/RTP output.

    7. --rtsp-srtp[=]
    8.  - 

      Force SRTP protection for all UDP RTSP media streams. This setting is on by default when using an SSL/TLS RTSP control channel connection. Use --rtsp-srtp=0 to force RTSP UDP/RTP with an SSL/TLS RTSP control channel connection.

    9. --stream=rtsp://
    10.    

      To connect to a remote RTSP server and ANNOUNCE an RTSP stream use the stream output syntax.

      --stream=rtsp://username:password@x.x.x.x:554/stream-name

      To enable an SSL/TLS RTSP output connection use rtsps://


     
    3.2.14 Arguments Controlling MPEG-2 TS Streaming

    1. --tslive[=]
    2.  - 

      HTTP MPEG-2 Transport Stream encapsulated live content streaming server listening address and port string delimited by a colon. An HTML5 capable client can access the stream at the URL http[s]://[username:password@][listener-address]:[port]/tslive

      To enable a listener on port 8080 use --tslive=8080 or --tslive=http://0.0.0.0:8080

      To enable a listener on localhost use --tslive=http://127.0.0.1:8080

      To enable an SSL/TLS listener use https://

      To enable HTTP Digest authentication credentials with the username myuser and password mypassword use --tslive=https://myuser:mypassword@0.0.0.0:8443

      If the --tslive parameter is present with no argument then the server listener will use the properties of the previous HTTP server listener present on the command line, or 0.0.0.0:8080 if it is the first HTTP server parameter.

    3. --tslivemax=
    4.  - 

      Maximum number of simultaneous MPEG-2 Transport Stream HTTP connections. The default value is 4 if the MPEG-2 Transport Stream HTTP server is enabled.


 
3.3 Transcoding

 

    3.3.1 Arguments Controlling Transcoding

    1. --xcode=
    2.  - 

      The path of the transcoder configuration file or a quoted comma separated list of transcoder configuration parameters expressed as key value pairs.

      The transcoder configuration file defines any transcoder parameters using key value pair format. A reference configuration file etc/xcode.conf is supplied with the OpenVSX installation bundle. Each configuration key can be specified in either long or abbreviated notation. Abbreviated notation is preferred when passing a transcoder parameter list on the command line.

      Video Output Parameters


      1. videoCodec (or vc)
      2.  - 

        Video Codec Name

        • h264 - H.264
           
        • h263 - H.263
           
        • h263+ - H.263+
           
        • mpg4+ - MPEG-4 Part 2
           
        • vp8 - VP8
           
        • rgba - RGBA8888 (32 bits per pixel with 8 bit alpha mask)
           
        • bgra - BGRA8888 (32 bits per pixel with 8 bit alpha mask)
           
        • rgb565 - RGB565 (16 bits per pixel)
           
        • bgr565 - BGR565 (16 bits per pixel)
           
        • rgb - RGB888 (24 bits per pixel)
           
        • bgr - BGR888 (24 bits per pixel)
           
        • nv12 - NV12 (12 bits per pixel)
           
        • nv21 - NV21 (YUV420SP) (12 bits per pixel)
           
        • passthru or same - Pass-thru transcoding. Only available if multiple output encodings are enabled.
           
        • yuv420sp - YUV420SP (NV21) (12 bits per pixel)
           
        • yuv420p - YUV420P (12 bits per pixel)
           
        • yuva420p - YUVA420P (20 bits per pixel) 8 bit alpha mask
           
      3. cropBottom (or cropb)
      4.  - 

        Number of pixels to crop at bottom edge of picture. Default 0.

      5. cropLeft (or cropl)
      6.  - 

        Number of pixels to crop at left edge of picture. Default 0.

      7. cropRight (or cropr)
      8.  - 

        Number of pixels to crop at right edge of picture. Default 0.

      9. cropTop (or cropt)
      10.  - 

        Number of pixels to crop at top edge of picture. Default 0.

      11. padAspectRatio (or padaspect)
      12.  - 

        Set to 1 to preserve the original aspect ratio when adding padding pixels. Default 0.

      13. padBottom (or padb)
      14.  - 

        Number of pixels to add as border at bottom edge of picture. The frame output resolution is preserved but the original picture is scaled down to accommodate the padding. Default 0.

      15. padLeft (or padl)
      16.  - 

        Number of pixels to add as border at left edge of picture. The frame output resolution is preserved but the original picture is scaled down to accommodate the padding. Default 0.

      17. padRight (or padr)
      18.  - 

        Number of pixels to add as border at right edge of picture. The frame output resolution is preserved but the original picture is scaled down to accommodate the padding. Default 0.

      19. padTop (or padt)
      20.  - 

        Number of pixels to add as border at top edge of picture. The frame output resolution is preserved but the original picture is scaled down to accommodate the padding. Default 0.

      21. padColorRGB (or padrgb)
      22.  - 

        RGB Color value of padding pixels. The default value is 0x000000 (black). For eg. 0xffffff is the RGB value for white.

      23. videoBitrate (or vb)
      24.  - 

        Video output bitrate in Kilobits per sec. This value is relevant for codecs that use bit rate based rate control (btrt), such as H.264 when it is not using cqp or crf based rate control.

        The videoBitrate parameter should always be specified if HTTPLive output is enabled for multiple parallel encodings to provide adaptive bitrate support. This applies even if an encoder rate control method other than btrt is used. The value of videoBitrate is used to describe each bitrate specific output stream in the master HTTPLive playlist to enable bitrate stream switching.

      25. videoBitrateTolerance (or vbt)
      26.  - 

        Video output bandwidth variance tolerance in Kilobits per sec. Applicable only if videoBitrate is set. The default value is 0 which uses the encoder specific output bitrate tolerance.

      27. vbvBufferSize (or vbvbuf)
      28.  - 

        Video output video buffer verifier buffer size in Kilobits. This value is relevant for codecs that support it.

      29. vbvMaxRate (or vbvmax)
      30.  - 

        Video output video buffer verifier maximum bitrate in Kilobits per sec. This value is relevant for codecs that support it.

      31. vbvMinRate (or vbvmin)
      32.  - 

        Video output video buffer verifier minimum bitrate in Kilobits per sec. This value is relevant for codecs that support it.

      33. vbvAggressivity (or vbvaggr)
      34.  - 

        Video output video buffer aggressivity value (0.0 - 1.0) for codecs that support this configuration. This setting maps to VP8 undershoot percentage.

      35. videoInputConstantFps (or vcfrin)
      36.  - 

        Controls input frame rate type. The following values apply:

        0 - (Default) Input frame timestamp is obtained from input transport mechanism (MPEG-2 TS PTS / DTS, RTP Timestamp).

        1 - Input frame rate is always constant according to fps automatically obtained from video codec specific sequence header information, or fps command line hint. The effective input frame timestamp may be automatically adjusted if it drifts beyond a threshold of the actual input frame time.

      37. videoOutputConstantFps (or vcfrout)
      38.  - 

        Controls output frame rate type. The following values apply:

        0 - (Default) Output frame rate timestamp will be the same as the corresponding input frame time stamp. The output fps (videoFrameRate) will not be exceeded even if the input video frame rate is higher than the configured output.

        1 - Output frame rate is always constant according to (videoFrameRate). The output frame timestamp may be automatically internally adjusted if it drifts beyond the threshold (videoOutputFpsDriftTolerance) of the input frame time.

        -1 - Output frame rate timestamp is always the same as the corresponding input frame time stamp, regardless of the output fps (videoFrameRate). If this value is omitted, the configured output fps (videoFrameRate) will not be exceeded even if the input video frame rate is higher than the configured output.

      39. videoOutputFpsDriftTolerance (or vcfrtol)
      40.  - 

        If videoOutputConstantFps is enabled, this value controls the constant frame rate timestamp tolerance in milliseconds. The tolerance is the limit on the deviation of the constant frame rate time stamp with the actual input frame timestamp. If the tolerance is exceeded, the output frame timestamp is reset to the input time stamp. The default value is 400ms.
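        The tolerance check described above can be sketched as follows (assumed behavior for illustration; the internal adjustment logic may differ):

```python
def constant_fps_timestamp(out_ts_ms, in_ts_ms, vcfrtol_ms=400):
    """Keep the constant-rate output timestamp unless it drifts beyond
    the tolerance from the actual input timestamp, in which case the
    output timestamp is reset to the input time stamp."""
    if abs(out_ts_ms - in_ts_ms) > vcfrtol_ms:
        return in_ts_ms  # drifted too far: reset to the input time
    return out_ts_ms

print(constant_fps_timestamp(1000, 1200))  # within 400 ms, unchanged
print(constant_fps_timestamp(1000, 1500))  # drifted, reset to input
```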

      41. videoEncoderSpeed (or vf)
      42.  - 

        Encoder speed / quality trade-off presets. This is mutually exclusive with the setting videoEncoderQuality. The following values apply:

        0 - slowest (highest quality)

        1 - slow (high quality)

        2 - medium (medium quality)

        3 - fast (low quality)

        4 - fastest (lowest quality)

      43. videoEncoderQuality (or vqual)
      44.  - 

        Encoder quality / speed trade-off presets. This is mutually exclusive with the setting videoEncoderSpeed. The following values apply:

        0 - lowest (highest speed)

        1 - low (fast speed)

        2 - medium (medium speed)

        3 - high (slow speed)

        4 - highest (lowest speed)

      45. videoFps (or vfr)
      46.  - 

        Output frame rate expressed as a floating point value. For eg., videoFps=29.97. The output frame rate value is passed to the encoder to determine the bitrate when using non-qp based encoding. The actual output frame rate may deviate from the supplied value depending on the encoder specific configuration, such as the settings of videoInputConstantFps and videoUpsampling.

      47. videoForceIDR (or vidr)
      48.  - 

        If present, sends an IDR request to the underlying encoder. This flag should not be present in the initial transcoding configuration but can be used to explicitly request an IDR during an encoder re-configuration update.

      49. videoGOPMs (or vgop)
      50.  - 

        Encoder max Group Of Pictures (GOP) in milliseconds. The actual frame count passed to the encoder is obtained based on the specified frame rate (videoFps). The value "infinite" can be used to request the encoder to only produce a single IDR. The videoForceIDR parameter can subsequently be used to request an IDR when performing an encoder configuration update.
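        The millisecond-to-frame conversion can be approximated as follows (the exact rounding used internally is an assumption):

```python
def gop_frames(videoGOPMs, videoFps):
    """Approximate frame count handed to the encoder for a GOP given
    in milliseconds at the configured output frame rate."""
    return round(videoGOPMs / 1000.0 * videoFps)

print(gop_frames(2000, 29.97))  # ~60 frames for a 2 second GOP
```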

      51. videoMinGOPMs (or vmingop)
      52.  - 

        Encoder min Group Of Pictures (GOP) in milliseconds. The actual frame count passed to the encoder is obtained based on the specified frame rate (videoFps).

      53. videoLookaheadFramesMin1 (or vlh)
      54.  - 

        Encoder specific look-ahead configuration. If > 0, the value passed to encoder is (n - 1). The following values apply:

        0 - (Default) Use automatic encoder specific configuration.

        1 - Minimal encoder lag configuration (0). This value should be used for real-time encoding. Note that the videoThreads (encoder thread control count) and videoDecoderThreads (decoder thread control count) may also influence the actual transcoder frame lag.

        n - (n - 1) lag encoder configuration.

      55. videoProfile (or vp)
      56.  - 

        Encoder codec specific profile.

        H.264 Profiles:

        0 - (Default) H.264 High

        66 - H.264 Baseline

        77 - H.264 Main

        100 - (Default) H.264 High

        MPEG-4 Part 2 Profiles:

        This 8 bit value is a combination of the profile and level. The profile is the 4 most significant bits, the level is the 4 least significant bits.

        0 - (Default) Simple Profile

        240 - Advanced Simple Profile (Profile value 15 (0x0f) << 4)

        n - (Profile:15 (0x0f) << 4) | (Level:1 (0x01)). For eg. Profile 15, Level 1 (15 << 4 | 1) = 241

        VP8 Profiles. In general, the higher the profile number the faster the encoding speed and the lower the encoder quality. Valid values are 0 - 3. (Default value is 0.)
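        The bit arithmetic for the combined MPEG-4 Part 2 profile and level value can be checked with a short sketch:

```python
def mpeg4_profile_value(profile, level):
    """Combine the MPEG-4 Part 2 profile (4 most significant bits)
    and level (4 least significant bits) into the 8-bit value."""
    return (profile << 4) | level

print(mpeg4_profile_value(15, 0))  # 240: Advanced Simple Profile
print(mpeg4_profile_value(15, 1))  # 241: Profile 15, Level 1
```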

      57. videoQuantizer (or vq)
      58.  - 

        Video target quantizer for codecs which support it. videoQuantizer should be used in place of videoBitrate (target bitrate) when using cqp or crf based rate control. For H.264, a valid quantizer range is from 10-51, with lower values giving higher quality. The quantizer value is for P-frames.

      59. videoQuantizerBRatio (or vqbratio)
      60.  - 

        Video target quantizer output ratio for B frames relative to P frames. A value > 1 produces an average B frame with higher quantizer (lower quality) than an average P frame. H.264 default is 1.25.

      61. videoQuantizerIRatio (or vqiratio)
      62.  - 

        Video target quantizer output ratio for I frames relative to P frames. A value < 1 produces an average I frame with a lower quantizer (higher quality) than an average P frame. H.264 default is 0.71

      63. videoQuantizerMax (or vqmax)
      64.  - 

        The maximum output quantizer of generated P frames. This can be specified to have a supporting encoder produce output frames within a permissible quality threshold. Quantizer Range is codec specific. For H.264: 10 - 51. For MPEG-4: 2 - 31. For VP8: 4 - 63.

      65. videoQuantizerMin (or vqmin)
      66.  - 

        The minimum output quantizer of generated P frames. This can be specified to have a supporting encoder produce output frames within a permissible quality threshold. Quantizer Range is codec specific. For H.264: 10 - 51. For MPEG-4: 2 - 31. For VP8: 4 - 63.

      67. videoQuantizerDiff (or vqdiff)
      68.  - 

        The maximum difference in quantization of consecutive output frames.
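        Taken together, vqmin, vqmax, and vqdiff bound the quantizer chosen for each frame. An illustrative sketch (H.264 range assumed; not the encoder's actual algorithm):

```python
def next_quantizer(requested_qp, prev_qp, vqmin=10, vqmax=51, vqdiff=4):
    """Clamp the requested P-frame quantizer to [vqmin, vqmax], then
    limit its change from the previous frame's quantizer to vqdiff."""
    qp = max(vqmin, min(requested_qp, vqmax))              # range clamp
    qp = max(prev_qp - vqdiff, min(qp, prev_qp + vqdiff))  # step limit
    return qp

print(next_quantizer(60, prev_qp=30))  # clamped to 51, then stepped to 34
```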

      69. videoRateControl (or vrc)
      70.  - 

        Video output rate control type. The following values apply:

        btrt - Bit rate based rate control. This is the default rate control type. A valid videoBitrate bit rate should also be given if using bitrate based rate control. If using a low videoBitrateTolerance bit rate tolerance value, this method is usually able to produce video output with the least standard deviation of output bandwidth, which is best suitable for streaming on congested network links.

        cqp - Rate control utilizing a constant quantizer for P-frames. A valid target quantizer videoQuantizer should also be given. This method strives to produce video output where each P frame is encoded to a similar quality, leading to very controlled video output quality at the expense of some wasted bandwidth. Video output bandwidth can have a very high standard deviation depending on the scene complexity of the input video.

        crf - Rate control utilizing a constant Rate Factor. A valid target quantizer videoQuantizer should also be given. This method is very similar to cqp but strives to be more intelligent in the allocation of available bandwidth between high complexity and low complexity frames. For network streaming, crf based rate control is generally preferred over cqp in terms of standard deviation of video output bitrate.
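        As a hedged sketch, the rate control modes above map onto the --xcode parameter string as follows (the input path, destination address, and quantizer value are illustrative placeholders only):

```shell
# btrt (default): give a target bitrate in Kb/s.
./bin/vsx --in=path/inputfile.mp4 --stream=rtp://10.10.10.10:5004 \
  --xcode="videoCodec=h264,videoRateControl=btrt,videoBitrate=300"

# crf: give a target quantizer instead of a bitrate.
./bin/vsx --in=path/inputfile.mp4 --stream=rtp://10.10.10.10:5004 \
  --xcode="videoCodec=h264,videoRateControl=crf,videoQuantizer=28"
```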

      71. videoScalerType (or vsc)
      72.  - 

        Resolution scaler type. The default value is 3. Scalers are ordered from fastest to slowest, with 1 being the fastest. The libswscale scalers are listed below:

        1 - SWS_FAST_BILINEAR

        2 - SWS_BILINEAR

        3 - SWS_BICUBIC

        4 - SWS_GAUSS

        5 - SWS_SINC

      73. videoISceneAggressivity (or vsi)
      74.  - 

        Frame scene cut insertion aggressivity. This value is encoder specific. For the x264 encoder, the value is from 1 to 100, where 1 is least aggressive and 100 is most aggressive. The default value is 40.

      75. videoSliceSizeMax (or vslmax)
      76.  - 

        Encoder maximum size of each frame slice in bytes. This value is encoder and codec specific. A value less than the MTU should result in multiple slices per frame.

      77. videoThreads (or vth)
      78.  - 

        Video encoder specific number of threads.

        0 - (Default) Encoder will choose a default value. Values greater than 1 will usually incur additional encoder frame output lag. This value should be set to 1 for real-time minimal encoder latency.

      79. videoDecoderThreads (or vthd)
      80.  - 

        Video decoder specific number of threads.

        0 - (Default) Decoder will choose a default value. Values greater than 1 could potentially incur additional decoder frame output lag. This value should be set to 1 for real-time minimal decoder latency.
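        For minimal end-to-end latency, both thread counts above can be pinned to 1 in the same --xcode string. A hedged sketch (the input SDP name and destination address are placeholders):

```shell
# Single-threaded encode and decode to avoid pipeline frame lag.
./bin/vsx --capture=live.sdp --stream=rtp://10.10.10.10:5004 \
  --xcode="videoCodec=h264,videoBitrate=300,videoThreads=1,videoDecoderThreads=1"
```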

      81. videoUpsampling (or vup)
      82.  - 

        Controls video up-sampling / frame duplication logic. The following values apply:

        0 - (Default) disables frame up-sampling / frame duplication.

        1 - Enable frame up-sampling in-order to adhere to specified output frame rate. Up-sampling is only enabled if videoOutputConstantFps is set to 1.

      83. videoWidth (or vx)
      84.  - 

        Horizontal output resolution in pixels.

        0 - Default. If videoWidth is set and videoHeight is not set, the input picture aspect ratio will be preserved, and the horizontal resolution will be set to videoWidth. If both videoWidth and videoHeight are not set, the input picture dimensions will be preserved.

      85. videoHeight (or vy)
      86.  - 

        Vertical output resolution in pixels.

        0 - Default. If videoHeight is set and videoWidth is not set, the input picture aspect ratio will be preserved, and the vertical resolution will be set to videoHeight.

      87. mbtree (or mb-tree)
      88.  - 

        Controls macroblock tree rate-control. The following values apply:

        1 - (Default) Enables macroblock tree rate-control if supported by the underlying encoder. This may lead to significant speed loss when encoding an I frame depending on the frame lookahead / encoder delay.

        0 - Disables macroblock tree lookahead control.

      Video output parameters controlling dynamically adaptive bitrate configuration. These parameters are used to control maximum and minimum output thresholds when Adaptive Bitrate Control is enabled using --abrauto.


      1. videoBitrateMax (or vbmax)
      2.  - 

        Maximum Video output bitrate in Kilobits per sec. This setting should be at or above the configured video bitrate videoBitrate value. The videoBitrate will be the stream starting bitrate before any adjustment occurs. The dynamically adjusted bitrate will not exceed this ceiling threshold.

      3. videoBitrateMin (or vbmin)
      4.  - 

        Minimum Video output bitrate in Kilobits per sec. This setting should be at or below the configured video bitrate videoBitrate value. The videoBitrate will be the stream starting bitrate before any adjustment occurs. The dynamically adjusted bitrate will not fall below this floor threshold.

      5. videoFpsMax (or vfrmax)
      6.  - 

        Maximum output frame rate expressed as a floating point. This setting should be at or above the configured frame rate videoFps value. The videoFps will be the stream starting frame rate before any adjustment occurs. The dynamically adjusted frame rate will not exceed this ceiling threshold.

      7. videoFpsMin (or vfrmin)
      8.  - 

        Minimum output frame rate expressed as a floating point. This setting should be at or below the configured frame rate videoFps value. The videoFps will be the stream starting frame rate before any adjustment occurs. The dynamically adjusted frame rate will not fall below this floor threshold.
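      Taken together, the four thresholds above bound the adaptive range. A hedged sketch, assuming --abrauto enables adaptation as described earlier and with all names and values illustrative:

```shell
# Start at 300 Kb/s / 25 fps; adaptation stays within the floor and ceiling.
./bin/vsx --capture=live.sdp --stream=rtp://10.10.10.10:5004 --abrauto \
  --xcode="videoCodec=h264,videoBitrate=300,videoBitrateMax=600,videoBitrateMin=150,videoFps=25,videoFpsMax=30,videoFpsMin=10"
```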

      Audio Output Parameters


      1. audioCodec (or ac)
      2.  - 

        Audio Codec Name

        • aac - AAC
           
        • ac3 - A53 / AC3
           
        • amr - AMR-NB
           
        • g711a - G.711 alaw
           
        • g711u - G.711 mulaw
           
        • opus - OPUS
           
        • silk - SILK
           
        • vorbis - Vorbis
           
        • pcm - PCM signed 16bit Little Endian
           
        • ulaw - PCM 8 bit mulaw
           
        • alaw - PCM 8 bit alaw
      3. audioBitrate (or ab)
      4.  - 

        Audio output bandwidth per channel in bits per second.

        0 - (Default) Uses encoder default setting.

      5. audioForce (or af)
      6.  - 

        Force audio output transcoding even if output codec matches input codec type, sample rate, and channel configuration.

        0 - (Default) enable audio transcoding only if output codec does not match input codec type, sample rate, and channel configuration.

        1 - Force audio transcoding.

      7. audioSampleRate (or ar)
      8.  - 

        Audio output sampling frequency in HZ.

        0 - (Default) Reuses input sampling rate.

      9. audioChannels (or ac)
      10.  - 

        Audio output channel count.

        0 - (Default) Reuses input channel count.

      11. audioProfile (or ap)
      12.  - 

        Audio encoder specific profile.

      13. audioVolume (or av)
      14.  - 

        Audio output volume adjustment.

        0 - (Default) No volume adjustment.

        0 < n < 256 - Decrease volume by a factor of (8 - log2(n)).

        256 - Base setting resulting in no volume adjustment.

        256 < n <= 65536 - Increase volume by a factor of (log2(n) - 8).
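        The log2-based scaling rule above can be checked numerically. The helper below is purely illustrative (it is not part of OpenVSX) and simply evaluates the adjustment factor for a given audioVolume value:

```shell
# Illustrative only: evaluate the audioVolume adjustment described above.
vol_factor() {
  awk -v n="$1" 'BEGIN {
    l = log(n) / log(2)                      # log2(n)
    if (n < 256)       printf "decrease by %g\n", 8 - l
    else if (n == 256) printf "no adjustment\n"
    else               printf "increase by %g\n", l - 8
  }'
}
vol_factor 16     # decrease by 4
vol_factor 256    # no adjustment
vol_factor 1024   # increase by 2
```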


       
      3.3.2 Encoding Multiple Parallel Outputs

      OpenVSX is able to produce multiple encoder outputs for the same input media. The same --xcode= parameter string is used to define multiple encoder configurations by assigning a unique output index to any configuration parameter. For example, to define a second encoded output stream with a video bitrate of 500 Kb/s, include the index 2 following the parameter name, such as videoBitrate2=500. The second encoded stream will use the same properties as the first one, with the specified bitrate being unique. Up to four unique output instances can be defined.

      To enable pass-thru encoding of the original input stream the codec type videoCodec=passthru is used, such as videoCodec=passthru,videoCodec2=h264,videoBitrate2=500. If videoCodec2=passthru or videoCodec3=passthru is specified, the pass-thru output will always be placed into the first index. Alternatively, passthru=on can be specified, which turns on pass-thru encoding into the first output index.

      To access each additional encoder output stream the corresponding output index should be specified in the format specific client request. For example, to access the encoder output stream defined with the index 2, /2 should be appended to the format specific request for the stream, such as http://[SERVER IP]:[8080]/tslive/2 or rtsp://[SERVER IP]:1554/stream/2.

      Multiple encoder outputs can be used to provide adaptive bitrate streaming. If HTTPLive output is enabled, OpenVSX will automatically create a master playlist containing multiple output stream descriptions for HTTPLive adaptive bitrate stream switching.
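      A hedged sketch combining the points above: pass-thru of the source plus one additional H.264 encode, published over /tslive and RTSP (the input SDP name and ports are illustrative):

```shell
# Output index 1 is the pass-thru stream; index 2 is the 500 Kb/s encode,
# reachable at /tslive/2 or rtsp://[SERVER IP]:1554/stream/2.
./bin/vsx --capture=live.sdp --tslive=8080 --rtsp=1554 \
  --xcode="videoCodec=passthru,videoCodec2=h264,videoBitrate2=500"
```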

     

    3.4 Live Input Capture

     

      3.4.1 Arguments Controlling Stream Capture and Recording, -h capture

      1. --capture, -c
      2.  - 

        Enable live input stream capture.

      3. --in=
      4.  - 

        Input SDP file, capture string, or capture device of the input media to be processed.

        Refer to the output stream context for Input parameter examples and semantics.

      5. --out=
      6.  - 

        Output destination or filename. --out or --stream is used in conjunction with --capture to re-stream a live input.

        Refer to the output stream context for Output parameter examples and semantics.

      7. --dir=
      8.  - 

        Input storage directory path for recording capture. If this option is absent, any recorded file will be written to the OpenVSX home directory.

      9. --filter=
      10.  - 

        Capture specific input filter string. Each filter corresponds to the appropriate index of the input array. Filter parameters are supplied as a quoted comma separated list of key value pairs delimited by the "=" character.

        It is recommended to use an input SDP file when capturing one or more RTP streams instead of constructing a filter argument. The SDP file then serves as the input argument, such as --in=input.sdp

        Input capture filter required for all input stream types.

        type - Input Stream designation type.

        • h264 - H.264 over RTP using NAL packetization (RFC 3984)
        • mpg4 - MPEG-4 Part 2 over RTP (RFC 3016)
        • h263 - H.263
        • h263+ - H.263+ or H.263 plus
        • aac - AAC over RTP (RFC 3640)
        • amr - AMR over RTP (RFC 3267)
        • g711u - G.711 mu-law over RTP
        • g711a - G.711 a-law over RTP.
        • raw - Raw input. Raw stream data can be recorded "as-is" and is not dependent on codec specific packetization.
        • rgb - RGB888 (24 bits per pixel)
        • bgr - BGR888 (24 bits per pixel)
        • rgba - RGBA8888 (32 bits per pixel, 8 bit alpha mask)
        • bgra - BGRA8888 (32 bits per pixel, 8 bit alpha mask)
        • rgb565 - RGB565 (16 bits per pixel)
        • bgr565 - BGR565 (16 bits per pixel)
        • yuv420p - YUV420p (YUV 4:2:0 Y, U, V planar)
        • nv12 - YUV420sp (YUV 4:2:0 Y, UV, UV semi-planar)
        • nv21 - YUV420sp (YUV 4:2:0 Y, VU, VU semi-planar)
        • pcm - PCM 16 bit signed little endian
        • alaw - PCM 8 bit a-law
        • ulaw - PCM 8 bit mu-law
        • m2t - MPEG-2 Transport Stream. This type should be used when downloading a live stream within an MPEG-2 TS container via HTTP
        • flv - FLV file. This type should be used when downloading a live stream within an FLV container via HTTP

        Input capture filters useful for PCAP based capture.

        dst - Destination IP Address.

        src - Source IP Address.

        ip - Source or Destination IP Address.

        dstport - Destination Port.

        srcport - Source Port.

        port - Source or Destination Port.

        pt - RTP specific Payload Type.

        ssrc - RTP specific SSRC.

        Input audio stream capture filters.

        clock - Clock Rate in HZ (required for most audio streams).

        channels - Number of audio channels. (Defaults to 1).

        Input video stream capture filters.

        fps - FPS of input video stream.

        clock - RTP Timestamp Clock Rate in HZ.

        Input raw format filters.

        width - Video input horizontal resolution.

        height - Video input vertical resolution.

        size - Video input resolution given as a single string delimited by "x", for example size=640x480.

        Stream specific recording control.

        file - Output path of recording file. If file is not explicitly given the output file name will be automatically generated based on the input stream characteristics.

        For example, to capture an H.264 RTP video stream on port 5004 with RTP payload type 96, and an AAC audio stream with payload type 97: --capture="rtp://5004" --filter="pt=96, type=h264" --filter="pt=97, type=aac"
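        As recommended above, the same capture can be described by an SDP file instead of --filter arguments. The fragment below writes a hypothetical minimal SDP mirroring that example; the addresses, ports, and rtpmap details are illustrative only:

```shell
# Write a minimal input.sdp for an H.264 (pt 96) + AAC (pt 97) RTP capture.
cat > input.sdp <<'EOF'
v=0
o=- 0 0 IN IP4 127.0.0.1
s=capture
c=IN IP4 127.0.0.1
t=0 0
m=video 5004 RTP/AVP 96
a=rtpmap:96 H264/90000
m=audio 5006 RTP/AVP 97
a=rtpmap:97 mpeg4-generic/44100/1
EOF
```

        The file would then be passed as --capture=input.sdp (or --in=input.sdp).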

      11. --firxmit[=]
      12.  - 

        Controls how RTCP Feedback type Full Intra Refresh messages (RTCP FB FIR), as defined in RFC 5104, are sent from the application. The following behaviors are simultaneously affected by this parameter.

        RTCP FB FIR messages will not be sent to the originator of a multicast RTP transmission unless the configuration file parameter RTCPReplyToMulticastStream is enabled.

        Controls the transmission of RTCP FB FIR messages if requested by the local decoder. The default value is 1, meaning FIR messages may be sent to the remote input capture transmitter if required by the decoding process. If set to 0, FIR messages requested by the local decoder are disabled. This value can be individually controlled by the configuration file parameter FIRSendFromDecoder.

        Controls the transmission of RTCP FB FIR messages if requested by the local RTP receiver. The default value is 0, meaning FIR messages are not sent to the remote input capture transmitter if packet loss has led to corruption of the video bit-stream. If set to 1, FIR messages may be sent if corruption of the input video bit-stream has been detected. This value can be individually controlled by the configuration file parameter FIRSendFromInputCapture.

        This value is automatically set to 1 if the input is an SDP file containing the Codec Control Message Full Intra Refresh video media attribute, for example "a=rtcp-fb:* ccm fir".

        Controls the transmission of RTCP FB FIR messages when a client connects to a server published instance of the media output and the local output is not transcoded. An example is a connection to the HTTP URLs /tslive, /flvlive, /mkvlive, the RTSP, or the RTMP server listener. The default value is 1, which enables generation of an RTCP FB FIR message to the remote input capture transmitter if a key-frame is needed to begin the format specific output. If set to 0, no FIR message will be generated. This value has no effect if the local output is transcoded. This value can be individually controlled by the configuration file parameter FIRSendFromRemoteConnect.

        Controls the transmission of RTCP FB FIR messages upon reception of an RTCP FB FIR message from a remote RTP receiver. The default value is 0, meaning FIR requests are not sent to the remote RTP transmitter upon reception of an RTCP FB FIR. If set to 1, FIR requests may be sent if the local output is not transcoded and a FIR message has been received from a remote RTP receiver. This value can be individually controlled by the configuration file parameter FIRSendFromRemoteMessage.

      13. --maxrtpdelay=
      14.  - 

        Maximum RTP play-out buffer time controlling wait time for any out of order or lost packet. The default value is 100 ms.

      15. --maxrtpaudiodelay=
      16.  - 

        Maximum RTP play-out buffer time controlling wait time for any out of order or lost audio packet. The default value is 100 ms.

      17. --maxrtpvideodelay=
      18.  - 

        Maximum RTP play-out buffer time controlling wait time for any out of order or lost video packet. The default value is 100 ms. If RTCP NACK transmission is enabled (--nackxmit=1) then the default value is 500ms for missing video RTP packets.

      19. --nackxmit[=]
      20.  - 

        Controls if RTCP Feedback type NACK (Negative Acknowledgement) messages (RTCP FB NACK), as defined in RFC 4585, are sent from the application. NACK messages are used to signal the RTP sender that an RTP sequence number has not been received and is most likely lost. NACK messages are sent only for a video stream and not an audio stream.

        RTCP FB NACK messages will not be sent to the originator of a multicast RTP transmission unless the configuration file parameter RTCPReplyToMulticastStream is enabled.

      21. --overwrite
      22.  - 

        Enable overwriting of the recording output file if it already exists.

      23. --pes
      24.  - 

        Enable de-mux of each packetized MPEG-2 TS elementary stream and subsequent re-mux.

        If this argument is not specified then the input MPEG-2 TS stream will be directly replayed without decomposing each PES frame.

        If this argument is given then each MPEG-2 TS PES frame will be decomposed and recomposed. This is automatically enabled if transcoding is enabled, or for non MPEG-2 TS based capture.

      25. --queue=
      26.  - 

        Input packet queue number of slots applicable for MPEG-2 Transport Stream capture and recording. Actual size in bytes = queue * 12408.
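        The slot-to-bytes formula above is simple arithmetic; a quick illustrative check (the helper name is not part of OpenVSX):

```shell
# Bytes consumed by an MPEG-2 TS capture queue of N slots (N * 12408).
queue_bytes() { echo $(( $1 * 12408 )); }
queue_bytes 160   # 1985280 bytes, roughly 1.9 MB
```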

      27. --realtime
      28.  - 

        Enable download of capture input in real-time according to embedded input file timestamps. Useful for HTTP based capture, such as when loading a static FLV file from a web server to be streamed in real-time. If this option is not present, the input will be loaded without throttling and may overflow any input capture queue.

      29. --rembxmit[=]
      30.  - 

        Controls if RTCP Feedback type REMB (Receiver Estimated Maximum Bandwidth) messages (RTCP FB REMB) are sent from the application. REMB messages are used to inform the RTP sender of the maximum bandwidth that the network link is able to tolerate to maintain good quality. REMB messages are sent only for a video stream and not an audio stream.

        RTCP FB REMB messages will not be sent to the originator of a multicast RTP transmission unless the configuration file parameter RTCPReplyToMulticastStream is enabled.

      31. --rembxmitmaxrate=
      32.  - 

        Sets the highest possible bandwidth that will be reported if REMB transmission (--rembxmit) is enabled. If omitted, the ceiling of the max bitrate reported to the RTP sender is 800Kb/s. For example, --rembxmitmaxrate=1.2Mb/s or --rembxmitmaxrate=1200Kb/s

      33. --rembxmitminrate=
      34.  - 

        Sets the lowest possible bandwidth that will be reported if REMB transmission (--rembxmit=1) is enabled. If omitted, the floor of the bitrate reported to the RTP sender is 90Kb/s. For example, --rembxmitminrate=.1Mb/s or --rembxmitminrate=100Kb/s.

      35. --rtcprr[=]
      36.  - 

        RTCP Receiver Report interval in seconds. A non-zero value enables transmission of Receiver Reports to the sender. RTCP Receiver Reports are disabled by default. To enable RTCP Receiver Reports it is recommended to use a reasonable interval between 2.0 and 8.0 seconds. An interval of 5.0 seconds will be used if no value is specified. This value overrides the configuration file parameter RTCPReceiverReportInterval.

        RTCP messages will not be sent to the originator of a multicast RTP transmission unless the configuration file parameter RTCPReplyToMulticastStream is enabled.

      37. --rtmpnosig
      38.  - 

        Disable use of a digital signature in RTMP handshake.

      39. --rtmppageurl=
      40.  - 

        Specify an optional RTMPPageUrl string parameter used in the protocol Connect packet when connecting as a stream client to an RTMP server.

      41. --rtmpswfurl=
      42.  - 

        Specify an optional RTMPSwfUrl string parameter used in the protocol Connect packet when connecting as a stream client to an RTMP server.

      43. --rtpframedrop=
      44.  - 

        The RTP input capture frame dropping policy. This policy applies to the way video frames are re-assembled from RTP input packetization. The following values apply:

        0 - Do not attempt to drop any frames when packet loss is detected for an incoming video stream. This policy may pass corrupted video frames to the local decoder or to any stream output if the stream is not transcoded.

        1 - Attempt to drop a video frame when packet loss is detected for an incoming video stream. This policy may discard reference and non-reference video frames if sufficient corruption of the video payload is detected. This is the default drop policy.

        2 - Attempt to drop a video frame when any packet loss is detected for an incoming video stream. This policy will discard any video frame that is corrupted due to packet loss.

      45. --rtsp-interleaved[=]
      46.  - 

        Use RTSP TCP interleaved mode for all input media streams. Use --rtsp-interleaved=0 to force RTSP UDP/RTP capture input.

      47. --rtsp-srtp[=]
      48.  - 

        Force SRTP protection for all UDP RTSP input media streams. This setting is on by default when using an SSL/TLS RTSP control channel connection. Use --rtsp-srtp=0 to force RTSP UDP/RTP with an SSL/TLS RTSP control channel connection.

      49. --audq=
      50.  - 

        Input audio capture queue number of slots. The default value is specific to the input audio codec.

      51. --vidq=
      52.  - 

        Input video capture queue number of slots. The default value is specific to the input video codec.

      53. --retry[=]
      54.  - 

        Enable input method specific retry logic upon failure. Applicable to client mode capture methods such as HTTP, RTMP, and RTSP. The following values apply.

        --retry=0 - (Default) No input retry on failure to connect or recoverable error.

        --retry - Indefinite amount of retries unless an unrecoverable error is encountered.

        --retry=n - Retry up to n consecutive attempts.

      55. --stunrespond[=]
      56.  - 

        Enable STUN binding responses to be sent when receiving STUN binding requests on listener RTP / RTCP sockets. This argument can be followed by an optional STUN password parameter such as --stunrespond="5rGs9Ow+ST/MM3Sc". The STUN password is used for computing a MESSAGE-INTEGRITY HMAC STUN attribute. If the optional password is not given as a parameter, the value of the input SDP attribute a=ice-ufrag: will be used as the password.

     

    3.5 Video Conferencing

    OpenVSX can run as a video conferencing bridge for up to eight participants. A participant's video and audio streams are added to the video conference similar to how a PIP (Picture In Picture) is added through an HTTP(s) interface.

    An audio mixer is used to combine multiple audio streams into a common audio feed. The mixer provides a unique output for each conference participant's output audio stream, removing the participant's own input audio stream from the mixed output.

    The video conference can be broadcast to an unbounded number of read-only viewers using one of the many output formats supported by the server. The conference can be recorded for future replay.

     

      3.5.1 Arguments Controlling Video Conferencing, -h conference

      The following command line arguments control video conferencing. Video conferencing mode is enabled if either the --conference or --mixer argument is present on the command line.


      1. --conference
      2.  - 

        Enables video conferencing mode. Video Conferencing mode allows video and audio streams to be added and removed from the conference using the PIP HTTP(s) interface.

      3. --in=
      4.  - 

        Input media file or still image path to be used as default background of the conference overlay.

      5. --layout=
      6.  - 

        Sets the layout configuration of the arrangement of participant video on the main output overlay. Valid configurations are below:

        p2pgrid - Peer-to-peer Grid layout. Video participants are arranged in a grid-like pattern. If there are two participants in the conference then the session is treated like a two-way call with each participant seeing the other participant and not the grid layout. The grid layout is maintained for the recorded and webcast media content. This is the default configuration.

        grid - Grid layout. Video participants are always arranged in a grid-like pattern regardless of the number of participants.

        vad - Active speaker switching based on VAD analytics. The active speaker's video will cover most of the output video overlay.
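        A hedged sketch selecting the active-speaker layout above (the ports and codec settings are illustrative only):

```shell
# Conference with VAD-driven active speaker layout instead of the default grid.
./bin/vsx --conference --layout=vad --piphttp=8080 --tslive=8080 \
  --xcode="videoCodec=h264,videoFps=15,audioCodec=aac,audioSampleRate=16000"
```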

      7. --mixer[=]
      8.  - 

        Enables or disables the audio mixer. The audio mixer is enabled by default when the --conference argument is present. To disable the mixer use --mixer=0.

      9. --mixeragc[=]
      10.  - 

        Enables or disables audio Automatic Gain Control. AGC is enabled by default when using the audio mixer. To disable AGC use --mixeragc=0.

      11. --mixerdenoise[=]
      12.  - 

        Enables or disables audio denoise filtering. Audio denoise filtering is enabled by default when using the audio mixer. To disable denoise filtering use --mixerdenoise=0.

      13. --mixervad[=]
      14.  - 

        Enables or disables voice activity detection. VAD is enabled by default when using the audio mixer. To disable VAD use --mixervad=0.

      15. --piphttp[=]
      16.  - 

        The Picture In Picture (PIP) interface is used for dynamically adding or removing conference participants.

        Refer to the Picture In Picture (PIP) context for piphttp parameter examples and semantics.

     

    3.6 Command Line Examples

     

      3.6.1 Streaming Media Examples

      Stream a file using MPEG-2 TS encapsulation over RTP to a remote host at 10.10.10.10:5004.

      ./bin/vsx --verbose --in="path/inputfile.mp4" --stream=rtp://10.10.10.10:5004 --transport=m2t

      Stream a file via native codec specific encapsulation over RTP to a remote host at 10.10.10.10, with the video stream on port 5004 and the audio stream on port 5006.

      ./bin/vsx -v --in=path/inputfile.mp4 --stream=rtp://10.10.10.10:5004,5006 --transport=native

      Stream a file via native codec specific encapsulation over RTP to a remote host at 10.10.10.10, with the video stream on port 5004 and the audio stream on port 5006. The stream is also available via MPEG-2 TS over HTTP (/tslive), HTTPLive (/httplive), both on port 8080, via RTMP on port 1935, and via RTSP on port 1554. The HTTP resources are also accessible via SSL/TLS on port 8443. Adding --live allows automatic User-Agent based stream output method selection when connecting to the URL /live.

      ./bin/vsx -v --in="path/inputfile.mp4" --stream=rtp://10.10.10.10:5004,5006 --tslive=8080 --tslive=https://8443 --httplive=8080 --httplive=https://8443 --rtmp=1935 --rtsp=1554 --live=8080 --live=https://8443

      Record a live MPEG-2 TS capture sent from localhost via UDP on port 41394. The content will be saved to out.ts and any prior file content will be automatically overwritten.

      ./bin/vsx -v --capture=udp://127.0.0.1:41394 --filter="type=m2t,file=out.ts" --overwrite

      Record a live input capture sent from a remote host defined by the Session Description Protocol file input.sdp. The content will be saved in FLV format to the file output.flv and any prior file content will be automatically overwritten.

      ./bin/vsx -v --capture=input.sdp --flvwrite=output.flv --overwrite

      Stream a live input stream. The live MPEG-2 TS stream is captured via UDP on port 41394. The stream is sent via RTP to the remote host at 10.10.10.10 using MPEG-2 TS encapsulation. The stream is also available via HTTPLive on port 8080 and via RTSP on port 1554.

      ./bin/vsx -v --capture=udp://41394 --filter="type=m2t" --stream=rtp://10.10.10.10:5004 --httplive=8080 --rtsp=1554

      Stream a live input stream described by a Session Description Protocol file. The stream is sent via RTP to a remote host at 10.10.10.10 using MPEG-2 TS encapsulation. The stream is also available via HTTPLive on port 8080 and via RTSP on port 1554.

      ./bin/vsx -v --capture="inputstream.sdp" --stream=rtp://10.10.10.10:5004 --httplive=8080 --rtsp=1554

      Connect to a remote RTMP server such as Wowza, a CDN or YouTube, and publish a live stream using the credentials myuser and mypass. The stream application name is app-name and the stream name is stream-name.

      ./bin/vsx -v --capture="inputstream.sdp" --stream=rtmp://myuser:mypass@10.10.10.10:1935/app-name/stream-name

      Connect to a remote RTSP server and announce a live stream using the credentials myuser and mypass. The stream URI is stream-name.

      ./bin/vsx -v --capture="inputstream.sdp" --stream=rtsp://myuser:mypass@10.10.10.10:554/stream-name

      Stream a file via MPEG-2 TS encapsulation over RTP to 10.10.10.10:5004. The output is transcoded to H.264 at 300Kb/s, 320x240 at a constant 25fps. Audio output is AAC mono at 44.1KHz.

      ./bin/vsx -v --in="path/inputfile.mp4" --stream=rtp://10.10.10.10:5004 --xcode="videoCodec=264,videoBitrate=300,vx=320,vy=240,videoFps=25,videoOutputConstantFps=1,videoUpsampling=1,audioCodec=aac,audioBitrate=64000,audioSampleRate=44100,audioChannels=1"

      Stream a live input stream described by a Session Description Protocol file and offer it for download via HTTPLive. The output is transcoded into two unique bitrates and automatically packaged into an adaptive bitrate compatible .m3u playlist file available at the URL /httplive. Each bitrate specific HTTPLive output stream is also available at the transcoding output specific index URL, such as /httplive/1 and /httplive/2.

      ./bin/vsx -v --in="test.sdp" --httplive=http://8080 --httplive=https://8443 --xcode="videoCodec=264,videoBitrate=300,vx=320,vy=240,videoFps=25,videoOutputConstantFps=1,videoUpsampling=1,audioCodec=aac,audioBitrate=64000,audioSampleRate=44100,audioChannels=1,videoBitrate2=150"

      Stream and transcode a live input stream to be viewed by an HTML5 capable web browser such as Chrome. The video is encoded in VP8 and the audio in Vorbis and encapsulated in a WebM container format. The output can be viewed in an HTML5 capable web browser by connecting to http://[SERVER IP]:[8080]/mkv or https://[SERVER IP]:[8443]/mkv

      ./bin/vsx -v --in="test.sdp" --mkvlive=http://8080 --mkvlive=https://8443 --xcode="videoCodec=vp8,videoBitrate=300,vx=320,vy=240,videoFps=25,videoEncoderSpeed=3,videoOutputConstantFps=1,videoUpsampling=1,audioCodec=vorbis,audioBitrate=32000,audioSampleRate=44100,audioChannels=1"


       
      3.6.2 Video Conferencing Setup

      Set up a video conferencing server. The video output will be encoded using the VP8 codec at 15fps, and each input participant's aspect ratio will be preserved in the output picture. The audio mixer will run at 16KHz and the default audio output will use the AAC codec. Participants can be added and removed by connecting to the PIP server listener on HTTP port 8080. The video conference output is available using /tslive on port 8080 and RTSP on port 1035.

      ./bin/vsx --conference --piphttp=8080 --xcode="videoCodec=vp8,videoEncoderSpeed=3,videoFps=15,videoGOPMaxMs=2000,videoGOPMinMs=1000,padAspectRatio=1,audioCodec=aac,audioSampleRate=16000" --tslive=8080 --rtsp=1035

      The following HTTP request URL will add a conference participant with VP8 video output sent to 10.10.10.10:2000 using RTP payload type 97, and SILK audio output at 8KHz to port 2002 using RTP payload type 96.

      http://[SERVER IP]:8080/pip?pipstart&in=live1.sdp&out=rtp://10.10.10.10:2000,2002&rtppayloadtype=97,96&audioCodec=silk&audioSampleRate=8000

      The conference can be viewed by connecting to the /tslive listener at http://[SERVER IP]:8080/tslive. The audio output available will be AAC at 16KHz. The RTP output to the conference participant will be encoded using SILK at 8KHz. Each audio participant will receive independently mixed audio output which does not contain its own audio input samples.


       
      3.6.3 Miscellaneous Examples

      Analyze the video elementary stream within a container file.

      ./bin/vsx --analyze=inputfile.mp4

      Found SPS timing: 2500000 Hz, tick 104271 Hz (23.976 fps)
      inputfile.mp4 size: 119946.0KB, duration: 00:02:03.1231
      2500000 Hz, tick 104271 Hz (23.9760 fps)
      H.264 High profile (0x64) level 41
      1920x816 CABAC poc:0 ref_frames:3 frame_mbs_only YUV420
      slices:6077 vcl:6075 decode errors:0
      I:170, P:1635, B:1148, SI:0, SP:0, SPS:1, PPS:1, SEI:3122, unknown:0
      frames: 2953, 2.1 slices/fr GOP avg: 17.4, max: 24
      I:170 84.6KB/fr, P:1635 47.5KB/fr, B:1148 24.3KB/fr
      bandwidth avg: 7980.6Kb/s, max(1sec): 15821.9Kb/s, (300ms): 19265.8Kb/s
      

      Add the --verbose option to dump the entire contents of the H.264 SPS / PPS and to enumerate and examine each NAL unit.

      Create an mp4 file given a raw H.264 Annex B formatted file. The output here will take the name test.mp4.

      ./bin/vsx --create=input_annex_b.h264 --out=test.mp4

      Create or add a track to an existing mp4 file given a raw AAC ADTS formatted input file.

      ./bin/vsx --create=input_adts.aac --out=test.mp4

      --fps=[ video frame rate ] may need to be explicitly given if no timing information is contained within the SPS in the raw H.264 Annex B file.

      Dump the entire box structure of an mp4 container file.

      ./bin/vsx --dump=test.mp4

      Use --verbose to include detailed sample description box contents.

      Dump the entire frame structure and summary of an MPEG-2 TS file.

      ./bin/vsx --dump=test.m2t

      Dump the decoding time-to-sample (STTS) table of an mp4 track to a file.

      ./bin/vsx --dump=test.mp4 --stts > test.stts

      The dumped STTS info captures the sync between the audio and video tracks and can be manually edited to alter lip-sync. Use --stts=[ dumped stts file ] as an argument when creating an mp4 container file to preserve audio / video sync.
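      The dump / edit / rebuild cycle can be sketched as follows. The file names are illustrative, and the vsx invocations are composed as strings so the sequence is visible without running the binary:

```shell
#!/bin/sh
# Sketch of the lip-sync editing workflow described above (file names illustrative).
DUMP_CMD="./bin/vsx --dump=test.mp4 --stts > test.stts"                   # 1. dump the STTS table
# 2. hand-edit test.stts to adjust the audio / video sync
REBUILD_CMD="./bin/vsx --create=test.h264 --out=new.mp4 --stts=test.stts" # 3. rebuild preserving sync
echo "$DUMP_CMD"
echo "$REBUILD_CMD"
```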

      Extract the raw H.264 and AAC contents of an mp4 container.

      ./bin/vsx --extract=test.mp4

      This will create test.h264 containing the H.264 NALs in Annex B format and test.aac in AAC ADTS format.

      --extract can be used to convert .flv files to .mp4 by extracting the raw contents and then creating an mp4 container. .mp4 is preferred over .flv because it allows fast-start playback of downloaded files and better seeking during playback.
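      As a sketch, the conversion amounts to three vsx invocations. The input name input.flv is illustrative, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: convert input.flv to input.mp4 via extract + create, per the text above.
STEP1="./bin/vsx --extract=input.flv"                  # yields input.h264 and input.aac
STEP2="./bin/vsx --create=input.h264 --out=input.mp4"  # create the mp4 with the video track
STEP3="./bin/vsx --create=input.aac --out=input.mp4"   # add the AAC audio track
echo "$STEP1"
echo "$STEP2"
echo "$STEP3"
```

Remember that --fps=[ video frame rate ] may be needed in the second step if the extracted H.264 stream carries no SPS timing information.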

      Extract the raw video and audio contents of an MPEG-2 TS file.

      ./bin/vsx --extract=test.m2t

      If the MPEG-2 TS file was captured from broadcast TV, this will create test.h262 containing the MPEG-2 elementary stream and test.ac3 containing the A/52 (Dolby Digital) audio contents.

     

    4.0 Streaming Transport Protocols

     

      4.1 Streaming Output Using RTP

      OpenVSX can stream output via RTP or SRTP (Secure-RTP) to both unicast and multicast addresses. The same stream can be output to multiple unicast addresses to allow display on multiple recipient clients.

      Stream output using either DTLS or DTLS-SRTP must use unicast addresses.


       
      4.2 Streaming Output Using RTSP

      OpenVSX has a built-in RTSP server to allow clients to view live content via RTSP. Media is encapsulated and delivered via RTP over UDP or TCP interleaved mode. The server supports both RTSP and RTSP tunneled over HTTP. The following URL can be used directly by an RTSP player:

      rtsp://[SERVER IP]:[1554]/live.sdp

      From any Web Browser the following URL can be used to automatically link to the RTSP server:

      http://[SERVER IP]:[8080]/live

      or

      http://[SERVER IP]:[8080]/rtsp

      To customize the presentation of the RTSP HTML web page made available to clients, edit the file html/rsrc/rtsp_embed.html.

      OpenVSX can also capture a live RTSP stream as a form of input.


       
      4.3 Streaming Output Using RTMP

      OpenVSX has a built-in RTMP server to allow clients to view live content via RTMP. Media is encapsulated via the RTMP protocol allowing an RTMP player such as Flash player to load and view live content. The following URL can be used directly by an RTMP player:

      rtmp://[SERVER IP]:[1935]/live.sdp

      From any Web Browser the following URL can be used to view the live stream via an embedded Flash applet.

      http://[SERVER IP]:[8080]/live

      or

      http://[SERVER IP]:[8080]/rtmp

      To customize the presentation of the RTMP HTML web page made available to clients, edit the file html/rsrc/rtmp_embed.html.

      OpenVSX can also load a live or pre-recorded RTMP stream, in client mode or server mode, as a form of input.


       
      4.4 Streaming Output Using FLV Encapsulation Over HTTP

      OpenVSX has a built-in FLV output server to allow clients to view live content via FLV encapsulation. Media is encapsulated via the FLV file format allowing an FLV file handler such as Flash player to load and view live content as if it were a static file. The following URL can be used directly by an FLV player:

      http://[SERVER IP]:[8080]/flvlive

      From any Web Browser the following URL can be used to view the live stream via an embedded Flash applet.

      http://[SERVER IP]:[8080]/live

      or

      http://[SERVER IP]:[8080]/flv

      Multiple HTTP listeners can be used to provide both HTTP and HTTPS listeners using the following options --flvlive=8080 --flvlive=https://8443 --live=8080 --live=https://8443. A web browser connecting to https://[SERVER IP]:8443/flv or https://[SERVER IP]:8443/live will automatically receive a link to access the underlying FLV encapsulated media stream via SSL/TLS.

      To customize the presentation of the FLV HTML web page made available to clients, edit the file html/rsrc/http_embed.html.

      OpenVSX can also load a live or pre-recorded FLV encapsulated stream via HTTP or HTTPS as a form of input.


       
      4.5 Streaming Output Using Matroska / WebM Encapsulation

      OpenVSX has a built-in Matroska / WebM output server to allow clients to view live content via Matroska / WebM encapsulation. A WebM media file contains VP8 video and Vorbis audio using Matroska encapsulation with the document type webm. Media is encapsulated via the Matroska file format allowing a Matroska or WebM file handler such as the Chrome browser to load and view live content as if it were a static file. The following URL can be used directly by a Matroska or WebM HTML5 aware Web Browser such as Google Chrome:

      http://[SERVER IP]:[8080]/mkvlive

      From a Matroska or WebM aware Web Browser the following URL can be used to view the live stream.

      http://[SERVER IP]:[8080]/live

      or

      http://[SERVER IP]:[8080]/mkv

      Multiple HTTP listeners can be used to provide both HTTP and HTTPS listeners using the following options --mkvlive=8080 --mkvlive=https://8443 --live=8080 --live=https://8443. A web browser connecting to https://[SERVER IP]:8443/mkv or https://[SERVER IP]:8443/live will automatically receive a link to access the underlying Matroska / WebM encapsulated media stream via SSL/TLS.

      To customize the presentation of the Matroska HTML web page made available to clients, edit the file html/rsrc/mkv_embed.html.


       
      4.6 Streaming Output Using MPEG-DASH

      OpenVSX has a built-in MPEG-DASH compatible output server to allow DASH enabled clients to download and view live content. OpenVSX maintains and publishes one or more Media Presentation Description (MPD) files which include program stream meta-data. Media is encapsulated in ISO Base Media File Format (BMFF) using an mp4 container file containing movie fragment (MOOF) boxes. The following URL can be used to access the live stream embedded in an MPEG-DASH player:

      http://[SERVER IP]:[8080]/dash

      The following URL can be used to access an MPD containing the SegmentTemplate XML element:

      http://[SERVER IP]:[8080]/dash/segtemplate.mpd

      Multiple HTTP listeners can be used to provide both HTTP and HTTPS listeners using the following options --dash=8080 --dash=https://8443. A DASH enabled client can connect to https://[SERVER IP]:8443/dash/segtemplate.mpd to access the published MPD.

      • Segment duration
      • OpenVSX creates distinct media segments with a duration between 5 and 10 seconds by default. Each segment should begin with a video key frame. To decrease the real-time media delay, change the configuration line item dashmaxduration to a lower value.

      • Segment output location
      • When using OpenVSX to continuously host MPEG-DASH Streaming content it is recommended to change the directory of the continuously updated segments to be mapped to an in-memory file system. This will reduce unnecessary disk activity.

        • sudo mkdir /usr/local/ram
        • sudo mount -t tmpfs -o size=102400K tmpfs /usr/local/ram
        • Update the configuration line item dashdir=/usr/local/ram

         
      • MPEG-DASH Adaptive Bitrate Streaming
      • OpenVSX will automatically produce multiple MPEG-DASH segmented output streams when multiple-output transcoding is enabled. Each published MPEG-DASH MPD will contain multiple Representation elements describing each output stream.


       
      4.7 Streaming Output Using MPEG-TS Over HTTP

      When streaming across the internet or through firewalls, clients may need to receive a live broadcast over TCP / HTTP instead of UDP / RTP. Streaming over HTTP can be used simultaneously with content delivery over RTP. OpenVSX uses the Content-Type: video/x-mpeg-ts in the HTTP response headers.

      The live stream can be accessed at the URL:

      http://[SERVER IP]:[8080]/live

      or

      http://[SERVER IP]:[8080]/tslive

      Multiple HTTP listeners can be used to provide both HTTP and HTTPS listeners using the following options --tslive=8080 --tslive=https://8443 --live=8080 --live=https://8443.

      OpenVSX can also load a live MPEG-2 TS stream via HTTP or HTTPS as a form of input capture.


       
      4.8 Streaming Output Using HTTPLIVE

      OpenVSX has full built-in support for HTTP Live Streaming. HTTP Live Streaming is a standard proposed by Apple which is used to deliver live media to any Apple mobile device such as an iPhone, iPod, or iPad, as well as any Mac OS X (>=10.6) machine with Safari and QuickTime X. HTTP Live Streaming works by segmenting a live output stream into small distinct chunks. These chunks are referenced from a continuously updated playlist file available for download via HTTP. This mechanism has an inherent delay of 15-35 seconds, depending on the configuration settings.

      The live stream can be accessed from a supported Apple device at the URL:

      http://[SERVER IP]:[8080]/live

      or

      http://[SERVER IP]:[8080]/httplive

      Multiple HTTP listeners can be used to provide both HTTP and HTTPS listeners using the following options --httplive=8080 --httplive=https://8443 --live=8080 --live=https://8443. A web browser connecting to https://[SERVER IP]:8443/httplive or https://[SERVER IP]:8443/live will automatically receive a link to access the underlying media via SSL/TLS.

      To enable HTTP Live Streaming ensure that the iPhone / HTTPLive checkbox is selected in the UI. This will enable any media file, or a live captured stream to be available through the HTTP Live interface.

      Live streaming media can also be opened from Safari on any Mac with QuickTime X. When opening the URL directly from QuickTime X use /httplive/out.m3u8 (e.g. http://10.10.10.10:8080/httplive/out.m3u8).

      OpenVSX can also load a live HTTPLive playlist stream via HTTP or HTTPS as a form of input.

       

      Chunk Segment Duration

      OpenVSX creates segment chunks of the recommended 10 second duration by default. The last three segments are listed in the current .m3u8 playlist file. To decrease the real-time media delay, change the configuration line item httpliveduration from 10.0 seconds to 5.0. A duration of less than 5 seconds is not recommended.

       

      Chunk Output Location

      When using OpenVSX to continuously host HTTP Live Streaming content it is recommended to change the directory of the continuously updated chunks to be mapped to an in-memory file system. This will reduce unnecessary disk activity.

      sudo mkdir /usr/local/ram
      sudo mount -t tmpfs -o size=102400K tmpfs /usr/local/ram
      Update the configuration line item httplivedir=/usr/local/ram
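      Taken together, a low-latency, RAM-backed HTTPLive setup might use the following etc/vsx.conf line items (the duration value is illustrative):

```
httpliveduration=5.0
httplivedir=/usr/local/ram
```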
       
       

      OpenVSX comes with integrated functionality to provide a complete HTTP Live Streaming solution by both automatically segmenting an output stream and hosting it via the bundled web server. This approach requires virtually no configuration. However, for large scale production environments that handle a large number of HTTP requests, it may be desirable to use a third-party web server such as Apache to deliver the media content to clients. In that case:

      Update the configuration line item httplivedir to an existing path within your web server directory tree.

      In the same directory, create your own index.html with the following tag placed inside the body tag.

      <video controls autoplay src="out.m3u8"></video>


       
       

      HTTPLive Adaptive Bitrate Streaming

      OpenVSX will automatically produce multiple HTTPLive segmented output streams when multiple-output transcoding is enabled. The default HTTPLive HTTP request (http://[SERVER IP]:8080/httplive) returns an index file with an embedded video tag pointing to a master multi.m3u8 playlist file. The master playlist file includes references to each bitrate specific playlist file for automatic selection by an HTTPLive client. Alternatively, an HTTPLive client can request a bitrate specific output stream by specifying its output index, such as http://[SERVER IP]:[8080]/httplive/2

       

      5.0 Integrated Web Server

      The OpenVSX Web Server is available over HTTP on the default port 8080. The same web services can be made available on multiple address:port listeners, as well as over SSL/TLS.


         
        5.1 Virtual URLs

        The OpenVSX integrated Web Server is used to host the following virtual URLs when live stream output is enabled.

        1. /config

          Provides a dynamic runtime configuration interface. Available if --configsrv is given. The configuration interface is used by OpenVCX to perform runtime updates such as conference participant hold, STUN / ICE binding updates, and RTP streaming address changes. The configuration interface can also be used to adjust transcoder settings such as bitrate adjustments. The following URI parameter keys are supported:

          xcode - An updated transcoder configuration passed as a comma separated list of key value pairs. Any running encoder configuration settings will be updated on the fly. The actual underlying running encoder must support this functionality.

          reset - Set to 1 to initiate a full reset of the transcoder configuration. This will cause any transcoders to be closed and re-opened.

        2. /dash

          Returns an embedded MPEG-DASH player to access live output using MPEG-DASH. Available if the --dash option is present.

          /dash/segtemplate.mpd - Loads the MPD variant which uses the SegmentTemplate element to describe the published media.

        3. /flv

          Returns an embedded Flash player to access live output using FLV encapsulation as if loading a static FLV file. Available if the --flvlive and --live options are present.

        4. /flvlive

          Direct access to live media output encapsulated using FLV over HTTP. Available if the --flvlive option is present.

        5. /live

          Examines the client User-Agent HTTP header and automatically chooses which output format to return. Available if the --live option is present.

          The /live URL is intended to be the universal connection path to an input stream from any client. OpenVSX will adapt the output format of the stream to be the same as either /rtmp, /rtsp, /flvlive, or /httplive. The specific stream format is determined based on the User-Agent header lookup of the device type configuration defined in /etc/devices.conf

        6. /httplive

          Direct access to live media output for Apple HTTP Live Streaming clients. Available if the --httplive option is present.

        7. /mkv

          Returns an embedded HTML5 video tag to access live output using Matroska / WebM encapsulation as if loading a static Matroska / WebM file. Available if the --mkvlive and --live options are present.

          The /mkv URL is synonymous with /webm.

        8. /mkvlive

          Direct access to live media output encapsulated using Matroska / WebM over HTTP. Available if the --mkvlive or --webmlive option is present.

        9. /pip

          Allows for runtime configuration of a PIP (Picture In Picture). This URL is also used to add and remove participants to a video conference.

          Configuration parameters are passed as URL key value pairs in the HTTP(S) GET request to the /pip URL. For example:

          http://[SERVER IP]:8080/pip?pipstart&in=liveinput.sdp&pipxright=0&xcode=videoWidth=160,videoHeight=120

          pipstart - If present, tries to start a new PIP. The result of the operation is returned in the HTTP response as a result code equal to the index of the new PIP. The result code should be used as the parameter to pipstop= to stop the PIP. A code < 0 indicates an error. For example, http://[SERVER IP]:8080/pip?pipstart&in=liveinput.sdp&pipxright=0&xcode=videoWidth=160,videoHeight=120 starts a new PIP overlay with dimensions of 160x120 pixels located at the upper right corner of the main overlay.

          pipstop - Stops the PIP at the given index. The result of the operation is returned in the HTTP response as a result code where 0 indicates success and a code < 0 indicates an error. For example, http://[SERVER IP]:8080/pip?pipstop=1 stops the PIP at index 1.

          in - The PIP input media file path or SDP file to be processed. For example, &in=liveinput.sdp or &in=logo.png.

          out - The RTP output destination of the video conference participant. Refer to the out parameter semantics found under Arguments Controlling RTP / SRTP Streaming. This option is only valid when using video conferencing mode (if --conference or --mixer is present on the command line). For example, &out=rtp://10.10.10.10:5004,5006.

          rtppayloadtype - PIP RTP output stream payload type(s). Refer to the rtppayloadtype parameter semantics found under Arguments Controlling RTP / SRTP Streaming. This option is only valid when using the &out= parameter to add a video conference endpoint.

          xcode - Any PIP formatting configuration passed as a list of key value pairs. These options take the same format as the --xcode= command line parameters documented under Transcoder Configuration. Only options specific to PIP output dimensions, scaling type, cropping, padding, and frame rate (applicable to non-static PIP formats) are processed. PIP audio parameters are only valid when using the video conferencing mixer. For example, to specify that the PIP RTP output audio stream should be encoded at 8KHz with the SILK codec, use &xcode=audioCodec=silk,audioSampleRate=8000.

          pipalphamax - The PIP maximum alpha masking value. Refer to the pipalphamax parameter semantics found under Arguments Controlling Picture In Picture.

          pipalphamin - The PIP minimum alpha masking value. Refer to the pipalphamin parameter semantics found under Arguments Controlling Picture In Picture.

          pipx - The horizontal (x axis) placement of the left edge of the PIP relative to the left edge of the main picture.

          pipxright - The horizontal (x axis) placement of the right edge of the PIP relative to the right edge of the main picture.

          pipy - The vertical (y axis) placement of the top edge of the PIP relative to the top edge of the main picture.

          pipybottom - The vertical (y axis) placement of the bottom edge of the PIP relative to the bottom edge of the main picture.

          pipzorder - The PIP z axis placement order as a signed integer. Refer to the pipzorder parameter semantics found under Arguments Controlling Picture In Picture.

        10. /rtmp

          Returns an embedded Flash player to access live output via RTMP. Available if the --rtmp and --live options are present.

        11. /rtsp

          Returns a link to the live output available via RTSP. Available if the --rtsp option is present.

        12. /status

          Returns status information about the server. Available if --statusmax= is set to a value > 0. The following URI parameter keys are supported to control which output is displayed. If no URI parameters are present, the output option is used as the default. For example, the following URL can be used to view the current stream statistics: http://[SERVER IP]:[8080]/status?streamstats

          output - Shows how many active output sessions are being serviced. The output status URL is intended to be used by the OpenVSX-WP web portal to determine how many concurrent output sessions each stream processor is handling.

          streamstats - Displays the stream output statistics. Statistics include the overall throughput and the average burst rate over the past 2 seconds and past 8 seconds. RTP stream output will contain any RTCP Receiver Report metrics such as reported packet loss. TCP stream output will contain the current state of the output buffer. The statistics response is returned as URL key value pairs. The same statistics are available via the --streamstats command line argument.

        13. /tslive

          Direct access to live media output encapsulated over MPEG-2 TS. Available if the --tslive option is present.
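        As an illustration, the /status and /config interfaces can be exercised with simple GET requests. SERVER_IP is a placeholder, the exact /config query shape is an assumption based on the xcode parameter description above, and the URLs are printed rather than sent:

```shell
#!/bin/sh
# Compose example queries against the virtual URLs above (not sent here).
SERVER_IP=10.10.10.10

# Current stream statistics, per the /status description.
STATS_URL="http://${SERVER_IP}:8080/status?streamstats"

# Assumed query shape: the /config xcode key takes comma separated
# key value pairs such as videoBitrate.
RATE_URL="http://${SERVER_IP}:8080/config?xcode=videoBitrate=250"

echo "$STATS_URL"
echo "$RATE_URL"
# To issue a request: curl -s "$STATS_URL"
```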

         

        6.0 Troubleshooting

           
          6.1 Failure To Start

          OpenVSX fails to start due to missing library dependencies.

          The OpenVSX executable binary bin/vsxbin should be launched from the wrapper script bin/vsx. The OpenVSX executable binary may fail to start due to the following linker error:

          ./bin/vsxbin: error while loading shared libraries: libvsx.so:
          cannot open shared object file: No such file or directory

          or

          ./bin/vsxbin: error while loading shared libraries: libxcode.so:
          cannot open shared object file: No such file or directory
            

          This means that the system library loader cannot find the OpenVSX shared libs. The following command tells the shell where to look for shared libraries:

          export LD_LIBRARY_PATH=./lib

           

          OpenVSX fails to start with the following system message:

          ./bin/vsxbin: error while loading shared libraries: libvsx.so:  cannot restore segment prot after reloc: Permission denied
          

          or

          ./bin/vsxbin: error while loading shared libraries: libxcode.so:  cannot restore segment prot after reloc: Permission denied
          

          On some systems certain kernel security extensions may prohibit shared libraries from loading correctly. To override this do:

          chcon -t texrel_shlib_t lib/libvsx.so

          chcon -t texrel_shlib_t lib/libxcode.so

           

          Unable to find license file

          If you have just installed the license file 'license.dat' into etc\license.dat but OpenVSX fails to read the license, ensure that you have the entry license=etc\license.dat in the OpenVSX configuration file etc\vsx.conf. This may occur if you are starting OpenVSX by double-clicking from File Explorer.


           
          6.2 64bit Linux Support

          OpenVSX for Linux is compiled as a 32bit ELF executable. The vsxbin binary distribution should be able to run on most major 32 or 64bit Linux distributions based on the x86 architecture. To run on 64bit systems, 32bit library support needs to be installed.

          On CentOS / Fedora / RedHat systems this can typically be done with the following command:

          sudo yum install ia32-libs

          or

          sudo yum install glibc.i686

          On Ubuntu / Debian systems this can typically be done with the following command:

          sudo apt-get install ia32-libs

          If vsxbin still fails to start, you should check whether all library dependencies are fulfilled by running the following command:

          ldd ./bin/vsxbin


           
          6.3 Problems Loading SSL Services

          Some clients may fail to load media links via HTTPS (SSL) if the media URL being referenced from a web page points to a different origin than the web page. For instance, if publishing HTTPLive services over HTTPS, the URL https://[SERVER FQDN / IP]/httplive may fail to load any media from iOS devices if the .m3u8 playlist contains a different originating URL. This can be addressed by setting the localhost configuration option in the OpenVSX configuration file to be consistent with the server's public FQDN.

          In etc/vsx.conf

          localhost=httplive.cdn.mycompany.com

          An example OpenVSX command line to publish an HTTPLive stream using SSL on port 8443 would be:

          ./bin/vsx --in=input.mp4 --httplive=https://0.0.0.0:8443 --httpliveurlhost="https://httplive.cdn.mycompany.com:8443/httplive"


           

           

           

 

 

Don't hesitate to contact us via our contact page or email us at openvcx@gmail.com.