docs: update (#5152)

Alessandro Ros, 2025-11-18
commit 8505e8d83f (parent 9a82874601)
14 changed files with 49 additions and 11 deletions


The resulting stream is available in the path `/cam`.

The Raspberry Pi Camera can be controlled through a wide range of parameters, which are listed in the [configuration file](/docs/references/configuration-file).
If you want to run the server inside Docker, you need to use the `1-rpi` image and launch the container with some additional flags:
```sh
docker run --rm -it \
--tmpfs /dev/shm:exec \
-v /run/udev:/run/udev:ro \
-e MTX_PATHS_CAM_SOURCE=rpiCamera \
bluenviron/mediamtx:1-rpi
```
Be aware that precompiled binaries and Docker images are not compatible with cameras that require a custom `libcamera` (like some ArduCam products), since they come with a bundled `libcamera`. If you want to use a custom one, you can [compile from source](/docs/other/compile#custom-libcamera).


To decrease the latency, you can:
- try decreasing the `hlsPartDuration` parameter
- try decreasing the `hlsSegmentDuration` parameter
- try decreasing the interval between the random access frames of the video track, which are frames that can be decoded independently of the others. The server adjusts the segment duration in order to include at least one random access frame in every segment. This interval can be changed in two ways:
  - if the stream is hardware-generated (e.g. by a camera), there's usually a setting called "Key Frame Interval" in the camera configuration page
  - otherwise, the stream must be re-encoded. It is possible to tune the random access frame interval by using ffmpeg's `-g` option:
```sh
ffmpeg -i rtsp://original-stream -c:v libx264 -pix_fmt yuv420p -preset ultrafast -b:v 600k -max_muxing_queue_size 1024 -g 30 -f rtsp rtsp://localhost:$RTSP_PORT/compressed
```


There are several ways to change the configuration:
- available in the root folder of the Docker image (`/mediamtx.yml`); it can be overridden in this way:
```sh
docker run --rm -it --network=host -v "$PWD/mediamtx.yml:/mediamtx.yml:ro" bluenviron/mediamtx:1
```
The configuration can be changed dynamically while the server is running (hot reloading) by writing to the configuration file. Changes are detected and applied without disconnecting existing clients, whenever possible.
This method is particularly useful when using Docker; any configuration parameter can be changed by passing environment variables with the `-e` flag:
```sh
docker run --rm -it --network=host -e MTX_PATHS_TEST_SOURCE=rtsp://myurl bluenviron/mediamtx:1
```
3. By using the [Control API](control-api).
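As a sketch of how the environment-variable mapping works: uppercase the parameter path, replace dots with underscores, and prepend `MTX_`. The parameter `paths.test.source` below (the `source` setting of a path named `test`) is just an example:

```sh
# Environment variable name = "MTX_" + parameter path,
# uppercased, with dots replaced by underscores.
param="paths.test.source"
echo "MTX_$(echo "$param" | tr 'a-z.' 'A-Z_')"
# prints: MTX_PATHS_TEST_SOURCE
```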


# Embed streams in a website
Live streams can be embedded into an external website by using the WebRTC or HLS protocol. Before embedding, check that the stream is ready and can be accessed with the intended protocol by using the URLs mentioned in [Read a stream](read).
## WebRTC
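As a minimal sketch, the WebRTC read page served by the server can be placed in an iframe (assuming the default WebRTC listener on port 8889 and a hypothetical path named `mystream`):

```html
<!-- Host, port and path name are assumptions; adjust them to your setup. -->
<iframe src="http://localhost:8889/mystream" scrolling="no"></iframe>
```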


# Log management
By default, log entries are printed to the console (stdout). It is possible to write logs to a file by using the `logDestinations` and `logFile` settings:
```yml
# Destinations of log messages; available values are "stdout", "file" and "syslog".
logDestinations: [file]
# If "file" is in logDestinations, this is the file which will receive the logs.
logFile: mediamtx.log
```
The log file can be periodically rotated or truncated by using an external utility.
On most Linux distributions, the `logrotate` utility is in charge of managing log files. It can be configured to handle the _MediaMTX_ log file too by creating a configuration file in `/etc/logrotate.d/mediamtx` with this content:
```
/my/mediamtx/path/mediamtx.log {
daily
copytruncate
rotate 7
compress
delaycompress
missingok
notifempty
}
```
This configuration rotates the log file every day, adding a `.NUMBER` suffix to older copies:
```
mediamtx.log.1
mediamtx.log.2
mediamtx.log.3
...
```
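Assuming `logrotate` is installed and the configuration file above is in place, the setup can be checked with a dry run before relying on it:

```sh
# debug mode: print what would be done, without rotating anything
logrotate -d /etc/logrotate.d/mediamtx
```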


If there's a NAT / container between server and clients, it must be configured to…

```sh
docker run --rm -it \
-p 8189:8189/udp
....
bluenviron/mediamtx:1
```
If you still have problems, the UDP protocol might be blocked by a firewall. In that case, switch to the TCP protocol by enabling the local TCP listener:
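For WebRTC, for instance, this can be done in the configuration file. This is a sketch: `webrtcLocalTCPAddress` exists in recent MediaMTX versions, but verify the exact parameter name against your version's configuration file:

```yml
# Local TCP listener for WebRTC; lets clients fall back to TCP
# when UDP traffic is blocked.
webrtcLocalTCPAddress: :8189
```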


@ -1,8 +1,11 @@
# Decrease packet loss
MediaMTX is meant for routing live streams, and makes use of a series of protocols and techniques which try to preserve the real-time aspect of streams and minimize latency at the cost of losing packets in transit. In particular:

- most protocols are built on UDP, which is an "unreliable transport", specifically picked because it allows dropping late packets in case of network congestion.
- there's a circular buffer that stores outgoing packets and starts dropping packets when full.

Packet losses are usually detected and printed in the MediaMTX logs.
If you need to improve stream reliability and decrease packet losses, the first thing to do is to check whether the network between the MediaMTX instance and the intended publishers and readers has sufficient bandwidth for transmitting the media stream. Most of the time, packet losses are caused by a network that is not adequate for this purpose. This limitation can be overcome either by recompressing the stream with a lower bitrate, or by upgrading the physical network infrastructure (routers, cables, Wi-Fi, firewalls, topology, etc.).
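If bandwidth is sufficient and losses originate from the server's own outgoing buffer, the circular buffer mentioned above can be enlarged in the configuration file. This is a sketch: `writeQueueSize` is the parameter name in recent MediaMTX versions; check the configuration file reference for your version:

```yml
# Size of the queue of outgoing packets.
# A larger queue absorbs longer bursts before packets are dropped,
# at the cost of additional memory and potential latency.
writeQueueSize: 1024
```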