I have made a gst-launch prototype script to encode and stream a high-bandwidth video stream. So far so good: performance and quality are great, with very little CPU usage. The hardware is great.
However, I must log in to a graphical session and run my script from within that session for it to work. I have not found a way to automate the start of the script: if I put it in a startup file (systemd or rc.local), the script runs but throws a bunch of cryptic errors.
For now I will mitigate this by installing x11vnc, performing a graphical login from my laptop, and launching the script manually. But what could be preventing this from working? Ideally this would be a headless system.
Please note that a physical screen MUST be connected to either HDMI or DP even for the x11vnc trick to work; if the system boots without a monitor connected, the gstreamer script does not work.
Hi,
Please share the gstreamer pipeline so we can reproduce the issue. X is enabled by default in each release and all tests are done with it. It is possible certain cases do not work without X. We may need to check whether your use case can be supported without X.
Keep in mind I do not use the local display. I want to capture from a v4l2 interface (in principle it could be a USB webcam) and stream it to YouTube. When I start this pipeline manually in the GUI, it works very well.
I do not intend to use the local HDMI at all. The video is encoded and then streamed over the internet.
Of course I tried to write a systemd script and, as I said in the top message, the script starts but the pipeline throws a bunch of errors. The only way to make it work is to 1) have a monitor physically attached to the Jetson Nano, and 2) log in manually and run the command from a local terminal.
Well, unfortunately my problem is not that I don't know how to start a script at boot, but that the script only works if I log in manually. I'll perform more tests.
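For reference, a minimal systemd unit of the kind I tried looks roughly like this (the unit name and script path here are illustrative, not my exact files):

```ini
# /etc/systemd/system/gst-stream.service  (illustrative name and path)
[Unit]
Description=GStreamer streaming pipeline
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/stream.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabled with `sudo systemctl enable --now gst-stream.service`; the service starts, but the pipeline inside it fails as described.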
It seems the hardware format converter is the culprit. If I use software conversion, which obviously uses the CPU to shuffle data around (about 70% of one CPU in my case), I can run the script without a graphical environment, just by having systemd or rc.local start it.
I have tried to look into the documentation and I have not found any explicit dependency between nvvidconv and a graphical login. Maybe the driver is not being initialized, or memory is not allocated?
Hi manoweb,
Please share the full script and detailed steps for r32.2.1/Jetson Nano so that we can reproduce the issue first. Certain plugins such as nveglglessink have a dependency on X, but nvvidconv should not. Probably some other reason triggers the failure.
I am now running 31.1.1. Unfortunately, having to upgrade the whole system all the time is resource-intensive, so I will not be able to update to 32.1.1 right away.
However, this is the full script that uses nvvidconv, that does NOT work unless I login manually:
#!/bin/bash
export GST_DEBUG=3
# test key
key="aacc-ccaa-aacc-ccaa"
while true
do
# The following does not seem to work unless one logs in manually
/usr/bin/gst-launch-1.0 v4l2src ! video/x-raw,width=1920,height=1080,framerate=30/1 \
  ! queue max-size-time=0 \
  ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' \
  ! omxh264enc bitrate=5000000 iframeinterval=60 \
  ! video/x-h264 ! h264parse ! video/x-h264 \
  ! queue ! flvmux name=mux \
  ! rtmpsink location="rtmp://a.rtmp.youtube.com/live2/$key" \
  alsasrc device="hw:CARD=Capture,DEV=0" ! audio/x-raw,rate=44100,channels=2 \
  ! queue ! voaacenc bitrate=128000 ! audio/mpeg ! aacparse \
  ! audio/mpeg, mpegversion=4 ! mux.
sleep 1
done
This is the version of the script that does work in a headless system, no problem:
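(The working script itself did not survive in this thread; a plausible reconstruction, consistent with the description further down, simply replaces the nvvidconv stage with software videoconvert. Treat this as a sketch, not the exact original.)

```shell
#!/bin/bash
export GST_DEBUG=3
# test key
key="aacc-ccaa-aacc-ccaa"
while true
do
# Software conversion via videoconvert; the rest mirrors the nvvidconv pipeline above
/usr/bin/gst-launch-1.0 v4l2src ! video/x-raw,width=1920,height=1080,framerate=30/1 ! queue max-size-time=0 ! videoconvert ! video/x-raw,format=I420 ! omxh264enc bitrate=5000000 iframeinterval=60 ! video/x-h264 ! h264parse ! video/x-h264 ! queue ! flvmux name=mux ! rtmpsink location="rtmp://a.rtmp.youtube.com/live2/$key" alsasrc device="hw:CARD=Capture,DEV=0" ! audio/x-raw,rate=44100,channels=2 ! queue ! voaacenc bitrate=128000 ! audio/mpeg ! aacparse ! audio/mpeg, mpegversion=4 ! mux.
sleep 1
done
```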
To test this, I have a v4l2 USB3 device that does video capture at 1920x1080.
If you do not care to set up a proper YouTube account to test this, just start a network usage utility (I use iptraf-ng) and check that there is a steady ~5-6 Mbps upstream to a YouTube server. If you only see 50-60 kbps, the stream is not working.
Note how the first pipeline uses nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' to convert the v4l2 output for the encoder, while the working pipeline uses a software approach with videoconvert.
In fact, I just tried: one needs a proper YouTube key instead of the mock key="aacc-ccaa-aacc-ccaa" I wrote on the forum. I am performing some tests with filesink to see whether it produces the same behavior.
Hi, https://developer.nvidia.com/embedded/linux-tegra-r311
That is a non-production release for Xavier; you should use another version for Jetson Nano. Please run 'head -1 /etc/nv_tegra_release' to get the exact version information.
By the way, in the script above one can substitute the rtmpsink element with either "fakesink" (but then there is no feedback) or "filesink location='file.bin'". With the latter, the "working" pipeline with software conversion shows a steady increase in the size of the output file, while the one that uses nvvidconv stays stuck at one size for minutes, then sometimes spits out something, then stops again. That is, unless I log in manually to the system with a physical monitor attached.
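To make the filesink comparison less eyeball-based, a small shell helper (my own sketch, not part of the original scripts) can sample the output file's size twice and report whether the pipeline is actually producing data:

```shell
# Hypothetical helper: sample a file's size twice, a few seconds apart,
# and report whether it grew (live pipeline) or not (stalled pipeline).
grew() {
  s1=$(stat -c %s "$1")
  sleep "$2"
  s2=$(stat -c %s "$1")
  if [ "$s2" -gt "$s1" ]; then echo growing; else echo stalled; fi
}

# Demo on a temp file that is appended to while we sample:
tmp=$(mktemp)
echo start > "$tmp"
( sleep 1; echo more >> "$tmp" ) &
grew "$tmp" 3
wait
rm -f "$tmp"
```

Run it as `grew file.bin 5` while the pipeline writes to file.bin; "stalled" corresponds to the stuck nvvidconv behavior described above.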