Thanks HN, much love
If you want AV1, you will not be able to use RTMP. The protocol is orphaned/deprecated, so avoid it if possible!
If I was building it this is what I would do, and my reasoning.
* For capture + encoding I would use OBS. You will want something that is easy for users to install and configure. Professors will also have lots of custom requirements when it comes to layout etc. It will be tempting to run an ffmpeg command directly, but I believe it will fall apart quickly.
* To get AV1 out of OBS I would use the FFmpeg output. I would have it send RTP, a protocol designed to carry video with sub-second latency; it is the same protocol WebRTC uses. You now have AV1 + low latency.
* Then for users to watch I would use WebRTC. That will allow them to watch in their web browser. Conceptually it will work like this: https://github.com/pion/webrtc/tree/master/examples/rtp-to-w... takes the RTP packets and puts them in the browser.
Lots of great projects exist that you could use for 'RTP -> WebRTC', such as https://galene.org/ and https://livekit.io/. I would suggest checking them all out!
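To make the middle step concrete, here is a sketch of the encode-and-send-RTP leg. This assumes your ffmpeg build includes the libaom-av1 encoder and that your WebRTC bridge (e.g. the Pion rtp-to-webrtc example) listens on UDP port 5004; both are assumptions, and the test source stands in for your real OBS/capture input.

```shell
#!/bin/sh
# Sketch: encode a synthetic test pattern to AV1 and send it as RTP
# to a local WebRTC bridge. Port 5004 and libaom-av1 availability are
# assumptions; a build with SVT-AV1 would swap in -c:v libsvtav1.
RTP_TARGET="rtp://127.0.0.1:5004"
set -- ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 \
    -c:v libaom-av1 -usage realtime -cpu-used 8 -b:v 2M \
    -f rtp "$RTP_TARGET"
echo "would run: $*"
# Uncomment to actually stream:
# "$@"
```

The `-usage realtime -cpu-used 8` pair trades quality for encode speed, which matters a lot for live AV1; without it libaom will not keep up in real time.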
If you have more questions/want to talk to people in the video space always happy to chat on https://pion.ly/slack :)
Another server is SRS [3], but IMO it is more difficult to use, lacks features, and the delay is about 1-3s; it requires far less bandwidth, though.
[1] https://github.com/AirenSoft/OvenMediaEngine [2] https://github.com/hashworks/simple-ome [3] https://github.com/ossrs/srs
I looked into a similar ffmpeg solution a couple of months ago and broadcast a libx264 video-only stream with ~1 second latency to an RTSP server: https://gist.github.com/andrewmackrodt/88c2233fb9cc4797ada93...
However, I found this solution too CPU-intensive, as I was unsure if/how to instruct x11grab to interact with the GPU directly. Perhaps if I had used my NVIDIA card for encoding it would have been more performant. However, OBS makes this entire process much easier.
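For what it's worth, offloading the encode is mostly a matter of swapping the codec: x11grab still copies frames from X, but the expensive H.264 encode moves to the GPU. A sketch, assuming an ffmpeg build with NVENC support, display :0, and a placeholder RTSP URL:

```shell
#!/bin/sh
# Sketch: capture the X11 display but encode on an NVIDIA GPU via
# h264_nvenc instead of libx264. Assumes ffmpeg was built with
# --enable-nvenc; the resolution and RTSP URL are placeholders.
set -- ffmpeg -video_size 1920x1080 -framerate 30 -f x11grab -i :0 \
    -c:v h264_nvenc -preset p1 -tune ll -b:v 4M \
    -f rtsp rtsp://localhost:8554/live
echo "would run: $*"
# Uncomment to actually stream:
# "$@"
```

`-preset p1` is NVENC's fastest preset and `-tune ll` is its low-latency tuning; on older ffmpeg builds the preset names differ (e.g. `llhq`), so check `ffmpeg -h encoder=h264_nvenc` for your version.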
https://github.com/tcarrio/ffmpeg-streamcast/blob/master/bas...
Also, that was seven years ago, and I eventually ended up moving to OBS instead of diving even further, but it was working at the time.
However, it won't give you any pointers on using AV1, and this is just the streaming source; you'll still need an RTSP server and clients to access it.
I don't think it uses ffmpeg, but it may have the features you are looking for.
My experiment was streaming via an IPFS transport layer.
fauxstream is a very thin wrapper around ffmpeg; it proved invaluable in providing a reference command I could iterate on to get my stream working.
https://github.com/rfht/fauxstream
edit: it is hard to tell what the OP wants, but it could be as simple (hah, the ffmpeg command line terrifies me) as:
stream caster: ffmpeg -video_size 1280x720 -f x11grab -i :0 -r 30 -c:v libx264 -vb 3500k -minrate 3500k -maxrate 3500k -bufsize 7000k -preset ultrafast -vf format=yuv420p -g 60 -keyint_min 30 -f flv - | nc -l 1234
stream viewer: nc stream_caster 1234 | ffplay -
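On the viewer side, a second or two can often be shaved off by telling ffplay not to buffer. A sketch; `stream_caster` is the same placeholder hostname as above, and the actual gain depends on the source:

```shell
#!/bin/sh
# Sketch: same nc-based viewer, but with ffplay's buffering reduced.
# -fflags nobuffer, -flags low_delay, and -probesize 32 (the minimum
# probe size) all cut startup and playback latency at the cost of
# less robust stream detection.
set -- ffplay -fflags nobuffer -flags low_delay -probesize 32 -
echo "would run: nc stream_caster 1234 | $*"
```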
If you're after very low latency, try MJPEG. Most modern codecs use temporal compression, so there will always be a lag of several frames.
Now, do you have a streaming server that the students will connect to? If not, you can put something simple together with nginx and its RTMP module.
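A minimal sketch of that setup, assuming the nginx-rtmp-module is compiled in; the application name "live" is an arbitrary placeholder:

```nginx
# Minimal nginx-rtmp relay: OBS/ffmpeg pushes to
# rtmp://server/live/<stream-key>, viewers pull the same URL.
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            record off;  # relay only, no recording to disk
        }
    }
}
```

Bear in mind this inherits RTMP's limitations mentioned upthread: no AV1, and typically a few seconds of latency.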
There are two different problems here:
* how to create content for a stream
* how to broadcast that stream to viewers
Which one are you trying to solve?