video streaming notes
- using glslViewer to play the video directly is buggy: setting the time with
    stream,ID,time,0
  buffers and then changes the playback speed. not good
- idea: use v4l2 to stream to /dev/video2 and read from there - test works!
ffmpeg -re -i video.mp4 -map 0:v -f v4l2 /dev/video2
- see https://jiafei427.wordpress.com/2019/04/15/v4l2loopback/
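- a minimal sketch of the loopback setup; the device number, card_label, and file name are placeholders, adjust to taste:

```shell
# load the loopback module; video_nr pins the device number so it comes up
# as /dev/video2, exclusive_caps=1 makes it look like a real camera to readers
sudo modprobe v4l2loopback video_nr=2 exclusive_caps=1 card_label="shader-in"

# stream the file into the loopback device; -re paces reading at realtime
ffmpeg -re -i video.mp4 -map 0:v -f v4l2 /dev/video2
```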
- if restarted, glslviewer picks it right up again -> success
- but can we control the playback?
- no.. mpv does not stream to v4l2, and vlc does not allow controls while streaming
- idea: use processing to output a window to v4l2?
- if we use processing to do that why not run the shader there as well?
- don't want to deal with on-the-fly reload and mapping the controls?
- but wouldn't mapping the controls be easier this way anyway?
- example v4l2 processing export [TODO test]: https://gist.github.com/UriShX/fc34825843c36849af2dbc49afa1d0a6a
- would need to build control for everything, meh
- idea: use OBS-Studio
- yeah that works. also might give some audio info
- test works!
prelim setup
- OBS-Studio
- plays with vlc video source
- streams through v4l2 with "start virtual camera"
- check which output it is with
v4l2-ctl --list-devices
- also collects audio from line in for loudness control
- qpwgraph
- routes line in to obs
- routes obs to processing
- processing
- collects audio in
- runs audio analysis and midi control
- sends osc to glslViewer
- glslViewer
- runs shader, listens to osc
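The audio part of the chain above (audio in -> loudness -> control value for osc) can be jotted down as a Python sketch; the dB floor and the 0..1 mapping are guesses, not what the actual sketch does, and the osc send itself is left out:

```python
import math

def rms_loudness(samples):
    """Root-mean-square of one block of audio samples (floats in -1..1)."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def to_control(loudness, floor_db=-60.0):
    """Map an RMS loudness to a 0..1 control value on a dB scale.

    floor_db is a placeholder noise floor: anything at or below it maps to 0,
    full scale (loudness 1.0, i.e. 0 dB) maps to 1. The resulting value would
    then go out as an osc message (e.g. to a uniform in glslViewer).
    """
    if loudness <= 0.0:
        return 0.0
    db = 20.0 * math.log10(loudness)
    return min(1.0, max(0.0, 1.0 - db / floor_db))
```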
This is, admittedly, bullshit. Running OBS should not be necessary, but it provides the most stable stream that I can control without programming my own controls (ffmpeg and gstreamer don't do realtime skipping, and vlc doesn't do v4l2 streaming). glslViewer might be silly here, but it allows adjusting the effect in realtime and provides includes for the shaders; no idea if Processing would support that. Also, doing the audio processing in glslViewer is silly, because then every friggin pixel would recalculate the audio - no thank you very much. Sending osc 30 times a second is overwhelming glslViewer though, so we might have to find another way for them to communicate. Processing can do shaders and can possibly read from v4l2 video and do recursive shaders (see lygia examples). But hot reloading would mean fixing and changing things would all happen in the same sketch, and if the audio processing needs work we can't do that. There's an idea how to do it over at https://github.com/processing/processing/issues/5648 though.
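One way around the 30/s osc flood, sketched in Python: only send when the value actually changed by more than a threshold, and never more often than a given interval. The rate and threshold here are made up; `send` would be the real osc call in practice:

```python
import time

class ThrottledSender:
    """Drop control updates that are too frequent or too small to matter.

    `send` is any callable taking the new value (e.g. an osc client's send
    bound to one address); min_interval and threshold are placeholder numbers.
    `clock` is injectable so the logic can be tested without real time.
    """
    def __init__(self, send, min_interval=0.1, threshold=0.02, clock=time.monotonic):
        self.send = send
        self.min_interval = min_interval
        self.threshold = threshold
        self.clock = clock
        self.last_value = None
        self.last_time = float("-inf")

    def update(self, value):
        """Forward `value` to `send` unless it is too soon or too similar."""
        now = self.clock()
        if now - self.last_time < self.min_interval:
            return False  # too soon after the last send, drop it
        if self.last_value is not None and abs(value - self.last_value) < self.threshold:
            return False  # barely changed, drop it
        self.send(value)
        self.last_value = value
        self.last_time = now
        return True
```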