vf_dnn_processing.c: add dnn backend openvino
We can try it with the srcnn model from the sr filter.

1) Get the srcnn.pb model file, see the sr filter.

2) Convert srcnn.pb into an OpenVINO model with the command:
python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape [1,960,1440,1] --keep_shape_ops
See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
srcnn.xml and srcnn.bin will appear in the current path; copy them to the directory where ffmpeg is.

I have also uploaded the model files at https://github.com/guoyejun/dnn_processing/tree/master/models

3) Run with the openvino backend:
ffmpeg -i input.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
(The input.jpg resolution is 720*480.)

Also copying the logs from my Skylake machine (4 CPUs), run locally with the TensorFlow backend and the OpenVINO backend, just for your information.

$ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.tf.mp4
…
frame= 343 fps=2.1 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.0706x
video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517637%
[aac @ 0x2f5db80] Qavg: 454.353

real    2m46.781s
user    9m48.590s
sys     0m55.290s

$ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.mp4
…
frame= 343 fps=4.0 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.137x
video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517640%
[aac @ 0x31a9040] Qavg: 454.353

real    1m25.882s
user    5m27.004s
sys     0m0.640s

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
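For reference, the arithmetic in the two runs above is consistent: both encode 343 frames, the TensorFlow run in about 166.8 s of wall time (343 / 166.8 ≈ 2.1 fps) and the OpenVINO run in about 85.9 s (343 / 85.9 ≈ 4.0 fps), i.e. roughly a 1.9x wall-clock speedup on the same input and machine.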
commit 9bcf2aa477 (parent ff37ebaf30)
@@ -9291,13 +9291,21 @@ TensorFlow backend. To enable this backend you
 need to install the TensorFlow for C library (see
 @url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with
 @code{--enable-libtensorflow}
+
+@item openvino
+OpenVINO backend. To enable this backend you
+need to build and install the OpenVINO for C library (see
+@url{https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md}) and configure FFmpeg with
+@code{--enable-libopenvino} (--extra-cflags=-I... --extra-ldflags=-L... might
+be needed if the header files and libraries are not installed into system path)
+
 @end table

 Default value is @samp{native}.

 @item model
 Set path to model file specifying network architecture and its parameters.
-Note that different backends use different file formats. TensorFlow and native
+Note that different backends use different file formats. TensorFlow, OpenVINO and native
 backend can load files for only its format.

 Native model file (.model) can be generated from TensorFlow model file (.pb) by using tools/python/convert.py
@@ -58,10 +58,13 @@ typedef struct DnnProcessingContext {
 #define OFFSET(x) offsetof(DnnProcessingContext, x)
 #define FLAGS AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM
 static const AVOption dnn_processing_options[] = {
-    { "dnn_backend", "DNN backend", OFFSET(backend_type), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS, "backend" },
+    { "dnn_backend", "DNN backend", OFFSET(backend_type), AV_OPT_TYPE_INT, { .i64 = 0 }, INT_MIN, INT_MAX, FLAGS, "backend" },
     { "native", "native backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = 0 }, 0, 0, FLAGS, "backend" },
 #if (CONFIG_LIBTENSORFLOW == 1)
     { "tensorflow", "tensorflow backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, 0, 0, FLAGS, "backend" },
 #endif
+#if (CONFIG_LIBOPENVINO == 1)
+    { "openvino", "openvino backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = 2 }, 0, 0, FLAGS, "backend" },
+#endif
     { "model", "path to model file", OFFSET(model_filename), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },
     { "input", "input name of the model", OFFSET(model_inputname), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },
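The widened INT_MIN/INT_MAX range matters because the named constants in the "backend" unit can now carry values outside the old 0..1 range (openvino is 2). The following standalone sketch, which is not part of the patch, shows how libavutil's AVOption unit/constant mechanism resolves dnn_backend=openvino to the integer 2. TestContext and test_class are invented for the demo; the option entries mirror the patched table, with FLAGS dropped since no filter context is involved.

/* Demo: named AVOption constants in a unit resolve to integer values.
 * Build (assuming pkg-config knows libavutil):
 *   gcc demo.c $(pkg-config --cflags --libs libavutil) -o demo
 */
#include <stddef.h>
#include <stdio.h>
#include <limits.h>
#include <libavutil/opt.h>
#include <libavutil/log.h>

typedef struct TestContext {
    const AVClass *class;   /* must be first, as the AVOption API expects */
    int backend_type;
} TestContext;

#define OFFSET(x) offsetof(TestContext, x)

static const AVOption test_options[] = {
    /* Wide range: the constants below may carry any value (2 for openvino),
     * which the old 0..1 range would have rejected. */
    { "dnn_backend", "DNN backend", OFFSET(backend_type), AV_OPT_TYPE_INT, { .i64 = 0 }, INT_MIN, INT_MAX, 0, "backend" },
    { "native",     "native backend flag",     0, AV_OPT_TYPE_CONST, { .i64 = 0 }, 0, 0, 0, "backend" },
    { "tensorflow", "tensorflow backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, 0, 0, 0, "backend" },
    { "openvino",   "openvino backend flag",   0, AV_OPT_TYPE_CONST, { .i64 = 2 }, 0, 0, 0, "backend" },
    { NULL }
};

static const AVClass test_class = {
    .class_name = "test",
    .item_name  = av_default_item_name,
    .option     = test_options,
    .version    = LIBAVUTIL_VERSION_INT,
};

int main(void)
{
    TestContext ctx = { .class = &test_class };

    av_opt_set_defaults(&ctx);
    /* "openvino" is looked up among the AV_OPT_TYPE_CONST entries of the
     * "backend" unit and its value (2) is written into backend_type. */
    if (av_opt_set(&ctx, "dnn_backend", "openvino", 0) < 0)
        return 1;
    printf("backend_type = %d\n", ctx.backend_type); /* prints 2 */
    av_opt_free(&ctx);
    return 0;
}

With the previous range of 0..1, the same av_opt_set() call would fail, because the resolved constant value 2 falls outside the declared range of the dnn_backend option.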